The purpose of this working document is to set out considerations relevant for libraries developing a strategic response to Artificial Intelligence.

The text is organised around developing a set of questions that prompt reflection and action (section 4). It is hoped that the document can support local decision making about AI.

Authorship: This working document for discussion was prepared by Andrew Cox, as convenor of the Artificial Intelligence SIG. Comments for further iterations of the document are invited (link to comment form – if you have difficulty accessing this form send comments to a.m.cox@sheffield.ac.uk).

A note on methodology

Initial ideas for the report were derived from an event run at the University of Sheffield in April 2023. An early version of this document as a working paper for comment was published on 4 June 2023. We would like to thank everyone who added thoughts and comments to this draft. We have also drawn in some data from a survey (N=111) linked to the Sheffield event and recirculated in July; respondents were a mix of HE and FE librarians and health librarians. As the audience was primarily in the UK this data should be understood as simply a snapshot of opinion in one context.

Version 1.1 20 November 2023

A pdf version can be accessed @ https://doi.org/10.15131/shef.data.24631293.v1

Section 1: Defining AI

Definitions of AI typically revolve around the idea of computers performing tasks which are ordinarily undertaken using human intelligence. The UNESCO definition emphasises that this is imitation of human understanding.

“Machines that imitate some features of human intelligence, such as perception, learning, reasoning, problem-solving, language interaction and creative work” (UNESCO, 2022: 9).

More expansively, UKRI (2021:4) define AI as

“a suite of technologies and tools that aim to reproduce or surpass abilities in computational systems that would require ‘intelligence’ if humans were to perform them. This could include the ability to learn and adapt; to sense, understand and interact; to reason and plan; to act autonomously; or even create. It enables us to use and make sense of data.”

The EC definition emphasises the importance of data.

“Simply put, AI is a collection of technologies that combine data, algorithms and computing power.” (European Commission, 2020: 2)

AI is not new. We are already rather familiar with many of its applications in auto-suggestion, spam filtering, plagiarism detection, audio transcription, text summarisation and translation. Many familiar features of search and recommendation use AI. More specifically in the library context, Text and Data Mining (TDM), and the application of machine learning to library and archive collections in the digital humanities can be seen as AI.

While controversial, AI has many beneficial uses in every area of human activity. More specifically, AI is positive for access to information and knowledge. For example, improving translation tools enhances access to material written in other languages. Improved summarisation also makes access to content easier.

The most powerful applications of AI for libraries are in “descriptive AI”, which can turn all kinds of collection material (photos, videos, sound, manuscripts) into machine readable data through techniques such as computer vision or speech to text, and provide description at scale for information retrieval (Cordell, 2020). Some libraries have special collections that could be made more accessible using these means; for others it may be more relevant to have access to an infrastructure around licensed or open content. Many technical challenges remain with digitisation and attempts to automate the description of historic collections, but there is already considerable experience of the issues, especially in the national library and archive community (Lee et al., 2023).
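
To make this concrete, the sketch below shows, in Python, one way a library might pilot descriptive AI over a folder of digitised photographs, using an off-the-shelf image captioning model to draft descriptions that cataloguers then review. This is a minimal sketch rather than a recommended workflow: the model, folder name and CSV output are illustrative assumptions, and generated captions would need human checking before entering any catalogue.

```python
# Minimal sketch of "descriptive AI": drafting captions for digitised images
# so that visual collections become searchable text. Model choice, paths and
# output format are illustrative assumptions; outputs need human review.
from pathlib import Path
import csv

from transformers import pipeline  # Hugging Face transformers library

# An off-the-shelf image-to-text (captioning) model; swap in whatever model
# your institution has evaluated and is licensed to use.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image_dir = Path("digitised_photos")          # hypothetical folder of scans
rows = []
for image_path in sorted(image_dir.glob("*.jpg")):
    result = captioner(str(image_path))       # e.g. [{"generated_text": "..."}]
    rows.append({"file": image_path.name, "draft_caption": result[0]["generated_text"]})

# Write draft captions out for cataloguers to review and correct.
with open("draft_captions.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["file", "draft_caption"])
    writer.writeheader()
    writer.writerows(rows)
```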

Although AI promises to enhance access to knowledge, there are serious ethical concerns in the areas of bias; privacy protection; explainability, transparency and accountability; and social impact (Jobin et al., 2019; AIAAIC, 2023). These apply strongly in the context of AI developed by Big Tech companies, but may be managed in the context of library specific applications of AI (Padilla, 2019).

The release of ChatGPT has led to a surge of interest in AI and also a re-evaluation of how it is defined and of its anticipated professional implications. Generative AI has shown a remarkable ability to write all genres and styles of text, write code and generate images in response to prompts. The underlying technologies can themselves be leveraged by libraries, e.g. Large Language Models like GPT can be trained with library selected data. The issues lie more with the commercial drivers that have shaped the development of tools such as ChatGPT. The informational and ethical issues around ChatGPT illustrate many of the issues posed by all AI, because it (IFLA AI SIG, 2023b):

  • makes biased statements, e.g. reproduces biased assumptions about gender and politics (Motoki et al., 2023; Deshpande et al, 2023)
  • “hallucinates” information which is inaccurate
  • fails to acknowledge its sources or even invents sources
  • threatens to accelerate the uncontrolled creation of content and can be used to create fake news, to manipulate and polarise public opinion, spread misinformation and undermine democracy, or even incite violence
  • may violate copyright by using text and data without permission (Dreben, 2023). Few LLM providers have made details of the training data they have used openly available
  • is unexplainable because it is not open about what data it is based on or how it works
  • threatens human jobs, e.g. journalists and those working in marketing
  • is available to people with money to subscribe, disadvantaging those without, and so deepening digital divisions
  • was developed by exploiting very low paid Kenyan workers to detoxify content, an instance of the dependence of AI on precarious, ghost labour (Perrigo, 2023)
  • has significant environmental impacts (Burruss 2020; Ludvigsen, 2022; Saenko, 2023)
  • reveals the disruptive power in the hands of Big Tech companies and the dizzying speed of change it seems to enable

The implication for the library world is that these issues increase the importance of AI literacy training, as distinct from applying AI to library work itself.

Section 2: Impact of AI on libraries

AI has the potential to have “wide and deep” impacts on library work.

From Table 1 below we can see AI impacting many library services, sometimes changing them fundamentally and in other cases making only marginal changes. It is logical to anticipate that libraries will adopt AI in ways which either align to existing roles, link strongly to user need or demand the least resource.

We have already stressed the relevance of descriptive AI to making library collections more accessible. AI is being used to provide initial metadata for items. It is likely to appear in search services and to be used to support some dimensions of systematic reviews (e.g. filtering of results).

As more and more scholars use AI techniques in their research, so the need to support data scientist communities will grow. Libraries can offer support in terms of data discovery, copyright issues, data management and data preservation.

AI is likely to change everyday knowledge work, e.g. through translation, summarisation and text generation. A proliferation of AI tools and apps can be applied to library professional work in particular. Tools such as ResearchRabbit, Scite, Elicit and OpenRead perform tasks to support literature reviewing. Generative AI has applications in library marketing because of its ability to adapt text to the needs of specific audiences.

AI’s ability to perform complex routine tasks accurately means that it is likely to be deployed in back-end library systems. An example is the use of RPA (Robotic Process Automation) to process bibliographic data.
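
As a simplified illustration of the kind of routine back-end processing involved, the sketch below batch-checks a file of MARC records and produces a worklist of records missing a title or ISBN. The file names are hypothetical, and a real RPA deployment would normally run inside a dedicated automation platform rather than as a standalone script; this only indicates the shape of the task.

```python
# A simplified stand-in for an RPA-style task: batch-checking bibliographic
# records and producing a worklist of those needing attention.
# File names are assumptions; uses the pymarc library for MARC handling.
from pymarc import MARCReader

problems = []
with open("catalogue_export.mrc", "rb") as fh:        # hypothetical MARC export
    for record in MARCReader(fh):
        if record is None:                            # skip unreadable records
            continue
        title_fields = record.get_fields("245")       # title statement
        isbn_fields = record.get_fields("020")        # ISBN
        issues = []
        if not title_fields or not title_fields[0].get_subfields("a"):
            issues.append("missing title")
        if not isbn_fields:
            issues.append("missing ISBN")
        if issues:
            control_fields = record.get_fields("001")
            control_no = control_fields[0].data if control_fields else "unknown"
            problems.append((control_no, "; ".join(issues)))

# Write a simple worklist for cataloguing staff to review.
with open("records_to_review.txt", "w", encoding="utf-8") as out:
    for control_no, issue in problems:
        out.write(f"{control_no}\t{issue}\n")
```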

Given the number of enquiries libraries receive, chatbots have been advocated for libraries for some time. This is increasingly plausible because of the decline in technical barriers to chatbot development. They could fulfil roles such as:

  • Answering routine queries
  • Collecting information from users
  • Supporting users through routine processes
  • Being a buddy to new students
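
As an indication of how the first of these roles might be prototyped, the sketch below implements a very small rule-based FAQ responder in Python. The questions, answers and keyword matching are illustrative assumptions only; a production chatbot would more likely use a vendor platform or a language model grounded in library-approved content.

```python
# A minimal rule-based FAQ responder, illustrating the "answering routine
# queries" role. The FAQ entries and matching rules are illustrative only.

FAQ = {
    "opening hours": "The library is open 08:00-22:00 on weekdays and 10:00-18:00 at weekends.",
    "renew loan": "You can renew loans from your library account page, unless another reader has requested the item.",
    "print": "Printing is available on level 2; send documents to the print queue and release them with your library card.",
    "interlibrary loan": "Request items we do not hold via the interlibrary loan form on the library website.",
}

def answer(query: str) -> str:
    """Return the answer whose topic keywords overlap most with the query, or a fallback."""
    query_words = set(query.lower().split())
    best_topic, best_score = None, 0
    for topic in FAQ:
        score = len(query_words & set(topic.split()))
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None:
        return "Sorry, I don't know that one. I'll pass your question to library staff."
    return FAQ[best_topic]

if __name__ == "__main__":
    print(answer("How do I renew a book loan?"))
    print(answer("What are your opening hours today?"))
```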

AI will be used to create smarter library spaces. Some libraries have developed physical robots to answer user queries. Robots have also been used to perform such functions as shelving and stocktaking. Some libraries have installed Automated Storage and Retrieval Systems (ASRS), which retrieve book stock on demand. This usually requires a major rebuilding programme.

For educational libraries, other educational uses of AI, such as creating adaptive learning content or chatbots to support the student experience, are somewhat relevant too (Jisc, 2023b).

Generative AI has shifted the focus of the debate because of its widespread use by users, bringing to the fore the need for staff and students to have some level of AI literacy (encompassing data and algorithmic literacies). This is a natural role for libraries, extending their promotion of information literacy and digital skills. AI literacy is the understanding of AI in any of its manifestations, involving the ability “to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (Long and Magerko, 2020).

It seems likely that AI literacy will be essential in the future workplace, although the exact nature of the skills needed to use and collaborate with AI is still emerging, and how this is conceived is likely to be specific to particular disciplines.

AI can also be applied to predicting patterns of user behaviour and so to informing decision making.
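
As a hedged illustration, the sketch below fits a simple model to hypothetical daily gate-count data to forecast visits, the kind of prediction that might inform space or staffing decisions. The file, column names and model choice are assumptions; a real service would need careful feature engineering, evaluation and attention to privacy.

```python
# A deliberately simple sketch of predicting library use from historic data,
# for illustration only. File name, columns and model are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical daily gate-count data: date, week_of_term, exam_period flag, visits.
df = pd.read_csv("gate_counts.csv", parse_dates=["date"])
df["day_of_week"] = df["date"].dt.dayofweek

features = ["day_of_week", "week_of_term", "exam_period"]
X, y = df[features], df["visits"]

# Hold out the most recent 20% of days as a test set (no shuffling of time series).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Mean absolute error (visits/day):", round(mean_absolute_error(y_test, predictions), 1))
```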

Table 1 AI impacts on library operations

AI application | Impacts
AI to make collections machine readable data and describe them at scale | Collections team, Special collections, Archives team
AI to enhance or create metadata | Metadata team
Discovery/retrieval, literature reviews | Library systems, liaison team
Supporting data scientist communities | Liaison team
AI generated text and images | Marketing team
Library or institutional chatbot | User services
AI in backend systems, e.g. RPA (Robotic Process Automation) | Library systems
Use of robots to give information to users | User services
Smart spaces | Facilities team
Use of robots to tidy shelves | Collections team
Supporting student use of AI tools | Academic services
Need for AI literacy (including data and algorithmic literacy) of users | Training team
Analysing and predicting user behaviour | Planning team

Results from the survey are suggestive of the level of planning/ development at the time of writing (N=111). AI literacy has rapidly moved to the fore.

Table 2 Planned, pilot and mature AI services in libraries

AI service | Planned | Pilot | Mature
Library specific chatbot | 22 (20%) | 12 (11%) | 7 (6%)
Institutional chatbot | 15 (14%) | 6 (5%) | 8 (7%)
Text and data mining support | 17 (15%) | 6 (5%) | 5 (5%)
Automation of systematic reviews | 14 (13%) | 3 (3%) | 1 (1%)
Knowledge discovery of collections | 25 (23%) | 3 (3%) | 7 (6%)
Supporting institutional data science community | 15 (14%) | 5 (5%) | 6 (5%)
Promoting AI (and data) literacy among users | 52 (47%) | 18 (16%) | 3 (3%)
Library user behaviour prediction | 11 (10%) | 2 (2%) | 1 (1%)

Which applications demand least resource and align most strongly with user need and existing library roles?

Which developments are most critical in reshaping the library role?

Which are most likely to happen and in what time scale?

How can AI technologies enhance our library services? What challenges can AI help address? What are the potential risks and ethical considerations, and how can we mitigate them?

How can we continuously monitor and stay up-to-date with emerging AI trends and advancements?

How can libraries teach users AI literacy effectively?

What are the key learning outcomes needed, and how do they vary by discipline?

How should AI literacy be integrated into existing IL, academic and digital literacy training?

How can material be updated to keep up with the changing nature of AI?

Section 3: Strategic context and a library SWOT

In a context of change and uncertainty, thinking and acting strategically is seen as increasingly important. Many institutions are placing greater emphasis on strategy, on envisioning a desired future state, and on planning to realise this vision. For libraries a key issue is to position themselves strongly in relation to wider institutional, sectoral and national priorities (Cox, 2021). This can be a form of passive alignment, seeking to demonstrate the library’s contribution to the organisational mission, or a more proactive stance, seeking to take a leadership role in certain areas.

Library responses to AI happen in the context of government policy and existing and emerging legal frameworks. From around 2019 onwards, many states have recognised AI as a strategic priority. According to an analysis of these policies by Papyshev and Yarime (2023), some strong common themes emerge, such as the need to:

  • Develop human capital
  • Apply AI ethically
  • Develop a research base
  • Regulate
  • Develop data infrastructure and policy.

One can immediately see information professionals playing a role in achieving many of these priorities, such as by educating citizens to help develop the skills for an AI literate workforce; by advocating for their unique perspective on the ethics of AI; by supporting researchers to develop the research base for AI; and by inputting on the design and use of a data infrastructure. If AI is a national priority, it seems that libraries have a significant role to play, alongside other actors.

While there are many common themes between them, the emphasis varies between national policies. Papyshev and Yarime (2023) suggest that they fall into three groups:

  • Development – where the state steers development of AI towards national goals. This kind of policy is found in China and Japan, and in Russia and some of the former communist bloc in Eastern Europe.
  • Control – where the focus is on state regulation and protecting society from the risks of AI. This is the approach taken by the EU, for example.
  • Promotion – where the emphasis is on innovation, especially in the private sector, and the state plays only a facilitating role. This is the emphasis in USA, UK and other countries including Australia, Ireland and India.

These categories seem to reflect persistent patterns in the political culture in these different countries. There has probably been a shift towards regulation internationally because of the controversy around ChatGPT. This could have radical implications for how AI is developed and used in the library sector.

Libraries may also need to respond to sectoral strategy, such as around culture or health. Existing legal frameworks are still relevant, such as that for the protection of IPR.

The AI strategies of the organisations within which libraries are embedded are obviously important. But to date AI appears to be rarely mentioned as such in university and academic library strategies (Huang et al., 2023).

How is the state’s stance towards AI likely to impact library uses?

What is the strategic stance of your institution and wider sector towards AI?

What hooks are there in existing strategies where AI may be relevant?

There can be themes in pre-existing institutional and library strategies that offer hooks for aligning AI related activity.

Table 3 Hooks in existing strategy example: university sector

Potential hooks in institutional strategy
Research excellence and impact
Teaching excellence
Equality, diversity and inclusion (EDI)
Sustainability
Mental health and wellbeing of students and staff
Civic role
Potential hooks in library strategy
User engagement and experience
Collections
Physical space
Collaboration
Information and digital literacies
Open Knowledge/Knowledge Equity, including decolonisation of knowledge

Where do we see our current strategic priorities leveraging AI?

Do we want to align to the institution or proactively shape the institutional response?

How do issues such as EDI and decolonisation play out within AI applications such as descriptive AI?

AI could be seen as the latest of a number of technologies that collectively offer digital transformation. Some authors refer to SMACIT technologies (social, mobile, analytics, cloud, and Internet of Things), but these could also include AI. The characteristic of these technologies is that they go beyond automating previous practices to enabling a fundamental rethinking of processes. “Digital transformation is the profound and accelerating transformation of business activities, processes, competencies, and models to fully leverage the changes and opportunities brought by digital technologies and their impact across society in a strategic and prioritized way” (Demirkan et al., 2016). It is about changing competencies, organisational culture and structures as much as it is about technology.

What are the current commitments of the institution around digital transformation? Does AI shift these priorities?

SWOT

The SWOT below evaluates the strategic position of libraries in general in relation to AI.

Strengths

  • AI often seems to be technology driven. In this context, knowledge and prioritisation of user need, which is central to library professional discourse, is an important corrective, aligning to the Institute for Ethical AI in Education (2021) recommendation that AI projects be driven by learner benefit.
  • Many aspects of descriptive AI have long been a focus of library work and advocacy, e.g. clearing the legal and technical obstacles to Text and Data Mining. As data science techniques are used across more disciplines, this knowledge will become increasingly relevant.
  • Since data is key to AI, knowledge of data management and data governance is highly relevant. Libraries have knowledge about such issues as searching for data, data description, data ownership and licensing, promotion of data sharing, and data preservation. These are all relevant to AI.
  • In the context of the informational and ethical weaknesses of generative AI, the trust that is invested in information from libraries is an important value. The library is a place committed to open sharing and building multi-disciplinary communities.
  • Understanding the biases of AI and promoting AI literacy has continuity with library work around data and algorithmic literacy.
  • Other strengths relate to the nature of the profession and the structures it has for sharing knowledge and mutual learning across the sector.
  • The values of the profession offer important reference points, for example the commitment to access to knowledge for all challenges the tendency of commercial services to intensify digital divides; the stress on impartiality speaks to bias in AI; and privacy protection is a key concern with AI.
  • As a female majority profession, librarians can play a special role in counter-balancing the impacts of gender bias in the wider IT industry.

Weaknesses

  • Many libraries have limited in-house technical development capacity. This means that it is hard for them to run AI based projects even as the technology becomes easier to apply off the shelf.
  • Libraries may lack ownership of collection or user data at the scale to justify the application of AI.
  • Commercial products are expensive and there is still a lack of off-the-shelf products for libraries.
  • The reason companies like Google, Microsoft and Amazon are dominating development of AI is that they have huge resources and have acquired big data about user behaviour. Libraries are likely to have smaller amounts of data, not necessarily of the same quality, and perhaps with issues around their inclusivity. Applying algorithms trained on modern data may be less successful with historic material.
  • AI developments driven by Big Tech often conflict with core library values such as protection of privacy, removal of bias, access for all and openness.
  • The levels of professional uncertainty and anxiety around AI remain high.
  • If new forms of collaboration are created around AI, there will be different professionalised understandings of the benefits and purposes and the usual challenges of communicating across professional boundaries.
  • Ultimately there is always competition between priorities, and the way libraries have developed has tended to focus on roles such as academic skills and pedagogy rather than technical innovation.

In our survey, participants perceived several barriers to using AI in their library (N varied slightly across the items).

Table 4 Perceived barriers to implementing AI

Barrier | Key barrier | Important barrier | Not important
Concerns about ethics, such as bias, intelligibility and confidentiality | 55 (50%) | 50 | 4
Lack of relevant technical skills among library staff | 53 (48%) | 48 | 9
More important priorities | 31 (46%) | 26 | 21
Cost of commercial products | 43 (41%) | 48 | 15
Culture change required among users | 33 (30%) | 53 | 23
IT own the agenda | 31 (28%) | 35 | 44
Lack of data/ data quality | 27 (25%) | 62 | 19
Value of AI unproven | 19 (17%) | 61 | 29
Lack of turnkey solutions | 15 (15%) | 56 | 28

Opportunities

  • AI tools are improving access to knowledge, through description of content, summarisation, translation and transcription.
  • Generative AI can be used in many professional tasks, such as drafting documentation, communications and policies
  • AI can be applied to certain types of routinised tasks, e.g. creating initial metadata records for material. There is the potential for it to take up routine and create more space for higher value, skilled work.
  • The information weaknesses of generative AI increase the demand for trusted information.
  • AI developments are creating opportunities for new forms of collaborations, such as with data scientists.
  • AI may be used to improve decision making and prediction in libraries.
  • Libraries may be able to influence how systems vendors incorporate AI into their products and how wider infrastructures are developed to incorporate AI
  • Libraries can influence the institutional approach to engaging with AI based on library principles such as openness, privacy and explainability
  • If there is a library vision of how AI can enhance access to knowledge, there is an opportunity to ensure that the technologies being developed deliver it.

Threats

  • Media coverage of AI, generating both hype and fear, creates an environment in which it is hard to make balanced decisions.
  • The speed of change, particularly initiated by ChatGPT, makes it difficult for institutions to respond in a timely way.
  • Change is driven by actors beyond influence or control.
  • The lack of a diverse workforce in the AI industry, linked also to the cultural associations between technology, rationality and masculinity, suggests that on balance AI is likely to negatively impact social equality, just at a time when EDI has been recognised as a more urgent priority. An excessive focus on technology detracts from the value placed on the caring dimensions of professional work.
  • Much of the policy and advocacy around AI stresses a productivity agenda, often linked to reduced employment, and potentially to intensification of demands on staff and increased surveillance.
  • The most direct threat is that the way that users find and use information is changing and this could make libraries seem less relevant. For example, the ChatGPT model of receiving a fully developed answer to a question expressed in natural language challenges the keyword/ search results model of finding information.

Table 5 Summary SWOT of libraries and AI

Strengths
Knowledge of user need
Data is key to AI
Previous experience with TDM, digital humanities, copyright
Trust in libraries as information sources
Professional knowledge sharing
Professional ethics, values and skills
Track record of successful collaboration and connecting different groups within the institution
Openness and cross disciplinary nature of the library
As a female majority profession given the lack of diversity in the AI industry
Opportunities
Improved access to knowledge/ collections: through content description, summarisation, translation and transcription
Completion of routine tasks with AI
Improved knowledge creation through generative AI
Demand for trusted information
Collaboration
Better-informed decision making
Higher-value work enabled
Influencing better products from vendors
Influencing rules of engagement with AI based on library values/principles
Having a vision for AI
Weaknesses
Limited technical development capacity of libraries
Cost of commercial products
Lack of off-the-shelf products for library context
Data quality issues, lack of data, limits on use of data, biased data, non-inclusive data
Differing understanding of issues and benefits within AI driven collaborations
Uncertainty, anxiety and lack of confidence in the sector about AI
Library and professional brand not associated with AI
Potential for AI to conflict with professional values (e.g. confidentiality, privacy, equal access)
Other pressing priorities, many more closely aligned to professional identity  
Threats
Emotion, hype and misinformation around AI
Speed of change, driven by exogenous actors
Ethical issues: bias, issues of privacy and confidentiality
Lack of diversity in the AI workforce
Risks attached to the productivity agenda driving AI strategy
New ways of accessing information change expectations about search etc    

What is the SWOT for our library?

Organisational capability

Mikalef and Gupta (2021: 2) have developed a model of organisational capability for AI, the ability of an organisation “to select, orchestrate, and leverage its AI-specific resources.” Rooted in the resource-based theory of the organisation, this differentiates three types of resource that make up AI capability: tangible resources, human resources and intangible resources (Table 6).

Table 6: Resources required for AI capability (Mikalef and Gupta 2021)

Tangible resources:
Data
Technology
Basic resources
Human resources:
Technical skills
Business skills
Intangible resources:
Inter-departmental coordination
Organizational change capacity
Risk proclivity

Tangible resources are data resources (like user data or collection data), having a suitable technical infrastructure, and access to “basic resources” such as money and time to invest in AI. Many libraries do have data in the form of both collections and user data. They may also have access to the necessary technical infrastructure to support AI. Funding is always a challenge, but the exciting potential of AI may make it possible to build a business case for funding.

Human resources combine both the technical skills to develop AI applications, and, equally important, the business skills to plan and deliver AI projects and implement AI as a service. Libraries may well have significant technical skills in their teams. They are used to delivering on technical projects. Given the changing technical landscape of the last few decades, there is also a huge amount of experience in libraries in implementing and promoting new systems. It is increasingly recognised that AI should be developed in participatory ways with stakeholders.

For Mikalef and Gupta (2021), intangible resources include the ability to coordinate activities between departments, the ability to manage organisational change, and willingness to take risks. These might be seen as leadership challenges. Delivering them may also imply structural reorganisation. Again, libraries often have capabilities here, especially in terms of coordination. So much organisational change has happened in the last few years that the ability to adapt with agility has again increased.

Mikalef and Gupta’s (2021) model could be used as a framework to evaluate whether a library (and its host organisation) has the capacity or readiness to develop and implement AI systems, especially descriptive AI.

Capabilities differ between sectors. National libraries, and some research libraries, have a proven track record in developing descriptive AI. Critically they have vast bodies of collection data that would benefit from AI to enable improved access. Given the benefits, they may be able to find the funds to support such projects. They can develop technical skills through proof of concept projects and develop the business skills through turning projects into services.

The case is less clear for smaller, less resourced libraries, particularly if they do not have unique collections requiring special treatment. They are more likely to license systems. This is not to dismiss the possibilities of using descriptive AI, but it is more likely to be achieved through collaboration. Here there may need to be longer term processes of capacity building, e.g. through training staff and proof of concept projects. While they may not have AI capability themselves, they may contribute in very significant ways to the AI capability of their wider organisation, such as the university, in the case of an academic library, or a health service, in the case of health libraries.

Promoting AI literacy has a place in promoting organisational or societal AI capabilities.

Section 4: Strategic responses to AI: Pros and cons

The strategic responses to AI could include one or a combination of the following broad approaches:

  1. Recruiting new staff with specialist AI skills
  2. Upskilling existing staff
  3. Engaging with users to see how they are using AI
  4. Studying sector best practice
  5. Running proof of concept projects
  6. Talking to the system suppliers and buying systems
  7. Aligning to what is happening in the institution
    1. Collaborating with other units
  8. Aligning to what is happening in the sector
    1. Collaborating with other libraries and organisations
  9. Adopting a wait and see stance

1. Recruiting new staff with AI skills

Recruiting AI-skilled staff is one option for libraries aiming to use AI. Building a skilled AI team could be critical to successfully tackling the complex technical (and implementation) challenges posed by AI. It would require a well-structured recruitment process to attract and retain suitable talent, given the current demand for AI skills and the relatively low pay in the sector. The ethos of the sector might attract workers. It remains unclear exactly what skillsets might be most useful, e.g. is it about technical or implementation skills?

What type of skills do we need to acquire through recruitment?

Where would such staff sit within the organisation?

Who should coordinate the library’s response to AI?

Pros: Data scientists and other AI specialists may be attracted to the ethos of your organisation
Cons: Data scientists and other AI specialists can command high salaries; it remains unclear what kind of skillsets are needed

2. Upskilling existing staff

Upskilling existing staff is a proactive strategy for keeping the workforce adaptable and competitive in a rapidly changing landscape. It benefits individual employees by enhancing their career prospects and strengthens the organisation by ensuring it has the talent and expertise needed to thrive. But it does place extra demands on staff who are often already hard pressed, especially with a complex topic like AI.

What types of technical, data related and business knowledge are needed?

What resources are there to support this learning?

How can we create space for staff to explore AI and learn relevant skills?

How do we ensure that staff continuously upskill in this area to keep up-to-date given the speed of change?

How can the efforts of diverse individuals be coordinated?

Some options:

  • Personal exploration of AI based productivity tools
  • Exploration of open source AI tools and apps
  • Reading and discussion groups
  • ‘Drip feeding’ AI into team meetings/conversations
  • Undertake job analysis exercise – a detailed look at individual and team workflow to identify potential opportunities, then look at relevant training to upskill
  • Training courses
  • Data related skill development

The IFLA AI SIG (2023a) published a list of 23 resources for getting up to speed on AI at the beginning of the year.

Pros: Cost effective approach; staff interest in developing new skills
Cons: Complexity of topic; competing priorities; lack of accredited courses or agreed syllabus

Our survey gave an impression of how respondents prioritised different types of skill development. This suggests that willingness to learn about AI basics and applications is combined with an emphasis on enduring professional skills. Core data scientist activities such as coding and statistics appear very low on the list.

Table 7 Survey views on the types of knowledge, skills and other attributes librarians most need to develop to apply AI to knowledge discovery (N=111)

Knowledge, skill or attribute | n | %
Open mindedness and willingness to learn | 69 | 62%
Knowledge of user behaviour and need | 54 | 49%
How to get best results from AI tools | 52 | 47%
General understanding of AI | 51 | 46%
Having a vision of the benefits | 46 | 41%
Professional ethics | 41 | 37%
IPR and copyright | 37 | 33%
Problem solving | 27 | 24%
Collaboration skills | 26 | 23%
Data management | 21 | 19%
Advocacy skills | 18 | 16%
Risk taking | 17 | 15%
Influencing skills | 17 | 15%
Collection management | 13 | 12%
Statistics | 9 | 8%
Co-production | 8 | 7%
Coding | 4 | 4%

3. Engaging with users to see how they are using AI

AI is evolving rapidly. Notably, ChatGPT is changing how users discover information and write, and so how they learn (Jisc, 2023a). It is critical to engage with our users to understand how it is impacting their information behaviour.

How are our different groups of users using AI?

How do we support all users to navigate the AI landscape and maximise the positive benefits of AI while using it ethically?

Pros: Aligns to librarians’ focus on serving user need and interest in user experience
Cons: Requires resource; individual user needs are complex and diverse

4. Studying best in sector practice

Studying sector best practices is about drawing insights from the experience of others to make well-informed, effective, and ethical decisions. Best practices serve as a valuable blueprint for achieving success. Learning about what has not worked can be as valuable as discovering what has been successful. However, at the present time there do not seem to be many real world use cases to draw on.

What are comparator institutions doing? What works and what does not?

What can we learn from how other related sectors are using AI (e.g. museums, galleries, archives etc.)?

What can we learn from how more distant sectors are using AI (e.g. health, retail, transportation etc.)?

Pros: Can be based on desk research and draw on professional networks
Cons: Best practice is only just emerging and often not fully documented; challenges of transferring learning between contexts

5. Running proof of concept projects

Running proof of concept projects is a valuable practice for minimising risks and maximising the chances of success in developing services, especially in areas like AI, where complex technologies and innovative concepts often require validation before full implementation. They can help claim a place at the table by demonstrating the relevance of the library. Project management is a particular skillset. Turning a proof of concept project into a service is a major challenge in itself.

What immediate projects are there that could explore the benefit at point of need?

How can projects then be developed into services?

Pros: Tests the issues in a real world context; minimises risk; builds skills incrementally
Cons: Resource demands; may be hard to manage expectations created in a pilot

6. Talking to system suppliers and evaluating systems

Engaging with system suppliers and purchasing or licensing technology solutions is a common approach taken by libraries. It effectively outsources development costs, technical skills and risk. Due diligence is crucial to making informed decisions that align with organizational goals and deliver value to operations. There is a need for collective effort to influence the development of the marketplace. Sharing evaluation checklists might be one way to promote this.

Can our system suppliers offer us suitable tools?

What is the key functionality we wish to have access to?

What would be the rules of engagement for key requirements from AI in relation to transparency, data sources, bias, privacy and ownership of user data?

Pros: Effectively outsources development cost and risk; effectively outsources the technical skills required
Cons: Cost; lack of control

7. Aligning to what is happening in the organisation: Collaborating within the organisation

Aligning the implementation of new tools (and related processes) with what is happening within the wider organisation and promoting collaboration are vital for successful technology adoption. A holistic approach that considers the organisation’s goals, people, processes, and culture will ensure the best result. At the same time, collaborations reduce control and create “political” and communication challenges.

Potential collaborators are:

  • IT services
  • Student facing services (if not integrated with the library)
  • Academic departments, especially computer science, data science and philosophy (ethics and society)
  • Emergent multidisciplinary networks of data scientists
  • Faculty deciding policy e.g. around AIED (AI in education) or AI in research

How are other departments in the institution using and responding to AI?

Who can we collaborate with internally to increase our capacity and influence?

How can we gain a seat at the table to assert the relevance of our knowledge and needs? Who are the key decision makers?

Pros: Share resources
Cons: Differing needs; institutional politics; library may not appear to be relevant

8. Collaborating with partners outside the organisation

Collaborating with peer institutions enables more informed decision making and reduces risk. But establishing collaborations requires an investment of time to build trust.

Turn to existing communities that are working in the AI and library (and wider GLAM) space:

AI4LAM

IFLA Special Interest Group on AI

CENL AI in Libraries Network Group

AEOLIAN network

There are, of course, many organisations beyond the library world that are important to work with.

Who can we collaborate with externally to increase our capacity and influence?

Pros: Maximizes the advantages of professional networking
Cons: Requires an investment of time to build trust

9. Adopting a wait and see stance

Given the many competing pressures on libraries, a wait and see stance conserves resources. While this approach can be beneficial in many scenarios, libraries should be mindful not to wait too long, as they risk falling behind and losing perceived relevance. Finding the right balance between cautious evaluation and timely adoption is crucial for making informed decisions about AI implementation.

Where does the library want to sit in the diffusion of AI innovation: from innovator, early adopter, early majority, to late majority or late adopter?

Pros: Conserves resources; learn from early adopters
Cons: Risk of being seen as irrelevant; loss of control and potential influence

Section 5: Three important strategies

Given the breadth of impact of AI, there could be many strategies for libraries. But we have picked out three that seem to be important today.

Strategy 1: Using library AI capabilities to model responsible and explainable applications of descriptive AI

Where they have large collections of unique content needing improved description for retrieval, libraries can apply descriptive AI to create exemplars of ethical, responsible and explainable AI in resistance to Big Tech’s offerings (Lee, 2023; Padilla et al., 2023). This can be achieved by following principles of good governance, such as:

  • Surfacing the provenance of collections, so that usage is informed by a full understanding of the nature of the source
  • Ensuring that the selection of collections to apply AI to is appropriate, taking into account technical and copyright issues, but also respecting issues of inclusivity, indigenous rights and decolonisation
  • Respecting the rights of those who are represented in collections and all other stakeholders
  • Appropriately rewarding/acknowledging volunteer and crowd workers
  • Respecting IPR issues, e.g. copyright in collections / licensing of content
  • Making services usable, accessible and explainable to intended users
  • Fully documenting the project to ensure explainability
  • Sharing code, training data, toolkits etc. as openly as possible
  • Evaluating projects from a sustainability perspective, including environmental impact

There remain many challenges with achieving this, such as how to:

  • Prioritise collections to apply AI to
  • Determine whether affordable off-the-shelf tools work for historic data in library collections
  • Solve conceptual challenges such as how to categorise images
  • Turn proof of concept projects into sustainable services

 

Strategy 2: Using librarians’ data competencies to enhance organisational AI capability

Not all libraries have collections needing the use of AI, but librarians’ data related expertise has high value for institutional applications of AI because today’s AI is data driven. This expertise can help support data scientists across the wider organisation within which the library sits, such as multidisciplinary communities of data scientists in an academic context, or analysts examining data within health services or government. Relevant activities include:

  • Finding data sources in complex information landscapes
  • Promoting the value of sharing, openness and interoperability for data
  • Explaining the importance of the provenance, validity and quality of data to understanding how it can be appropriately used
  • Explaining what data can and cannot be used for, according to copyright, IPR etc.
  • Describing data using standards, and the value of doing so
  • Storing, preserving (or destroying) data

All these practices align to professional knowledge of information governance and stewardship, but there is a need to translate this knowledge to the domain of data.

Strategy 3: Promoting AI literacy to enhance organisational and societal AI capabilities

The strategy most aligned to existing library practices and librarian identities, particularly in university, school and public libraries, is to take a lead role in promoting AI literacy. There is a widespread understanding that the public, as citizens and workers, need to understand the new technologies. Students, whatever discipline they are studying, need such knowledge for employability.

Librarians have already developed information literacy offerings, and many dimensions of AI literacy could be folded within these. They have also developed the pedagogic knowledge and skills needed.

AI literacy is likely to include the ability to identify when AI is being used; to appreciate the differences between narrow and general AI; to understand what types of problem AI is good at solving; to understand how machine learning models are trained. It would also include awareness of ethical issues such as bias, privacy, explainability and social impact.

Since AI is based on data, data literacy is recognised to be a component of AI literacy. Algorithmic literacy is a concept that has already been developed to describe awareness of how services such as search and recommendation are increasingly shaped by algorithms to personalise and adapt content, but also can limit the visibility of information and create filter bubble effects. More formally it has been defined as “being aware of the use of algorithms in online applications, platforms, and services, knowing how algorithms work, being able to critically evaluate algorithmic decision-making as well as having the skills to cope with or even influence algorithmic operations” (Dogruel et al, 2022: p.4). Extending algorithmic literacy beyond the context of search is relevant to AI literacy.

AI is complex and hard to explain. It has multiple applications and guises. It is based on hard to understand computational ideas and statistics. Often the outcomes of decisions made by AI are hard to understand even for its designers, because the machine learns patterns from data. While some images of AI lead us to expect to go to a service that is explicitly AI (as with ChatGPT), in fact it is often embedded in an infrastructure, so it is not easy to recognise or resist its workings. Indeed, it would be fair to say that Big Tech does not necessarily want how AI works to be known, because it is a commercial secret.

References

AIAAIC (2023). AI, Algorithmic, and Automation Incidents and Controversies Repository  https://www.aiaaic.org/

Cordell, R. (2020). Machine Learning + Libraries: A report on the state of the field. https://labs.loc.gov/static/labs/work/reports/Cordell-LOC-ML-report.pdf?

Cox, A. M. (2021). The impact of AI, machine learning, automation and robotics on the information professions: A report for CILIP. https://www.cilip.org.uk/general/custom.asp?page=researchreport

Cox, A. M. (2023). How artificial intelligence might change academic library work: Applying the competencies literature and the theory of the professions. Journal of the Association for Information Science and Technology, 74(3), 367-380.

Deshpande, A., Murahari, V., Rajpurohit, T., Kalyan, A., & Narasimhan, K. (2023). Toxicity in ChatGPT: Analyzing persona-assigned language models. arXiv preprint arXiv:2304.05335.

Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115-133.

EuropeanaTech AI in relation to Glams taskforce (2021). Report and recommendations. https://pro.europeana.eu/project/ai-in-relation-to-glams

Huang, Y., Cox, A. M., & Cox, J. (2023). Artificial Intelligence in academic library strategy in the United Kingdom and the Mainland of China. The Journal of Academic Librarianship, 49(6), 102772.

IFLA AI SIG (2023a). 23 resources to get up to speed on AI in 2023 – selected by the IFLA Artificial Intelligence SIG https://www.ifla.org/g/ai/23-resources-to-get-up-to-speed-on-ai-in-2023/

IFLA AI SIG (2023b). Generative AI for library and information professionals (draft), https://www.ifla.org/generative-ai/

Jisc (2021). A pathway towards responsible, ethical AI. https://beta.jisc.ac.uk/reports/a-pathway-towards-responsible-ethical-ai

Jisc (2023a) Student perceptions of generative AI, https://beta.jisc.ac.uk/reports/student-perceptions-of-generative-ai/

Jisc (2023b) Artificial intelligence (AI) in tertiary education. 3rd edition. https://beta.jisc.ac.uk/reports/artificial-intelligence-in-tertiary-education

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature machine intelligence, 1(9), 389-399.

Lee, B. C. G. (2023). The “Collections as ML Data” checklist for machine learning and cultural heritage. Journal of the Association for Information Science and Technology.

Long, D., & Magerko, B. (2020, April). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1-16).

Ludvigsen, K. (2022). The carbon footprint of ChatGPT. Last updated December 21, 2022. https://towardsdatascience.com/the-carbon-footprint-of-chatgpt-66932314627d

Motoki, F., Pinho Neto, V., & Rodrigues, V. (2023). More human than human: Measuring ChatGPT political bias. Available at SSRN 4372349.

Padilla, T. (2019). Responsible operations: Data science, machine learning, and AI in libraries. OCLC. https://doi.org/10.25333/xk7z-9g97

Padilla, T., Scates Kettler, H., Varner, S., & Shorish, Y. (2023). Vancouver Statement on Collections as Data. Zenodo. https://doi.org/10.5281/zenodo.8341519

Perrigo, B. (2023). “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/

UKRI. (2021). Transforming our world with AI. https://www.ukri.org/wp-content/uploads/2021/02/UKRI-120221-TransformingOurWorldWithAI.pdf