
Strategies for using AI for associations responsibly

Artificial intelligence (AI) is now being used across many industries in various ways, from drafting emails to writing code. Breakthroughs in the science behind AI have enabled a remarkable surge of new capabilities and use cases for associations and nonprofits, as well as in the for-profit space. 

Fíonta co-founder Dr. Lisa Rau partnered with US Transaction Corp, a boutique payment processing firm that provides personalized services to its association clients, on an episode of its UST Education Recorded Webinar series. A computer scientist with a Ph.D. in Artificial Intelligence, Dr. Rau explains generative AI, how associations can leverage it, and what steps they can take to protect against its risks.

In this guide, we’ll explore these highlights from the webinar, which you can listen to here. Let’s get started.

What is a Generative Pre-trained Transformer (GPT)?

Dr. Rau refers to Generative Pre-trained Transformers (GPT) as “a major scientific breakthrough,” describing generative AI as “very large models that are trained on extremely large sets of text or corpora that are used to generate output, which is content.” 

These models are generative, in that they generate output. They are pre-trained, in that they build on an underlying, previously trained model. They are transformers, in that they transform the input (or “prompt”) into the generated output. While the underlying neural network technology has been around for decades, advances in hardware mean that learning computations that would have taken a week in 2012 now take about a minute. Put simply, the core algorithmic advances include the ability to perform computations in parallel and the ability to draw on context from anywhere in a data set, not just from “nearby” text.
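To make the “generative, pre-trained, transformer” idea concrete, here is a minimal sketch that loads a small, publicly available pre-trained model and generates text from a prompt. It assumes the Hugging Face transformers library and the GPT-2 checkpoint, which are illustrative choices; production chatbots run on far larger models.

```python
# Minimal sketch: a pre-trained transformer generating text from a prompt.
# Assumes the Hugging Face "transformers" library and the small, publicly
# available GPT-2 checkpoint; real products use much larger models.
from transformers import pipeline

# Load a previously trained generative model (the "pre-trained" in GPT).
generator = pipeline("text-generation", model="gpt2")

# The prompt is transformed into generated output (the "generative" and
# "transformer" parts of the name).
prompt = "Membership renewal reminders should"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```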

These generative AI models can create text, generate speech and songs, create diagrams and images, and write computer code. Underneath GPTs are large language models (LLMs), which continue to grow in size. While the largest model in 2019 had 330 million parameters (the weights on nodes inside the model), the current largest models have 1.76 trillion parameters.

Products like ChatGPT by OpenAI, a chatbot, are applications trained on specific GPT language models (e.g., GPT-3 or GPT-4). In simpler terms, GPTs are the “brains” that power these apps. 

What are some ways to leverage AI for associations?

As users innovate and technology evolves, AI-based tools have been created and customized for particular uses. Otter AI, for example, was created to generate follow-up emails and meeting summaries, while Whisper is used for automatic speech recognition. These tools allow associations to choose solutions that align with their needs, and there are many purpose-built tools, with more being introduced regularly.

To increase organizational impact, Dr. Rau recommends considering the following uses of AI, which have already seen some success:

Personalizing communications

AI systems can support associations’ interactions with members by using existing member data to automate and personalize experiences. Renewal notices can take the length of membership into account, tailoring different messages to long-term members than to newer ones. For events, these systems can highlight sessions specifically matched to a member’s interests. Each message is unique and its content dynamic, with personalization convincing enough that it is hard to believe it was not written entirely by a human. These tools can create dynamic content, segment audiences, and target messages, and tone can vary based on the member’s role in their organization.
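As an illustration of this pattern, here is a minimal sketch in which member data from an AMS or CRM is folded into a prompt so the generated renewal notice reflects membership length, interests, and role. The member fields and the “gpt-4o” model name are assumptions, and the exact client interface depends on the provider and library version; a human should still review the output before it is sent.

```python
# Illustrative sketch of prompt-based personalization: member data is folded
# into the prompt so the generated renewal notice is tailored to the member.
# The member fields and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

member = {
    "name": "Jordan Lee",
    "years_of_membership": 12,
    "interests": ["advocacy", "workforce development"],
    "role": "Executive Director",
}

prompt = (
    f"Write a short membership renewal notice for {member['name']}, "
    f"a member of {member['years_of_membership']} years whose interests include "
    f"{', '.join(member['interests'])}. Match the tone to their role as "
    f"{member['role']}, and thank them for their long-term support."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```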

Other examples of using AI to personalize communications include:

  • Creating personalized product or merchandise recommendations members may be interested in. With access to a member’s history of purchases, sessions attended in training or meetings, data on “members who bought this, frequently bought that,” and the mission of their organization, recommendations for future purchases are highly relevant and more likely to be of interest.
  • Suggesting conference sessions members may enjoy based on past interests, activities, certifications, histories of attendance, and so forth. 
  • Developing curated learning paths according to members’ interests and career trajectories.

Streamlining data analytics and visualization

Associations gather and store large amounts of member and event data, feedback, legal documentation, and more. Associations can use generative AI to summarize documents and ask the system to present the data in new ways. 

These are some of the ways associations can use AI to understand data better:

  • Create clusters or themes of data (e.g., grouping presentations for a conference by theme)
  • Summarize documents to get a high-level overview of the main points
  • Automatically generate data visualizations, like pie charts or bar graphs, based on association data
  • Perform sentiment analysis on news coverage, social media, and/or collected survey data
  • Create data taxonomies by automatically clustering free-text inputs into the major “themes” (see the sketch after this list). This type of task has historically been tedious (reading through hundreds or thousands of free-text comments, for example) and hard for humans to do consistently.
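To show the taxonomy idea in miniature, the sketch below groups free-text survey comments into themes. It deliberately uses scikit-learn’s TF-IDF features and k-means rather than an LLM so it stays self-contained; many AI tools perform the same clustering step on model-generated embeddings instead. The comments and the choice of three clusters are illustrative assumptions.

```python
# Simplified sketch of clustering free-text comments into themes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "The keynote speaker was inspiring and well prepared.",
    "Registration took far too long on the first morning.",
    "More networking breaks between sessions, please.",
    "Loved the keynote, but the room was overcrowded.",
    "The check-in line moved slowly and signage was confusing.",
    "Great chances to meet peers at the evening reception.",
]

# Convert comments to TF-IDF vectors, then cluster them into themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(vectors)

for theme in range(3):
    print(f"Theme {theme}:")
    for comment, label in zip(comments, labels):
        if label == theme:
            print("  -", comment)
```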

With AI tools aiding in data analytics and visualization, associations can save time and effort and generate more accurate and consistent analyses. In some cases, these tools can perform tasks that would tax even the most dedicated human given large amounts of data. However, as with all output from an AI system, analyses need to be validated by a human; indeed, all generated text should be fact-checked to ensure accuracy.

Generating drafts

Using a simple prompt, such as, “I am responsible for decreasing costs in my association. Draft me a policy to increase the energy efficiency of my association,” generative AI can produce a draft based on policies used by other associations and likely areas of energy consumption. These systems can also generate draft product descriptions, write marketing materials, develop copy for social media posts, and much more.
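Here is a brief sketch of that drafting prompt, using the same assumed chat-completions interface as the personalization example above. The model name and system message are illustrative, and the resulting draft is a starting point for staff to review and fact-check, not a finished policy.

```python
# Sketch of the drafting prompt described above; the model name and system
# message are assumptions, and the output still requires human review.
from openai import OpenAI

client = OpenAI()

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You help association staff draft internal policies.",
        },
        {
            "role": "user",
            "content": (
                "I am responsible for decreasing costs in my association. "
                "Draft me a policy to increase the energy efficiency of my association."
            ),
        },
    ],
)

print(draft.choices[0].message.content)
```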

AI systems also serve as robust editors and translators. Text can quickly be translated into another language or tailored to a different reading level. Associations can also prompt the system to remove complicated jargon from written materials and correct spelling and grammar errors, making these systems helpful tools for producing clearer, more accessible writing. It is safe to say that any task that involves creating text can be considered for augmentation by an AI.

What are the risks of using AI for associations?

When using AI, there are important concerns regarding privacy and ethics. In the webinar, Dr. Rau discusses certain risks associations should be aware of before using generative AI, including:

  • Leaks and disclosure of proprietary information: Privacy assurances for AI services vary from tool to tool. New ways to cause a secure tool to reveal privileged information continue to arise. There is no guarantee that the data users provide will remain private, which could result in the infringement of intellectual property and leaks of members’ personal data and proprietary information. The risks need to be assessed, and all tools need to be considered from this perspective. 
  • Cost of implementing new systems: AI systems are changing quickly, meaning that developments from only months ago could now be outdated. Investing heavily into a particular application of the technology could be problematic or suboptimal if that application becomes overtaken by newer capabilities.
  • Not all information is accurate: Generative AI is not infallible, and it can make mistakes and even make up information (“hallucinate”). For example, while a model may have access to the heights of Mount Everest and the Burj Khalifa skyscraper, it may not have the numerical reasoning capabilities to determine which is taller. A human must be looped into the process to proof, edit, and fact-check any content generated by these systems. GPT systems have been known to make up quotes and ascribe them to specific individuals, as another example. 
  • Other ethical concerns like discrimination: Because AI models like ChatGPT are trained on such large datasets, they can encode the bias present within those data sets. Using these tools can perpetuate those biases, reinforcing social stereotypes during processes like hiring.

Most in the field believe that GPTs are, in effect, “stochastic parrots” because statistics and math are all that guide AI models and the systems do not “understand” anything. Dr. Rau explains they are “just mimicking and keying off what they’re hearing” in the same way a parrot may sound uncannily like a human but without any understanding of the meaning of the words it is saying.

To prevent these risks from violating association values or becoming costly legal issues, it’s important to create a policy outlining the organization’s guidelines for using generative AI.

How to implement AI responsibly

Creating policies for using generative AI is a first step for any association before deploying capabilities, internally or externally. One resource Dr. Rau recommends is the Society for Human Resource Management’s article on creating ChatGPT policies, which covers how to monitor usage of these tools, encourage innovation, ensure they are used with appropriate data, and protect proprietary information.

Ideally, associations should include the following sections in their policy:

  • Prohibited uses
  • Acceptable uses
  • How to remain compliant with association policies
  • Regulations around using ChatGPT for personal reasons while at work
  • Requirements for transparency when content was generated partly by an AI
  • Any other organization-specific clarifications

Associations should brainstorm with various teams and departments to uncover use cases related to their goals. For example, these goals may be internal, like making your finance team more efficient, or external, like personalizing member interactions.

Also, rules and safeguards should be in place to protect the association and its members. At a minimum, these should include:

  • Requiring a human to fact-check AI content
  • Protecting member privacy (e.g., informing them that they may be interacting with a bot in association terms of service)
  • Preventing deception and disinformation through these tools by limiting or prohibiting the entry of personal data and intellectual property
  • Ensuring the ability to remove data from the training model to protect privacy and security
  • Ensuring the ability to quickly deactivate an AI-enabled system if issues arise

These basic precautions provide a good starting point for associations that want to implement generative AI safely and responsibly.

Wrapping up

Generative AI can save organizations countless hours of work daily by automating processes, summarizing documents, visualizing datasets, and more. AI systems can perform tasks difficult for humans to accomplish and produce consistent results. As Dr. Rau explains, people are discovering novel applications and new tools based on the underlying technology every day, expanding how associations can use AI.

Fíonta’s experts help associations uncover the best ways to leverage AI to improve efficiency, drive revenue, and strengthen member relationships. We create tailored solutions that enable associations to implement and customize new technology based on organizational needs. 

Schedule a consultation with Fíonta’s AI experts—transform the way your association operates and interacts. The future of AI in associations is here, and it’s within your reach.