
City of Seattle Adopts Generative AI Policy

In April, the Seattle Information Technology department issued an interim policy to guide City of Seattle employees in reducing the risks and mitigating the potential harms of generative AI systems. Earlier this week, Seattle’s actions on this topic were featured by Government Technology in the article “Boston, Seattle Issue Interim Generative AI Guidance.” Seattle Mayor Bruce Harrell has invited the City’s interim Chief Technology Officer, Jim Loter, to present on this topic at the United States Conference of Mayors annual meeting in Columbus, Ohio, on June 3.

What is generative AI?

Generative artificial intelligence, or generative AI, uses a combination of computer algorithms and large volumes of data to create new digital content, such as text, images, video, or code. Generative AI systems are “trained” on enormous amounts of human-generated data (the resulting model, in the case of text, is known as a “large language model”), learn patterns in that data, and generate new output by predicting what a human would likely say or produce given a certain input, or “prompt.”
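To make the “prompt in, prediction out” idea concrete, here is a minimal illustrative sketch in Python. It assumes the open-source Hugging Face transformers library and the small, freely available GPT-2 model; both are chosen purely for illustration and are not tools the City uses or endorses.

```python
# Illustrative sketch only: generating text from a prompt with a small,
# pre-trained language model (GPT-2) via the Hugging Face transformers library.
from transformers import pipeline

# The model was already "trained" elsewhere on large amounts of human-written
# text; here we only ask it to predict what might plausibly come next.
generator = pipeline("text-generation", model="gpt2")

prompt = "Public libraries are important because"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The output is the prompt continued with the model's predicted text.
print(result[0]["generated_text"])
```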

In 2016, AI researcher Arend Hintze created a useful framework for understanding different kinds of AI systems by classifying them into 4 types.

Type I AIs are known as “Reactive Machines.” This type of AI has no memory and tends to be built for specific purposes. A well-known example of a Type I AI system is IBM’s “Deep Blue,” the chess-playing supercomputer that famously beat grandmaster Garry Kasparov in 1997. Type I systems are relatively common and have become very reliable over the years. Autonomous vehicles, for example, rely mainly on Type I AI systems.

Type II AI is termed “Limited Memory.” These systems can add data and analysis to their calculations and revise those calculations accordingly, which forms a kind of “memory” function. They can adjust and adapt to changing circumstances and use new inputs and their own outputs to readjust how they respond to future situations. Type II systems are used for speech recognition, image classification, and various data analytics applications, such as predictive behavior modeling and threat detection.
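As a deliberately simplified sketch of the “limited memory” idea, the hypothetical detector below judges each new reading against what it has seen so far and then folds that reading into its running estimate, so future judgments adapt. The class name, numbers, and threshold are illustrative assumptions, not a description of any real system the City uses.

```python
# Simplified illustration of a "limited memory" system: each new observation
# is scored against past experience, then used to revise the internal estimate.
class LimitedMemoryDetector:
    def __init__(self, learning_rate=0.1, threshold=3.0):
        self.estimate = None            # the system's evolving "memory"
        self.learning_rate = learning_rate
        self.threshold = threshold

    def observe(self, value):
        """Score the new value against past readings, then learn from it."""
        if self.estimate is None:
            self.estimate = value
            return "baseline"
        # Judge the new input using what has been seen so far.
        flagged = abs(value - self.estimate) > self.threshold
        # Revise the estimate so future judgments reflect the new data.
        self.estimate += self.learning_rate * (value - self.estimate)
        return "flagged" if flagged else "normal"

detector = LimitedMemoryDetector()
for reading in [10.0, 10.4, 9.8, 18.5, 10.1]:
    print(reading, detector.observe(reading))
```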

Types III (“Theory of Mind”) and IV (“Self-awareness”), in Hintze’s model, start to approach consciousness. The descriptions of these types begin to resemble the kinds of AI systems we see in science fiction movies – they begin to form their own impressions of the world and experience thoughts and emotions. A so-called “self-aware” system would be able to form a representation of itself. Hintze suggests that these types of AI systems don’t exist yet.

Within this framework, generative AI systems could be considered a special case of Type II systems: they are not merely reactive but can adjust and modify their responses based on changing circumstances. However, they are not “thinking machines”; they do not understand the world as humans do, nor do they possess self-awareness. They are designed to act on data, look for patterns, and piece together responses based on that pattern recognition. They are judged on their ability to produce content that appears reasonably human-like.

What is the City of Seattle’s interim policy?

Seattle IT develops and delivers solutions that help City departments efficiently and effectively deliver equitable and responsive services to the public. We also recognize that we are entrusted with responsibly stewarding the public’s data and protecting our IT systems. We see the emergence of generative AI as offering opportunities that can help us deliver our services, but also as posing risks that could threaten those responsibilities.

Because the generative AI field is emergent and rapidly evolving, the potential policy impacts and risks to the City are not fully understood. The use of generative AI systems within the City of Seattle can have unanticipated and unmitigated impacts. Our interim policy is intended to minimize issues that may arise from the use of this technology while additional research and analysis are conducted.

Our policy directs City employees to obtain permission from Seattle IT before accessing or acquiring a generative AI product. This is a standard operating practice in the City for all new or non-standard technology.

We further require employees to validate that the output of generative AI systems is accurate, properly attributed, free of someone else’s intellectual property, and free of unintended or undesirable instances of bias and potentially offensive or harmful material.

Next, employees are prohibited from inputting sensitive or confidential data, including personally identifiable data about members of the public, into these systems.

Finally, the policy reminds City employees that most City work product is subject to the State of Washington’s Public Records Act, and that they must work within our administrative processes and policies to retain relevant material for potential public disclosure.

How is the City evaluating further policy developments?

We have formed a Policy Advisory Team composed of our partners from the Community Technology Advisory Board and the University of Washington’s Tech Policy Lab. The advisory team will make recommendations to the CTO this fall to inform the City’s formal policy on generative AI.

Sarah Carrier, the City’s Privacy Program Manager, is heading up this collaborative effort. Sarah says:

“The City needs to develop a deep understanding of generative AI and its potential benefits as well as risks that may arise from leveraging these powerful, rapidly evolving tools. Developing a thoughtful approach to evaluating and addressing these risks – through the work the Advisory Team is taking on – will enable the City to put appropriate guardrails in place to ensure not only responsible use of these systems, but also support the City’s Race and Social Justice goals, and ultimately drive transparency and accountability to the public we serve.”

Interim CTO Loter is also concerned about the transparency of AI products, and the data and algorithms they use, and wants cities to gain more visibility into the entire product stack and supply chain for these technologies. In the Government Technology article, Loter said:

“As the city considers risks and its ability to minimize them, it needs to focus on its vendor relationships. Governments often use enterprise software from major providers, some of which are now starting to license tools from generative AI companies and infuse those into their software suites. That means that if government employees keep using the tools they’re used to, they’ll now be interacting with this emerging technology.”

What else is the City of Seattle doing on this topic?

Seattle IT is working with our City stakeholders to develop programs that give members of our community and our municipal and county partners an opportunity to learn more about this technology and to share feedback with us. More information about these programs will be coming soon.