The Austin City Council has directed the city manager to establish guidelines and procedures centered on how employees can use Artificial Intelligence (AI) ethically and transparently.
Big data and AI are changing how information is collected and managed, raising concerns about racial and cultural biases in the technology’s use, city officials said in a recently advanced resolution. The city has already begun putting AI to everyday use. The Austin Police Department (APD) began using AI to take non-emergency reports in March 2023, and in October 2023 the City Council approved a five-year contract to use AI to help identify potential wildfires.
Austin’s city manager will assemble an advisory committee composed of digital privacy- and AI-focused academics, nonprofits and industry experts to help chart the city’s way forward. The committee will provide recommendations for privacy and information-technology protection procedures, as well as guidelines for city employees interacting with AI systems. According to the resolution, the guidelines will address the following principles:
Innovation and collaboration – The guidelines will encourage collaboration between city employees and AI systems to improve the delivery of city services. The focus will be on ensuring AI systems play a supplemental role, leaving final decisions to city employees.
Data privacy and security – The city will inventory and evaluate AI systems to maintain the confidentiality, integrity and availability of data and to minimize security risks.
Transparency – The city will evaluate AI systems to ensure the development, use and deployment of the technology complies with the law. The process will require documenting and publicly disclosing each system’s use, purpose, collected information, location and impact.
Explainability and interpretability – The principles will ensure that AI system outputs and models are communicated in clear language.
Validity and reliability – The city will ensure that the AI system’s performance is reliable and accurate.
Bias and harm reduction – The city will evaluate AI systems through an equity lens to ensure all usage aligns with the city’s anti-racist and anti-discriminatory commitments. The principle seeks to minimize impacts such as discrimination and unintended harms arising from data, human or algorithmic bias.
The advisory committee will meet at least twice with the city manager, who will report to the council on AI guidelines, accountability strategies and workforce considerations by May 28, 2024. The city manager will also create accountability and oversight strategies and procedures surrounding AI’s acquisition and use.
Further, the city manager will establish processes for investigating and addressing errors stemming from using AI. The city manager will also develop a comprehensive plan for managing AI’s impact on the workforce. The plan will prioritize job protection, training, city employee support and best practices for mitigating AI harm while maximizing its utility.
Austin’s move toward creating clear requirements on using AI is part of a broader national trend of exploring or establishing similar guidelines. Seattle, New York, San Jose and Santa Cruz County, California, have all issued independent policies or guidelines for how employees use AI on the job. It also follows President Joe Biden’s October 2023 executive order establishing standards for AI safety and security to protect privacy and safeguard equity and civil rights.
The post Austin to establish AI-use guidelines for city employees appeared first on Government Market News.