Blog

  • This Week in Government Technology – May 12th-19th, 2024

    As the world of artificial intelligence (AI) continues to expand, this week we saw several instances of government leadership prioritizing funding, regulatory frameworks, and emerging technology pilots.

    $32 Billion Funding Proposal

    U.S. Senate Majority Leader Charles E. Schumer and his AI working group have endorsed an independent commission’s recommendation for the federal government to allocate at least $32 billion annually to non-defense AI systems. Initially put forth by the National Security Commission on Artificial Intelligence, the proposal is supported by a bipartisan group including Senators Martin Heinrich, Todd Young, and Mike Rounds. Alongside the funding proposal, the group released a road map for AI development, addressing privacy, data security, and emergency appropriations for research agencies. While Schumer aims to collaborate with House Speaker Mike Johnson, the plan has faced criticism for prioritizing innovation over addressing discrimination and civil rights concerns.

    Indiana Implements AI to Support Job Opportunities

    The Indiana Department of Workforce Development (DWD) has launched Pivot, an AI-powered tool designed to assist job seekers in identifying personalized career paths with precision. Developed by Resultant and introduced last November, Pivot uses wage record data to help users visualize current and future job opportunities. The AI capabilities of Pivot enable it to project sector-specific job demands and align job seekers’ skills with relevant training programs. DWD plans to expand Pivot’s functionality this fall to include recommendations on training providers and eventually make it accessible beyond the unemployment insurance system.
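
    The article does not describe Pivot’s internals, so as a rough illustration only, the sketch below shows one simple way a tool could rank occupations by the overlap between a job seeker’s skills and each occupation’s required skills, weighted by projected demand. The skill sets, occupation names, and scoring approach are invented assumptions, not Resultant’s implementation.

    ```python
    # Hypothetical sketch of skill-to-occupation matching; not Pivot's actual logic.

    def jaccard(a: set[str], b: set[str]) -> float:
        """Overlap between two skill sets: 0.0 (disjoint) to 1.0 (identical)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    # Invented example data: occupations with required skills and projected openings.
    OCCUPATIONS = {
        "data analyst": ({"sql", "excel", "statistics"}, 420),
        "welder": ({"blueprint reading", "mig welding", "safety"}, 310),
        "help desk technician": ({"windows", "customer service", "networking"}, 275),
    }

    def rank_matches(seeker_skills: set[str]) -> list[tuple[str, float]]:
        """Rank occupations by skill overlap weighted by projected demand."""
        scored = [
            (name, round(jaccard(seeker_skills, skills) * openings, 1))
            for name, (skills, openings) in OCCUPATIONS.items()
        ]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    print(rank_matches({"excel", "sql", "customer service"}))
    ```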

    Biden Administration Releases AI Principles to Protect Workers

    Following last year’s AI executive order, the Biden administration has introduced a set of principles to guide the interaction between workers and artificial intelligence (AI). These principles emphasize responsible use of workers’ data, support for upskilling, and transparency in AI deployment. Although voluntary, the guidelines reflect a commitment to improving job quality and workers’ lives. Acting Labor Secretary Julie Su highlighted the administration’s dedication to ensuring technology serves people first. Companies such as Microsoft and Indeed have already agreed to these principles, and the administration is seeking further commitments from other tech firms. A list of best practices is expected from the Labor Department soon.

  • The Risks and Rewards of AI-Generated Code in Government Systems

    You can read the full article here.

    As artificial intelligence (AI) integrates into government operations, its advantages in efficiency and automation are undeniable. Those benefits, however, come with significant challenges, particularly the security vulnerabilities associated with AI-generated code. As AI continues to develop, government agencies must remain vigilant, ensuring that advancements in AI technology are matched with equally sophisticated security and oversight measures. This balanced approach will be crucial to harnessing AI’s full potential while safeguarding the public interest.

    John Breeden’s article “Feds Beware: New Studies Demonstrate Key AI Shortcomings” delves into the complexities of AI adoption in government and highlights the necessity of balancing innovation with rigorous security measures.

    Why Read This Article?

    Comprehensive AI Accountability Framework: The federal government has instituted guidelines such as the AI Accountability Framework for Federal Agencies, which underscores responsible AI usage. This framework emphasizes governance, data management, performance, and monitoring to ensure that AI applications align with agency missions and operate securely.

    Challenges with AI-Generated Code: While AI’s capability to swiftly generate code is impressive, it introduces pressing security concerns. For instance, a study by the University of Quebec revealed that most applications created by AI harbor severe cybersecurity vulnerabilities. This finding emphasizes the need for improved security measures in government AI coding practices; an illustrative sketch of one such flaw appears after these points.

    Human-AI Collaboration: Given the identified challenges, the article advocates for a model where AI tools are used with human oversight. This approach not only mitigates risks but also enhances the effectiveness of governmental operations. By pairing AI’s rapid capabilities with the nuanced understanding of human developers, agencies can better safeguard against potential security flaws.

    Operational Enhancements: AI’s ability to automate complex coding tasks can significantly expedite government development processes. However, as technology continues to evolve, so too does the necessity for robust mechanisms to monitor and refine AI outputs to ensure they meet stringent security standards.
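
    As a concrete example of the flaw class discussed under “Challenges with AI-Generated Code,” the sketch below shows a pattern code assistants are frequently reported to produce, splicing user input directly into a SQL query, alongside the conventional parameterized fix. The table, data, and function names are invented for illustration; neither the article nor the cited study includes this code.

    ```python
    # Hypothetical illustration of a flaw class often found in generated code (SQL injection),
    # alongside the conventional fix. The table, data, and function names are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE permits (applicant TEXT, status TEXT)")
    conn.execute("INSERT INTO permits VALUES ('Ada Lovelace', 'approved')")

    def lookup_unsafe(name: str) -> list:
        # Pattern code assistants often emit: user input spliced directly into the query.
        # An input such as "x' OR '1'='1" returns every row in the table.
        return conn.execute(
            f"SELECT status FROM permits WHERE applicant = '{name}'"
        ).fetchall()

    def lookup_safe(name: str) -> list:
        # Parameterized query: the driver treats the input strictly as data, not SQL.
        return conn.execute(
            "SELECT status FROM permits WHERE applicant = ?", (name,)
        ).fetchall()

    print(lookup_unsafe("x' OR '1'='1"))  # leaks all rows: [('approved',)]
    print(lookup_safe("x' OR '1'='1"))    # returns []
    ```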

  • Nevada’s Generative AI Integration Contributes to State Executive of the Year Award

    You can read the full article here.

    Under the guidance of Timothy Galluzi, Nevada’s Chief Information Officer, the state has pioneered the integration of generative AI (GenAI) to enhance government operations. The Nevada Department of Employment, Training and Rehabilitation (DETR) utilized GenAI to improve the unemployment claims adjudication process. This use of AI made it possible to automate interactions and data analysis, which helped streamline operations and reduce the administrative burden. Recognized for his leadership with the State Executive of the Year award, Galluzi attributes this success to the collective efforts of his team and the broader network of state agencies.

    Why Read This Article?

    • Comprehensive AI Guidelines: In response to emerging tech trends, Nevada developed robust guidelines to safeguard citizen data in AI applications, emphasizing a strategic and secure approach to technology adoption.
    • Operational Enhancements: DETR has effectively employed GenAI to optimize the unemployment claims adjudication process, showcasing substantial improvements in efficiency and user interaction.
    • Human-Centric AI Applications: Galluzi’s approach underlines the importance of maintaining human oversight in AI implementations, ensuring technology supports rather than supplants human decision-making.
  • Use Cases From Early AI Adoption in Government

    You can read the full article here.

    As governments at all levels grapple with the complexities of artificial intelligence (AI), Julia Edinger’s article, “Where to Start With AI? Cities and States Offer Use Cases,” profiles states, cities, and counties with practical implementations of AI, from enhancing cybersecurity to streamlining 311 services. In the absence of federal regulation until late 2023, Utah and Santa Cruz County pioneered their own policies, aiming to balance innovation with necessary safeguards. The article underscores a pivotal shift toward localized AI governance, suggesting that early, considered policy frameworks can foster safe and beneficial AI applications across the public sector.

    Why Read This Article?

    • Practical AI Applications: Learn from the diverse and practical ways various states and cities use AI to enhance public services and cybersecurity.
    • Policy Pioneering: Explore how early adopters navigate the absence of broad federal regulations to create localized frameworks that could serve as models nationwide.
  • California’s Landmark Bill Prioritizes Human Safety and Ethical Standards in Artificial Intelligence Development  

    You can read the full article here.

    In the Washington Post’s article, “In Big Tech’s backyard, California lawmaker unveils landmark AI bill,” writers Gerrit De Vynck and Cat Zakrzewski explore the proactive steps taken by California State Sen. Scott Wiener to regulate artificial intelligence (AI) technologies. The bill outlines the requirement for thorough testing and robust safety measures for AI models before deployment, addressing potential dangers and unethical applications. This legislation could set a precedent for other states and influence nationwide policy, reflecting a pivotal shift toward integrating safety and ethics in AI development.

    Why Read This Article?

    • Regulatory Pioneering: Gain insights into how California’s legislative actions could shape national AI regulation.
    • Safety and Ethics in AI: Understand the importance of preemptive testing and ethical frameworks in deploying AI technologies.
  • Policy2Code Prototyping Challenge

    You can read more about the Policy2Code Prototyping Challenge here.

    The Policy2Code Prototyping Challenge is an invitation to explore the application of generative AI, specifically Large Language Models (LLMs), in translating U.S. public benefits policies into plain language and software code. The initiative is part of the broader movement known as Rules as Code (RaC), which aims to make policy implementation more efficient and accessible by converting policy directives into executable software. Designed to foster collaboration among technologists, policy specialists, and other practitioners, the Prototyping Challenge explores AI’s potential benefits and limitations in this context. The approach could save countless human hours currently spent on manual translation and coding and accelerate the adoption of RaC, potentially transforming how public services are delivered.
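
    As a deliberately simplified illustration of what Rules as Code can look like in practice, the sketch below encodes a made-up income-eligibility rule as a small, testable function. The threshold values, household model, and program logic are invented for the example and do not represent any actual benefits policy or challenge submission.

    ```python
    # Hypothetical Rules-as-Code sketch: a made-up income-eligibility rule expressed as code.
    # The threshold values and household model are invented and are not real policy figures.
    from dataclasses import dataclass

    # Assumed monthly gross-income limits by household size (illustration only).
    INCOME_LIMITS = {1: 1580, 2: 2137, 3: 2694, 4: 3250}

    @dataclass
    class Household:
        size: int
        monthly_gross_income: float

    def income_eligible(household: Household) -> bool:
        """Return True if gross income is at or below the limit for the household's size."""
        limit = INCOME_LIMITS.get(household.size)
        if limit is None:
            # A real rule would extrapolate a limit for larger households.
            raise ValueError(f"No limit defined for household size {household.size}")
        return household.monthly_gross_income <= limit

    print(income_eligible(Household(size=3, monthly_gross_income=2400)))  # True
    print(income_eligible(Household(size=1, monthly_gross_income=2400)))  # False
    ```

    Expressing a rule this way makes it possible to test policy logic against example cases automatically, which is much of what makes the RaC approach attractive for public benefits delivery.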

    Applications are due May 22, 2024.

    Host Organizations

    The Policy2Code Prototyping Challenge is organized by the Digital Benefits Network (DBN) at the Beeck Center for Social Impact + Innovation in partnership with the Massive Data Institute (MDI), both based at Georgetown University in Washington, D.C.

  • Gov Tech Today Podcast: Bridging the Gap with AI in Government Services

    You can listen to this podcast on Apple and Spotify.

    In this episode of Gov Tech Today, hosts Russell Lowery and Jennifer Saha talk with Justin Brown, former Secretary of Human Services for Oklahoma, about the transformative impact of artificial intelligence (AI) on government operations. Brown brings valuable perspective from his tenure during critical times and from his efforts to foster a national dialogue on AI’s potential across state governments. The conversation is vital for understanding how AI can streamline government services, enhance workforce efficiency, and reduce regulatory burdens, all while emphasizing the importance of ethical deployment and education.

    Listeners will gain a deeper understanding of how AI solutions can significantly improve decision-making processes, increase accessibility, and address systemic challenges such as the benefits cliff. The episode also highlights the pioneering efforts behind creating the Center for Public Sector AI (CPSAI), underscoring its ongoing mission to support ethical AI integration that respects public values and enhances service delivery.
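
    The “benefits cliff” mentioned above is easiest to see with numbers: a modest raise can trigger a loss of benefits larger than the raise itself. The figures in the sketch below are invented purely to show the shape of the problem and are not drawn from the episode or any specific program.

    ```python
    # Illustrative benefits-cliff arithmetic with invented numbers (not real program rules).

    def monthly_resources(earnings: float) -> float:
        """Earnings plus a hypothetical $600 benefit that cuts off entirely above $2,000/month."""
        benefit = 600 if earnings <= 2000 else 0  # the all-or-nothing cutoff creates the cliff
        return earnings + benefit

    for earnings in (1900, 2000, 2100):
        print(f"earn ${earnings} -> total resources ${monthly_resources(earnings):.0f}")
    # earn $1900 -> total resources $2500
    # earn $2000 -> total resources $2600
    # earn $2100 -> total resources $2100  (a $100 raise leaves the household $500 worse off)
    ```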

    Why Listen?

    • Insightful Leadership: Learn from a leader’s firsthand experience with AI in public sector reforms.
    • Relevant Challenges and Solutions: Understand the practical challenges and innovative solutions AI brings to public services.
    • Alignment with CPSAI Goals: This episode aligns closely with CPSAI’s mission to promote ethical AI use that benefits all citizens.
  • Thousand Stories Podcast: CPSAI — A Framework for Supporting Ethical AI in Public Sector

    You can listen to this podcast on Apple and Spotify.

    The Thousand Stories Podcast episode from April 18, 2024, provides a comprehensive view of the Center for Public Sector AI (CPSAI), a transformative force in integrating Artificial Intelligence into public sector operations, particularly within health and human services. The episode delves into the multifaceted strategies and initiatives undertaken by CPSAI to ensure that AI technology enhances public services responsibly and effectively.

    CPSAI is a groundbreaking initiative demonstrating a commitment to nonpartisanship and nonprofit values. It operates with a clear focus on the ethical deployment of AI technologies, ensuring that these powerful tools are used to their fullest potential to improve public sector services without compromising ethical standards or public trust.

    CPSAI strongly emphasizes educating leaders within the health and human services sectors about the nuances of AI. This includes detailed training on the technological, ethical, and practical aspects of AI deployment, aiming to equip leaders with the knowledge to make informed decisions. These educational efforts are designed to transform state leaders into informed stakeholders who can effectively navigate the complexities of new technologies, thereby enhancing their ability to oversee and implement AI-driven projects.

    To safeguard against potential risks, CPSAI develops operational guardrails that serve as ethical and practical boundaries for AI deployment. These guidelines are continuously refined and shared across states to standardize safe and responsible AI integration. By establishing a framework of best practices and ethical standards, CPSAI helps ensure that AI applications are beneficial and do not inadvertently exacerbate existing disparities or introduce new ethical dilemmas.

    Project Clearinghouse stands out as a key initiative where AI projects are rigorously evaluated for their adherence to established guardrails and potential impact on public services. This platform fosters collaboration and resource sharing among various stakeholders. Project Clearinghouse standardizes the evaluation of AI projects and facilitates the exchange of best practices, enhancing the effectiveness and efficiency of AI applications across different states.

    Recognizing the importance of collaboration, CPSAI actively forms partnerships with academic institutions, technology firms, and other governmental and non-governmental organizations. These partnerships pool expertise and resources, thereby amplifying AI’s positive impacts. Through these strategic alliances, CPSAI leverages collective insights and innovations, ensuring that AI technologies are deployed in transformative ways that are aligned with public interests.

    Beyond theoretical discussions, CPSAI is deeply committed to the practical application of AI, focusing on how these technologies can concretely improve service delivery and the operational efficiency of public services. By encouraging experimentation and learning from successes and failures, CPSAI promotes an adaptive approach to technology deployment, aiming to solve real-world problems efficiently and ethically.

  • Governors Leading On Artificial Intelligence

    You can read the full article here.

    In a time when technology is rapidly advancing, governors across the United States are taking action on artificial intelligence (AI). Explore how these leaders are actively embracing AI’s potential and addressing its challenges.

    From healthcare to education and more, AI is reshaping industries, and governors are leading the way. Learn how the National Governors Association (NGA) supports states in navigating the AI landscape.

    But it’s not just about economic competitiveness—it’s also about fairness and privacy. Governors are promoting ethical AI adoption through initiatives like AI ethics commissions and workforce development programs.

    Discover how governors are driving progress and shaping a future where AI isn’t just a buzzword, but a tool for positive change. Read on to learn more about their efforts and the impact on citizens’ lives.

    “In this ever-evolving world, ensuring new technologies are both safe and effective is an important public safety measure. Today, that new technology is Artificial Intelligence, maybe better known as AI. And look y’all, I am not going to stand here and preach like I know a lick about AI. However, I do know that new technologies can have benefits, but if not used responsibly, they can be dangerous. We are going to ensure that AI is used properly.”

    Governor Kay Ivey, 2024 State of the State

  • AI and the Future of Government Services

    You can read the full article here.

    Benefits Data Trust (BDT) is a nonprofit dedicated to helping people live healthier, more independent lives by creating smarter ways to access essential benefits and services. Its new series, “Human-Serving AI,” features BDT CEO Trooper Sanders in discussion with AI experts, exploring how AI can transform government services by modernizing operations and increasing equity and efficiency in public benefits systems.

    Why Should You Watch the Series?

    • In-depth Expertise: Gain valuable insights from leaders like Afua Bruce and Karen Levy on integrating AI responsibly in public sectors.
    • Strategic Guidance: Learn about AI’s practical applications and ethical considerations in public services.
    • Policy Impact: Discover the implications of recent government actions on AI.