What We're Reading

Legislative activity at the federal level on existential AI risks and in Colorado on neural privacy, plus a few articles that shed light on the gap between a tech policy's intended impact and its real-world results.

Framework for Mitigating Extreme AI Risks | senate.gov

Senators Romney, Reed, Moran, and King released a framework designed to protect against the risks posed by frontier AI models in the form of biological, chemical, cyber, and nuclear threats. It proposes a federal agency or interagency coordinating body that would oversee evaluation and licensing, but the scope is limited to "only the very largest and most advanced models."

Joint Guidance on Deploying AI Systems Securely | CISA 

Alongside international counterparts from New Zealand, Australia, the UK, and Canada, the NSA, FBI, and CISA recently released best practices for cyber-secure AI systems, covering deployment, operations, and maintenance.

The Ethics of Advanced AI Assistants | Google DeepMind

A new (and very long – Axios provides a topline recap here) DeepMind paper examines the ethics underlying AI assistants, including risks of "manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy." The paper stresses that aligning these AI assistants requires balancing the competing needs of users, society, and developers, and urges broad assessment of their sociotechnical implications.

Will AI accelerate or delay the race to net-zero emissions? | Nature

The amount of electricity required to train and operate AI models is poised to overwhelm the US grid absent the addition of significant capacity. This article, however, considers AI's implications for emissions more broadly, including indirect effects such as AI-informed gains in energy efficiency, optimized supply chains, the development of new materials for renewable energy, and the potential easing of natural resource exploitation. The article urges the development of "policy-relevant scenarios to quantify the effects that AI expansion could have on the climate under a range of assumptions," which the authors explore in some detail.

Your Brain Waves Are Up for Sale. A New Law Wants to Change That | New York Times

Data privacy is taking on new meaning with devices that track users' brain activity (apparently the claim that AirPods may soon be able to read users' thoughts isn't pure science fiction). Colorado just passed the first state law in the US to protect neural privacy by expanding the "definition of 'sensitive data' in the state's current personal privacy law to include biological and 'neural data' generated by the brain, the spinal cord and the network of nerves that relays messages throughout the body." A new report assessing existing privacy practices in the consumer neurotechnology market shows a clear need for regulatory engagement.

Full Disclosure: Stress testing tech platforms’ ad repositories | Mozilla Foundation

Mozilla put out a study examining DSA-mandated ad transparency tools across 11 platforms and websites. Results were lackluster, at best. The report concludes: "we find a huge variation among the platforms, but one thing is true across all of them: none is a fully-functional ad repository and none will provide researchers and civil society groups with the tools and data they need to effectively monitor the impact of VLOs advertisements on Europe's upcoming elections."

When Facebook blocks news, studies show the political risks that follow | Reuters 

Unsurprising but alarming: Reuters reports on two studies that explore the consequences of removing news from social media platforms. (Canada provided a real-world case study when it required social media companies to compensate news outlets; Meta opted instead to remove news content entirely.) Both studies found that removing news creates a vacuum that is filled by memes and disinformation.
