The AI Optimist Debate Question 1:
While AI holds immense potential in education and business, serious privacy concerns exist. AI systems need access to vast amounts of personal data to provide personalized learning and business insights.
How can we ensure this data is not misused or exploited, and what safeguards should we put in place to protect individuals' privacy rights?
The AI Optimist sees a future of international cooperation and collaboration, moving into possibility and out of fear.
The AI pessimists see it going wrong, with each country doing its own thing. And in the current state of regulations, the pessimists are winning.
What do you think?
Reply to the email and comment below; let’s explore how this works for people, not just governments and businesses!
Given the pessimist’s constant focus on AI as being as dangerous as nuclear bombs, it’s interesting that no one has proposed not using AI in the military. Nuclear war hasn’t happened thanks to the theory of MAD – mutually assured destruction.
Do we want to release the same kind of danger with AI, which is already being abused by government, businesses, and military usage in major countries?
Likely because AI misuse has already happened. Knowing what we do now about how the threat of nuclear war and the spread of atomic capabilities threaten us all, why don’t we focus on taming AI now… together?
From Facial Recognition to Tracking Digital Activity with Multiple Sets of Rules
The first wave of AI’s serious usage came with facial recognition in China, the US, and the EU, not just by governments and cameras everywhere, but with businesses doing this in all these countries without asking for permission. They just did it!
We won’t explore here the impact of AI-driven social algorithms on our habits, cocooning us into echo-chamber silos and subjecting younger generations to this behavioral testing.
Some of the following regulations finally cover societal impact, and we hope this continues because AI has been around for a while with few rules.
China is the only one with a clear policy aligned with controlling the population. Yet the use in other countries is not precisely about freedom and individual rights; it’s also about control.
The more you study AI regulation, the more you see it’s a game of control.
The AI Conundrum
How do we regulate what’s already happening?
Global regulation of AI poses various challenges requiring careful consideration. Yet the variety of laws, cultures, levels of technological savvy, and governments that already abuse AI creates contradictions that no single proposal or framework can cover.
Nonetheless, there are current frameworks and proposals that we can explore to address these challenges effectively. Countries could learn from each other.
One of the significant challenges is determining common values shared across different countries, regions, and cultures.
· For example, China prioritizes regulations that align with socialist values, while the US, EU, and Canada focus on individual rights.
· Despite the existence of diverse cultures and rules, can we regulate AI effectively and navigate challenges as they emerge?
· Another issue is identifying the appropriate entity to regulate AI. Should it be independent audits, third-party organizations, government entities, or institutions?
· What is the risk? Measuring AI risk and determining what constitutes a risk means different things in different parts of the world, like China, the US, Japan, Canada, and the rest.
KEY QUESTIONS FOR AI REGULATION
1. How can we safeguard individuals' privacy rights while utilizing AI for personalized learning and business insights?
2. Privacy concerns arise because AI systems require extensive access to personal data. What measures can be taken to prevent the misuse or exploitation of this data?
3. Selling data – the US is a leader in sales of personal data, including purchases by parts of the government (collecting this data directly would be against the law, but there are no rules against buying it from private parties).
Before you think this is some conspiracy talk, read this from the US government in January 2022:
· Defense Intelligence Agency buys data from LexisNexis;
· Navy buys a database of people who might be tied to sanctioned people from Sayari Analytics;
· FBI buys social media alerts from ZeroFox, a Cybersecurity company;
· Foreign entities and governments have also purchased this data with location data, social activity, proximity to shopping areas and protests, and the list goes on and on.
1. When comparing the approaches of the U.S. and EU, the U.S. Algorithmic Accountability Act of 2022 proposal focuses specifically on automated processes and systems that make critical decisions.
2. In contrast, the EU Artificial Intelligence Act framework applies to a broader range of AI systems. It imposes regulatory requirements corresponding to the level of risk a system poses to the public.
3. Canada has no laws explicitly addressing AI like the U.S. and EU Acts. The Artificial Intelligence and Data Act (AIDA) is a proposal to address these issues and is quite restrictive, like the EU’s.
· Canada does have the Directive on Automated Decision-Making in the public sphere, which mandates an assessment of the algorithmic impact of each automated decision-making system used by a federal institution.
Japan was the first G7 country to release comprehensive AI ethics guidelines in 2019, gradually prioritizing ethics and human oversight in using AI.
· There's an ongoing discussion regarding whether this self-regulatory model is enough or if more stringent laws are necessary to address potential harm.
China is taking a strong stance on regulating the development and use of AI, focusing on ensuring technical safety and promoting innovation in government and industry.
· However, this approach may not prioritize the empowerment of citizens and could lead to isolation if human rights concerns are not addressed.
These regulations will have a significant impact worldwide. Even though AI – in the form of Machine Learning in the early days – has been used by most countries in the past ten years, regulation is lacking.
Let’s dive into what exists and explore further to form your own opinions – links are provided to each of the measures where possible below.
Leading the way with an Actual Privacy Framework – China
China aims to lead globally in AI while mitigating risks. Regulations focus on managing data, algorithms, and application scenarios.
China’s Policy: Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023
July 14, 2023 Update to the Policy: China takes significant step in regulating generative AI services like ChatGPT
“The rules will now only apply to services available to the general public in China. Technology developed in research institutions or intended for overseas users is exempt.
The current version has also removed language indicating punitive measures that had included fines as high as 100,000 yuan ($14,027) for violations.
The state “encourages the innovative use of generative AI in all industries and fields” and supports the development of “secure and trustworthy” chips, software, tools, computing power, and data sources, according to the document announcing the rules.
China also urges platforms to “participate in the formulation of international rules and standards” related to generative AI, it said.
Still, among the key provisions is a requirement for generative AI service providers to conduct security reviews and register their algorithms with the government, if their services are capable of influencing public opinion or can “mobilize the public.”
Chinese AI Governance
China is rolling out some of the world's earliest and most detailed regulations governing artificial intelligence (AI).
These rules will impact how AI technology is built and deployed within China and internationally.
In the West, China's regulations are often dismissed as irrelevant or seen purely through the lens of geopolitical competition to write the rules for AI.
However, these deserve careful study on how they will affect China’s AI trajectory and what they can teach policymakers worldwide about regulating the technology.
This article breaks down the regulations into their parts—the terminology, key concepts, and specific requirements—and then traces those components to their roots, revealing how Chinese academics, bureaucrats, and journalists shaped the regulations.
· 3 Key Regulations: China’s three most concrete and impactful regulations on algorithms and AI are its 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis (synthetically generated content), and the 2023 draft rules on generative AI.
The rules for recommendation algorithms bar excessive price discrimination and protect the rights of workers subject to algorithmic scheduling.
The deep synthesis regulation requires conspicuous labels on synthetically generated content.
The draft generative AI regulation requires the training data and model outputs to be “true and accurate,” a potentially insurmountable hurdle for AI chatbots to clear.
· Lessons for Policymakers: By rolling out targeted AI rules, Chinese regulators are steadily building up their bureaucratic know-how and governing capacity.
Reusable regulatory tools like the algorithm registry can act as scaffolding to ease the construction of each successive regulation.
· Key Players: The Cyberspace Administration of China (CAC) is the clear bureaucratic leader in AI governance to date. However, that position may grow more tenuous as the focus moves beyond the CAC’s core competency of online content controls. The Ministry of Science and Technology is another key player.
· Future of AI in China: In the years ahead, China will continue rolling out targeted AI regulations and laying the groundwork for a capstone national AI law. Any country, company, or institution that hopes to compete against, cooperate with, or understand China’s AI ecosystem must examine these moves closely.
· China's AI regulations provide a comprehensive framework for AI governance, which can be a reference for other countries.
· These aim to protect individuals and society from potential adverse impacts of AI, such as excessive price discrimination and worker exploitation.
· They promote transparency and accountability in AI development and deployment.
· China addresses AI safety early, before harms emerge. However, the emphasis is on controlling AI rather than empowering citizens, and the top-down approach risks stifling innovation.
· China's tech industry has pushed back against some regulations like mandatory algorithm audits. But compliance is increasing as enforcement rises.
· With looser ethics restrictions, China could pull ahead in AI through the sheer scale of data and research. But ethical lapses could hamper global collaboration.
· The requirement for AI outputs to be "true and accurate" could pose significant challenges for AI developers, particularly for AI chatbots.
· The regulations could stifle innovation and limit the creative use of AI technologies.
· The regulations may be seen as a means for the Chinese government to control AI technologies and their use.
· Key aspects include required impact assessments before deploying high-risk AI, registering AI companies, and liability rules. Voluntary ethics principles exist.
· Controversial areas include broad surveillance uses of AI, blurring between voluntary and binding rules, and keeping some things opaque.
· As a significant AI player, China's regulations could influence global norms. But its top-down, control-focused approach differs from Western emphasis on individual rights.
China is assertively regulating AI development and use primarily from government and industry perspectives.
The focus is on technical safety and innovation gains rather than empowering citizens.
China's regulations will shape the global landscape but may also isolate it if human rights implications are not addressed.
Articles exploring China’s approach to AI:
· Carnegie Endowment for International Peace article: "China’s AI Regulations and How They Get Made,”
· China’s New Blueprint: Regulating the Wild Wild East of AI by Shelly Palmer
EU's proposed Artificial Intelligence Act:
· The European Commission has proposed a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). This Act is designed to regulate the use of AI across various industries and social activities.
· The first significant attempt to regulate AI globally. It aims to address the risks of specific AI systems while supporting innovation.
· Classifies AI systems by risk: unacceptable, high, limited, or low/minimal. The strictest rules apply to high-risk systems like self-driving cars.
· Key requirements for high-risk AI: human oversight, robustness/accuracy, transparency, and provision of info to users.
· Premarket conformity assessments are required before high-risk AI can be used. Ongoing monitoring once operational.
· Controversial areas include a single definition of AI, the scope of high-risk systems, and stifling innovation with red tape.
· As the first significant framework, it could influence AI regulation globally. But risks being too EU-focused. The ongoing debate over the right balance between safety and innovation.
· The Act aims to ensure that AI systems placed on the Union market and used in the Union are safe and respect existing laws on fundamental rights and Union values.
· It also seeks to provide legal certainty to facilitate investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems, and facilitate the development of a single market for lawful, safe, and trustworthy AI applications.
· The Act proposes a single future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values.
· At the same time, specific restrictions and safeguards are presented concerning certain uses of remote biometric identification systems (like facial recognition) for law enforcement.
· The Act lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to persons' health and safety or fundamental rights. These AI systems must comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market.
· The Act proposes a governance system at the Member States level, building on already existing structures, and a cooperation mechanism at the Union level with the establishment of a European Artificial Intelligence Board.
· Additional measures are also proposed to support innovation, mainly through AI regulatory sandboxes and other efforts to reduce the regulatory burden and support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.
· The Act is part of a broader comprehensive package of measures that address problems posed by the development and use of AI.
· It is also coherent with the Commission’s overall digital strategy in its contribution to promoting technology that works for people.
· It is one of the three pillars of the policy orientation and objectives announced in the Communication ‘Shaping Europe's digital future.’
The EU AI Act takes a precautionary approach to ensure trustworthy AI. But its breadth and requirements, like conformity assessments, raise concerns about slowing European AI innovation.
Aspects like its risk-based approach could provide a model for global oversight while allowing benign AI to thrive.
The Act aims to ensure that AI systems are safe and respect existing laws on fundamental rights and Union values.
· Protects fundamental rights, allows safer AI applications, and builds trust.
· It provides legal certainty, facilitating investment and innovation in AI.
· It enhances governance and effectively enforces laws on fundamental rights and safety requirements applicable to AI systems.
· It enables the development of a single market for lawful, safe, and trustworthy AI applications.
· The Act may impose additional regulatory burdens on AI developers and users, specifically those with high-risk AI systems.
· It may limit certain AI practices, potentially stifling innovation.
· A burden for developers, unclear distinctions between risk categories.
· The Act's requirements for high-risk AI systems may be challenging for some organizations, potentially limiting their ability to develop or deploy such systems.
FUNDAMENTALS OF A REGULATORY SYSTEM FOR ALGORITHM-BASED PROCESSES – Expert opinion prepared on behalf of the Federation of German Consumer Organisations (Verbraucherzentrale Bundesverband)* – 1/5/19
US Proposal: Introduction of the Algorithmic Accountability Act of 2022
Source: U.S. House and Senate Reintroduce the Algorithmic Accountability Act Intended to Regulate AI
On February 3, 2022, U.S. Democratic lawmakers introduced the "Algorithmic Accountability Act of 2022" in both the Senate (S. 3572) and the House of Representatives (H.R. 6580).
This act aims to hold organizations accountable for using algorithms and other automated systems to make critical decisions affecting individuals in the U.S.
The AAAI is a proposed law regulating the development and use of automated decision systems (ADS) in the United States.
The AAAI would require companies to assess their ADS impacts on individuals and take steps to mitigate any adverse effects.
The AAAI would also give individuals the right to access and correct information about themselves used in an ADS.
Some have criticized the AAAI for being too burdensome, while others have argued that it does not go far enough.
One of the most controversial aspects of the AAAI is the definition of an "automated decision system."
The AAAI defines ADS as any system that "uses algorithms or other automated processes to make decisions that have a significant impact on individuals." However, the specific criteria for determining whether a system is an ADS remain unclear.
Purpose of the Act: The U.S. Act intends to increase transparency over how algorithms and automated systems are used in decision-making contexts to reduce discriminatory, biased, or harmful outcomes.
Covered Entities and Key Definitions: The U.S. Act applies to businesses under its definition of "covered entities."
These can be divided into two broad categories:
(i) businesses that deploy "augmented critical decision processes" (ACDP); and
(ii) businesses that deploy "automated decision systems" (ADS), which are then used by the first category of companies in an ACDP.
Impact Assessments: The U.S. Act will require the Federal Trade Commission (FTC) to promulgate regulations that require covered entities to perform impact assessments of any deployed ACDP or any deployed ADS developed for use by a covered entity of the first category in an ACDP.
Content of the Impact Assessment: While the FTC still needs to define the precise form and content of impact assessments, the U.S. Act already provides a long list of action items for covered entities to carry out when conducting them.
· The AAAI could have several benefits for the AI industry, such as:
Increased public trust in AI systems.
Improved compliance with data protection and privacy laws.
Reduced risk of negative publicity or legal action.
· The U.S. Act aims to increase transparency and accountability in the use of automated decision-making systems, which can help reduce discriminatory or harmful outcomes.
· The Act could serve as a model for other countries in developing their regulations for AI and automated decision-making systems.
· The Act could impose significant compliance burdens on small and medium-sized businesses.
· The Act's focus on "critical decisions" may limit its applicability and leave specific uses of AI and automated decision-making systems unregulated.
· The Act leaves much to be decided by the FTC, which could lead to uncertainty for businesses regarding compliance requirements.
Here are some additional resources:
The Algorithmic Accountability Act of 2022 (AAAI): https://www.congress.gov/bill/117th-congress/senate-bill/3572
The AI Now Institute:
The Center for Data Innovation:
“The Fourth Amendment is Not for Sale Act closes the legal loophole that allows data brokers to sell Americans’ personal information to law enforcement and intelligence agencies without any court oversight – in contrast to the strict rules for phone companies, social media sites, and other businesses that have direct relationships with consumers.
“Doing business online doesn’t amount to giving the government permission to track your every movement or rifle through the most personal details of your life,” Wyden said. “There’s no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider. This bill closes that legal loophole and ensures that the government can’t use its credit card to end-run the Fourth Amendment.”
AI Regulation in Japan Proposal
The critical points about Japan's proposed approach to AI:
Source: AI Governance in Japan Ver. 1.1
· Japan was the first G7 country to release comprehensive AI ethics guidelines in 2019, focusing on transparency, fairness, privacy, human control, and accountability.
· Japan aims to balance innovation and regulation, taking an "ethics by design" approach that encourages voluntary industry adoption of ethical principles.
· Key aspects of Japan's strategy include certification systems, sandbox regulatory environments to test AI, and incorporating ethics into the school curriculum.
· Controversial areas include handling China's advances in AI amid rising tech competition and debate over whether guidelines should become legally binding.
· As G7 president in 2023, Japan will likely promote its vision of "human-centric AI" and Ethics by Design globally but faces challenges reconciling different regulatory approaches across countries.
Japan is taking an incremental approach focused on ethics and human control of AI.
An ongoing debate exists about whether this self-regulatory model is sufficient or if stricter laws are needed as harms emerge.
· AI governance is an urgent issue that requires the knowledge and experience of experts from various fields.
· Weak AI has reached the practical application stage, and Japan uses the term “AI” to mean “Weak AI,” particularly in the academic discipline related to machine learning.
AI Governance Trends in Japan
The discussion on AI governance is shifting from AI principles to AI governance that carries out or puts into operation AI principles in society.
A risk-based approach is taken, where the degree of regulatory intervention should be proportionate to the impact of risks.
AI governance requires multi-stakeholder engagement, and the discussion must consider diversified views.
Japan proposes a shift from rule-based regulations that specify detailed duties of conduct to goal-based rules that ultimately specify the value to be attained.
· Provides a comprehensive overview of AI governance in Japan, which can serve as a reference for other countries.
· The risk-based approach to AI governance ensures that regulatory intervention is proportionate to the impact of risks, which can prevent over-regulation and promote innovation.
· Measured pace, collaboration with industry, and emphasis on ethics education.
· Designing actual AI governance is not straightforward due to the complexity and multi-layered nature of government control.
· The voluntary nature of principles and the potential lag behind regulating specific harms.
· Does not provide specific solutions to the issues raised but proposes a general framework for AI governance.
Canada: The Artificial Intelligence and Data Act (AIDA)
The Artificial Intelligence and Data Act is part of Bill C-27, also known as the Digital Charter Implementation Act, 2022, tabled in the House of Commons in November 2022.
The AIDA is a proposed law regulating the development and use of artificial intelligence (AI) systems in Canada.
The AIDA would create a risk-based framework for regulating AI systems, with higher levels of oversight for systems that pose more significant risks to individuals and society.
The AIDA would require AI systems to be designed and developed in a way that respects human rights, is transparent, and is accountable.
The Act aims to regulate international and interprovincial trade and commerce in artificial intelligence systems.
It establishes requirements for designing, developing, and using AI systems, including measures to mitigate the risks of harm and biased output.
It also prohibits specific practices with data and AI systems that may seriously harm individuals or their interests.
The AIDA is part of a broader legislative effort that includes the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act.
· These acts aim to modernize and extend existing rules on collecting, using, and disclosing personal information for commercial activity in Canada.
· The Consumer Privacy Protection Act would also enhance the role of the Privacy Commissioner in overseeing organizations’ compliance with these measures.
· The Personal Information and Data Protection Tribunal Act would create a new administrative tribunal to hear appeals of orders issued by the Privacy Commissioner and apply a new administrative monetary penalty regime created under the Consumer Privacy Protection Act.
The AIDA may implicate rights under section 8 of the Charter, protecting against unreasonable searches and seizures.
The Privacy Commissioner’s powers and specific provisions allowing government institutions access to personal information may involve information subject to a reasonable expectation of privacy.
The AIDA may also impact freedom of expression, as restrictions on collecting, using, and disclosing personal information could affect commercial expressive activities.
The AIDA includes provisions for administrative monetary penalties and offenses for failing to comply with specific regulatory requirements. These offenses would be punishable by fine or imprisonment.
The bill’s accompanying Charter Statement is intended to provide legal information to the public and Parliament on the bill’s potential effects on rights and freedoms that are neither trivial nor too speculative. It is not intended to be a comprehensive overview of all conceivable considerations.
Some have criticized the AIDA for being too restrictive, while others have argued that it does not go far enough.
One of the most controversial aspects of the AIDA is the definition of "high-risk" AI systems.
The AIDA defines high-risk AI systems as those that pose a significant risk to individuals or society. However, the criteria for determining whether an AI system is high-risk are unclear.
The AIDA could have several benefits for the AI industry, such as:
Increasing the trust people have in AI systems.
Ensuring companies comply with data protection and privacy laws.
Setting clear boundaries to avoid legal snafus and bad press.
The AIDA could also have some drawbacks for the AI industry, such as:
Increasing costs to follow the regulations, limiting growth.
Delays in the development and deployment of AI systems.
Reducing AI innovation with stringent rules and fines.
The AIDA is a complex piece of legislation that is still under development. It remains to be seen how it will be implemented and enforced, and what its ultimate impact on the AI industry will be.
The Artificial Intelligence and Data Act (AIDA) – Companion document: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
Responsible use of artificial intelligence (AI): https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html
CONCLUSION - Taming the AI Giant
Each framework and proposal is based on the country or region of origin, so how can this be applied worldwide? Or at least create standards that we all follow?
Almost all regulation would inhibit innovation and be costly, with compliance based on a slow-moving political system that the rapid growth, use, and scaling of AI can quickly leave behind.
Most of all, especially concerning China, the US, and the EU, governments and businesses have already broken these rules, gathered data, and used it. While regulation cannot be retroactive, how do we minimize the damage and stop further harm from happening?
The regulation of AI needs to catch up with its usage; without rules, countries and businesses have decided to jump in without asking for permission.
Remember that AI didn’t start with ChatGPT in November 2022; the algorithms and machine learning have been around for over a decade. The data gathered has already been used for commercial and governmental gain without rules.
And these are the same institutions that plan on regulating it now. They rarely refer to historical usage of AI or abuses, only focusing on control at a country level and between countries.
In the next pod, we’ll explore some of these impacts, what they mean, and how your privacy is not some political ideal; it’s a right… at least outside of China, which ironically has the most transparent and actionable framework of any nation so far.