Making Sense of Tech Companies’ AI Commitments to the White House
By Robert Seamans
The White House recently announced that seven prominent technology companies agreed to commitments regarding artificial intelligence (AI). The seven companies are a mix of large tech companies, including Amazon, Google, Meta, and Microsoft, and well-known startups at the forefront of AI, including Anthropic, Inflection, and OpenAI. Notably absent are other large tech companies such as Apple, as well as other prominent startups in this space such as Elon Musk’s X.AI.
The commitments are grouped into three categories: (1) safety, (2) security, and (3) trust. There are tradeoffs associated with each of these commitments, as described in more detail below. An important point is that these commitments are voluntary and non-binding, and hence there is no enforcement mechanism if the companies do not comply.
Some of these commitments are quite sensible and are likely already being carried out, at least in part, by the companies. For example, the first commitment is that the “companies commit to internal and external security testing of their AI systems before their release.” All of these companies surely already test their products before release. What’s new is the use of external parties to do some of the testing. However, it’s not clear how the third-party testing would be implemented. Open questions include: Which external parties will do the testing? Will the government “certify” the external testers? What criteria will the testers use to determine whether a new AI product is safe? How long will testing take?
Read the full Forbes article.
Robert Seamans is Associate Professor of Management and Organizations and Director of the Center for the Future of Management.