They're really at it, trying to regulate AI, but hopefully it will still take a lot of time and mostly affect only the biggest players.
> 6-hour bipartisan AI forum hosted by Sen. Chuck Schumer with tech executives, advocates, and researchers - huge percentage of women.
> Chuck Schumer also proposed the SAFE Innovation Framework
> Closed door meeting with limited press access to enable candid conversations
> All Senators were invited, not all attended
Good to see that the people pushing for more regulation aren't necessarily getting the support they want. If something comes out of it, it's going to take quite some time.
CEOs, some people from the content industry and education, human rights and labor activists, and self-appointed "social justice" advocates:
> Rumman Chowdhury - Advocates for red-teaming and safety
> Tristan Harris - Alignment with humanity
> Alex Karp - AI for law enforcement and intelligence operations
> Deborah Raji - Algorithmic bias and accountability
> Janet Murguia - Civil rights activist
> Charles Rivkin - Motion Picture Association (formerly MPAA)
> Elizabeth Shuler - Labor rights advocate
> Meredith Stiehm - Writers (creatives)
> Randi Weingarten - Teachers
> Maya Wiley - Human and civil rights
To me, picrel looks like a list of representatives of groups who fear they will lose power over society.
Dave Shapiro covered it; I only follow him for his work on Cognitive Architecture. I'm not promoting his view on the topic in general, though it's certainly not the worst since he's for open source:
https://youtu.be/rp9_YdVjNaM
Sam Altman has moved a bit more towards open source, but OpenAI is still the most vocal about licensing; it's just in their business interest. Zuck is "our man" for now, but probably only because his business model is different.

Tristan Harris and Bill Gates seem to be the main supporters of restrictions. I think many others just want to create "educated" elites and institutions first, and those would lean towards regulation. They often seem to want privileged access for researchers and maybe some other groups: institutions around science and education, for example, could assign privileges to certain people, but the models wouldn't be open to the public. Others probably just want regulation to protect their (possibly high-paying and politically influential) jobs.

Musk is pushing for a dedicated regulatory body for AI. I'm not sure about that one, but maybe it would prevent other interest groups from overregulating the field.