Many would say that some of the
lawmakers have never been lobbied as hard as they have been over this
particular act. What's the takeaway for companies trying
to build? That's a great question.
So right now, it's a big celebratory day here in Brussels. I think the last
time I was speaking to you, it was after almost 40 hours of negotiations in
mid-December. That was, you know, three months ago.
And now we have the parliament signing off on this act.
And while it's a celebration, we are getting lots of concerns, of course,
from tech companies. They've been really sounding the alarm
bells for quite some time about overregulation, concerns that, you know,
the European continent will fall behind because they're regulating much further
ahead than their U.S. counterparts.
We also had this concern, of course, from European tech companies, too, and
startups who say, hey, we really want to compete with U.S.
hyperscalers, and you can't overregulate us; if you overregulate us, we will
never have a chance to actually compete.
Okay, let's say I'm Mistral and I am building large language models and I'm
based in Europe. Now that the act is passed, it's done.
What is it that I actually have to do to comply?
So there are a few things. I mean, I guess it's good to look at the
EU's approach as a risk-based approach. So for the most part, they're actually
regulating the use of the technology, not the technology itself.
And so this means that, in practice, the EU is
banning, you know, the worst possible uses of AI.
So AI systems cannot be used to, you know,
perform emotion recognition in workplaces or schools, and cannot be used for
social scoring, so giving citizens a score based on
their behavior. When it comes to high-risk situations,
so these are some of the trickier ones,
this is, you know, AI systems being used for migration applications or
sorting job applications. A lot of these companies, like OpenAI or
Mistral, will have to perform a lot more checks in order to prove to regulators
that they're safe. And the one exception that a lot of
companies like Mistral really lobbied hard against was
these additional controls on general-purpose or generative AI.
And these are not based on the use of AI; this is
explicitly regulating the technology itself. And in the end, what these
companies will have to do is report to regulators, you know, their
energy consumption, and prove that they are actually complying with copyright laws.
And there's a new AI office that will be based in Brussels and
that the EU is currently setting up. This will actually operate almost
like a police force, where they can go to the likes of Mistral,
OpenAI, Microsoft, whomever, and say, hey, we want more data on how
you've trained your large language model, and they can ultimately even ban an
application if it's performing poorly. Gillian, Mistral has indeed partnered
with Microsoft, and one expert is saying that EU lawmakers
got played in this particular situation.
Why will this not be a read-across to just how the rest of the world adopts
AI legislation?
I think it's really interesting, because last year we saw lots of tech companies
even, you know, pushing for regulation. We had Sam Altman, obviously, in Congress,
who we know was pushing for regulation, and we've had that obviously in
Brussels too. But while they've been arguing for
regulation, they've also been simultaneously
lobbying quite hard against some of the strictest controls that they've
seen in Brussels.
Mistral was a really clear-cut example of that last year.
They were really effective in getting the French government on side,
with Aleph Alpha, a German startup, also pushing these governments, saying,
hey, if you overregulate us, then we will not be able to compete with U.S.
companies. Now, fast forward four or five months:
Mistral has obviously partnered with Microsoft,
and so that's left a bad taste in a lot of lawmakers' mouths.
But we also have lots of other companies that are really, you know, signing on
for voluntary commitments, trying to prove to governments that they're taking
this seriously, that they're making as many commitments as possible.
But I think one thing lawmakers are starting to wake up to is that the tech
companies might be self-serving.
Comments
Time to ban ai generated
An issue is I can still tune and run biased models at home for fake content - but it's good to see regulators understanding and regulating the risks and threats of big operators
As long as Google or Bing can still tell me how many minutes of boiling is needed to get the perfect pasta, I don't mind this regulation.
RIP Mistral AI
Europe is done. It's like we invented fire and they banned flint
I don't believe in regulation by people who were not part of the AI
We make our own laws in America!