Thanks for providing such a great summary of this!
Of course we need some legislation in this area, but it's difficult. Like the recent EU laws, I think they've tried to future-proof this and, in doing so, have built it on several concepts that are quite problematic, which will either make the law hard to apply or turn compliance into a total nightmare.
I also appreciate this is based on your summary and my understanding of it, so there may be things I've assumed that aren't true; feel free to highlight anything I've got wrong.
So…
Principle 1 - What kind of safety does this aim to provide? Physical safety? Mental safety? The former seems easier to define; the latter much less so. How do you prove an AI harmed your mental wellbeing? And what are the limits?
Principle 2 - How do they define "unjustified"? Say we have a credit-scoring AI, and for some unknown reason there's more financial risk associated with people from a certain racial background in a certain location (not that racial background generally, just that specific combination of conditions). Is that unjustified? Or is it a rational decision based on a correlation rather than racial discrimination as such?
Principle 4 - The right to understand seems odd, because it assumes everyone *can* understand, which I'd say isn't possible even for the basics of this technology; some people just don't get it. A right to an explanation would seem more reasonable, but holding companies responsible for how well people understand it? Bizarre!
Well-intentioned, but not well thought out, and likely to lead to trouble?