AI Laws in the US are all over the place

AI is being adopted faster than anything the IT industry has ever seen. Unlike technologies such as smartphones or the internet, AI is being adopted within months rather than years. This creates a security problem, since guidelines and other best practices take time to develop. What is happening is that AI is racing ahead with little to no security guidance, leaving organizations and users to create their own security policies. I'm finding many organizations don't realize this is happening. They blindly believe the vendors are "doing the right things" to protect them, unaware of the risk being introduced. In some cases, larger AI vendors enforce responsible AI and other default protections, but in other cases, such as open-source options, there are very few default protections.

I'm seeing vendors try to provide their own recommendations for securing AI; however, they are not required to do so. Law would make this a requirement, or tying security requirements to winning business could drive the right behavior. Things get worse as recent announcements push US law in different directions. States like California are releasing laws such as THIS. But recently, President Trump issued an executive order to block states from enforcing their own AI laws (see HERE for more details). Trump even added an executive order targeting "woke" AI (see HERE). All of this creates even more confusion, with conflicting requirements being released. On one hand, you have vendors trying to secure things and states passing laws. On the other hand, you have the president blocking state laws and attempting to force vendors to change how they provide AI. It's a mess, and it further strengthens the need to take securing your AI into your own hands. Vendors and the law are not aligned, leaving you exposed where your own AI security policies should be protecting you.
