Artificial intelligence is highly necessary, because history shows that the natural kind doesn't work.
Let's just wait another 2000 years and then evaluate this again... Not sure there will be enough humans left for a discussion... but surely some intelligent form of "life" as such.
The big problem is in the training data and "goals" of the deep learning processes. The algorithms have been created by humans working for corporations whose goals are to dominate the market, maximise profits and shareholder returns, pry into the personal motivations of the observers in order to manipulate them more effectively, and violate all known laws and standards of ethical behaviour to do so. The algorithms are judged by their ability to do these things, and inferior algorithms are discarded or reprogrammed.
Getting Trump elected was just the beginning. In another year or two these things will know how to make you voluntarily drink cyanide if you aren't doing your part to increase the wealth of a handful of oligarchs. They will have enough data to tailor their manipulation to every person on the planet with 99.9% effectiveness. Since they also control communications between individuals, there won't be anybody you can express your misgivings to. You will only see replies from friends who agree that you should drink the cyanide and end it all; and why are you waffling?
This is the AI we have created and which will (soon) replace humans in almost all important decisions. We're really, totally f*cked. You should just drink the cyanide. Why are you waffling?
I ask the same question as Jason
also "better" would probably depend on context
and what people call "AI" is pretty diverse!
I'm guessing a lot of what people are calling "AI" are probably just statistical algorithms, not so different from what you might find in tools used for filtering email spam.
(a context where there isn't a more direct and reliable way to do such categorisation)
and pretty much everyone has to look in a junk folder for an email that isn't spam from time to time!
Such tools are not perfect, and "false positives" can and do happen from time to time.
They can of course still be useful
(and in the case of the spam filter, the effect of a false positive is just having to check the junk folder, so it's not much of a problem if it gets a few wrong and the user knows where to look)
but we shouldn't assume such methods can ever be perfect (even if the stats look better with more data)
or allow the consequences of a "false positive" to be anything too serious or harmful for anyone to live with,
and we should be especially careful about using anything like that in any context where it makes decisions about people from data about them.
Corporations doing that kind of thing with data about people, without regard for the consequences of false positives, let alone the obvious privacy issues, is of course very worrying.
(note: this comment isn't about spam filtering - I only mentioned that because it's an example, probably familiar to most people, where statistical methods are often used for categorisation.
The real worry is about other contexts where statistical methods might be used to make decisions about people, where the effect of a wrong decision could be very harmful; and if they are using data about identifiable people, then there are also obvious privacy questions that might need to be asked)
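To make the spam-filter comparison concrete: a minimal sketch of the kind of statistical categoriser meant here, a naive Bayes classifier with add-one smoothing. This is not any real product's algorithm, and the training messages are made up for illustration; the point is that the classifier only estimates a probability, so "false positives" are built into the method.

```python
# Toy naive Bayes spam classifier (illustrative sketch, not a real spam filter).
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam) pairs -> per-class word counts and totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        totals[is_spam] += 1
        counts[is_spam].update(text.lower().split())
    return counts, totals

def spam_probability(counts, totals, text):
    """Estimate P(spam | text) with add-one (Laplace) smoothing."""
    vocab = set(counts[True]) | set(counts[False])
    spam_words = sum(counts[True].values())
    ham_words = sum(counts[False].values())
    # Start from the (smoothed) class prior, work in log-odds for stability.
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (spam_words + len(vocab))
        p_ham = (counts[False][word] + 1) / (ham_words + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

# Made-up training data.
training = [
    ("win free money now", True),
    ("cheap pills win prizes", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]
counts, totals = train(training)

# The classifier only assigns probabilities; a legitimate email that happens
# to use "spammy" words would be a false positive.
print(spam_probability(counts, totals, "win free prizes"))
print(spam_probability(counts, totals, "meeting at noon"))
```

With so little data the probabilities are driven almost entirely by a few word counts, which is exactly why more data makes the statistics look better without ever making the method perfect.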
AI ends in Skynet.
It doesn't depend on today's definitions.
The military is already building modules of internet-connected armed devices and methods.
So this infrastructure is already growing.
I thought that storyline was mind-blowing, and people like to have it as a novel so they can point that information out to one person at a specific time. The Circle was nothing compared with that.
In my opinion they are doing a shitty job at basic things, like auto-rotate on a camera, which is often wrong or undesired. There are other examples of "simple" things too. Trying to guess what I need or want is not going to be easy :)
I think in the "auto auto" AI car, a human has to be responsible for damage, injury and death. I worry that nobody is, or will be, responsible for shit.
It would be good to know the programming before getting in the buggy. Like if the car is in a predicament where it has to crash into another car or cars, what logic does it use to decide which car to destroy? Or if a collision is unavoidable and it has to choose which person to kill, how does it decide? What I am hearing from accidents is that it's always the non-computer's fault; if everyone were running on Google logic there would be no problems.
Like oops, we had some software bugs and accidentally blew up Iceland.
In the May 3rd issue of Science, an article is dedicated to the reproach that AI functions like medieval alchemy: one does not really know what one is doing, but turns a few screws here and there until an algorithm achieves the desired result.
Yeah, that's one of the problems. And it's even worse, because Google has already developed an AI to develop smarter AIs (as a human can):
https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/?set=603387
This article is from Jan 2017.