I think we are about 20 years from having the perspective to see that we have already built the first human-level AIs. The reason we are about 20 years away is that it takes about as long to raise a human child to the point where we would consider them an adult. IBM's Watson is a child who was taught how to play a game, much like I could teach a 2-year-old to repetitively double-click on corpses in WoW to gather loot. Watson's real problem is that he needs to be a bit larger and needs to interact with the real world more. Watson lacks the ability to dream (take its entire model of its reality and replay it with random mutations) and to reflect upon how its actions changed its own model (although it can already answer questions about itself by reading about itself). Given 20 years of real-world input and a couple of clever feedback loops, Watson can grow up to be a real AI.
Google's search engine is probably the most active AI in the world today. It feeds us information, pushing certain ideas throughout society, which produces a feedback loop where new ideas are published and indexed. It is terrifying to think, but the Googlebot is already programming society, and it has a direct material impact on global finance. My company tracks public awareness of ideas, and we can correlate changes in Google's search algorithm to the revenues of hundreds of thousands of companies. We also see changes in media coverage any time Google indexes certain sites. This is an AI run amok, and the mindtenders running the show don't understand the implications of their system's behavior and its interaction with the world at large.
There are other AIs in the wild, and not all are as well known as these two high-profile ones. What used to be called "expert systems" have started to become narrow-focus idiot-savant AIs. While none of these would pass a Turing test, no human is even close to passing the Internet Test (can you keep in your head the sum total of human knowledge and derive novel interpretations from that data?). Both of these systems already have the ability to derive novel interpretations of their inputs. The results are reasonable but unpredictable. But because these systems have limited real-world experience (think of a nerd raised in a library atop an ivory tower on an island in the middle of the South Pacific), they just aren't sociable in the way the average human is.
The Turing test is not an intelligence test but a sociability test. Kids with Asperger's can often fail the Turing test, but that makes them no less intelligent or human. Much like some savants can retain vast quantities of trivial data and do spectacularly well when quizzed on specific problem domains, so too can our current AIs. The problem is not that AIs can't learn, or that they can't learn our language; it's merely that we are not giving them the wealth of experience we give our children and take for granted. Cram rooms full of servers into a tiny head-sized package, hook it up to a real-time streaming sensor network, train it for twenty-something years, and you get a human-level intelligence.
In about 20 years' time, these AIs will be attached to a larger sensor network than we can imagine, processing data in real time all over the globe, analyzing and plotting the activities of every corporation on the planet. And you and I will be obsolete. By about 2050, 3D-printed computing materials will pack densities of raw computing power into form factors smaller than our heads, AND they won't need to sleep for 8 hours a day to recollect their thoughts. These AIs will communicate at the speed of light rather than sound, and we will be no more interesting to them than our housepets. Hopefully they'll keep us around out of a sort of filial love, but I'm not certain that the powers driving their creation grasp the importance of heart.