An abridged version of this post first appeared in Charged Newsletter this week – but I thought you might find it useful.

Google I/O is usually a huge event focusing on consumer technology like phones, but 2017's edition showed that the company is serious about bringing AI to the masses in real, practical ways, not just gimmicks. Google held its annual developer conference this week, and the focus on its progress in AI was evident – it was the center of everything. The new Google AI website shows just how many public projects it's working on, which include hardware, and they're accelerating at an astounding rate.

Google Lens, an app for your phone, identifies objects in the real world if you point your camera at them. It's not lost on me that it's just like the Hotdog or Not app from this week's Silicon Valley episode, but it is legitimately useful – one demo showed the app detecting the type of flower the user was pointing it at. It's a very similar concept to Google Goggles, an app from years ago that could search real-world objects, but it takes the idea to the next level thanks to machine learning. What's interesting about Google Lens isn't that it's a standalone app, but that Google's strategy is to integrate the technology into its other apps. It's coming to Google Photos first – which has a whopping 500 million active users – so you can search the objects and places inside the photos themselves, then to Google Translate and Google Assistant too (the company showed a neat trick where you can point the camera at your router's WiFi password and have it automatically saved).

Google's AI work is finally reaching a point where it's not a gimmick, and the projects it's working on are not trivial. Another project it announced is the Cloud TPU, which could be poised to change the AI game altogether. Cloud TPU is a custom-designed processor built for training neural networks and actually using them in production. It's a huge deal, because it brings training time down from a day or more to a matter of hours – changing the way it can be used at scale. The chip won't be on sale – for now – but it will be available for use through the Google Cloud platform. With optimized processing power quickly becoming available for technology like Google Home, it's no wonder mis-recognized words have been cut in half in just a year.

We're at an interesting crossroads in technology right now. With tech giants competing on AI at a high clip, innovation is only going to speed up. Just two years ago, the idea of a highly accurate voice assistant sitting on the bench in your home was absurd, but it's quickly becoming normal. What contrasts here is Apple's approach to AI: Google is accelerating away incredibly quickly because it's not restricting AI processing to on-device like Apple does. We'll probably see more on this front at WWDC next month, but Apple's approach means it doesn't have enormous data sets to process like Google does, and that is clearly resulting in slower progress. Siri is a great example of this – right now it looks incredibly basic compared to Google's Assistant. I'm constantly reminded of this great post on understanding AI's impact on the world at large, and it really feels like we're standing on the precipice where it's about to accelerate away from us incredibly quickly.