Phiar raises $3 million to develop computer vision and deep learning navigation app

Machine learning and augmented reality provider Phiar has announced a $3 million seed funding round to develop a computer vision and deep learning navigation app for smartphones, slated to launch in app stores in mid-2019. Unlike traditional GPS apps, Phiar's app shows the driver's real-world surroundings augmented with an easy-to-follow painted path.
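
To make the idea concrete, here is a minimal sketch of the core mechanic behind an AR "painted path": project known 3D route waypoints into the camera image and draw them over the live frame. This is not Phiar's actual implementation; the camera intrinsics, waypoint values and `draw_path` helper below are hypothetical placeholders.

```python
# Illustrative sketch only -- not Phiar's implementation. Camera intrinsics
# and route waypoints are hypothetical placeholders.
import numpy as np
import cv2

# Hypothetical pinhole intrinsics for a 1280x720 dash-mounted camera.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def draw_path(frame, waypoints_cam):
    """Overlay a route polyline on `frame`.

    waypoints_cam: (N, 3) route points expressed in the camera frame
    (x right, y down, z forward), e.g. derived from GPS plus pose tracking.
    """
    pts = waypoints_cam[waypoints_cam[:, 2] > 0.5]      # keep points in front of the lens
    proj = (K @ pts.T).T                                # pinhole projection
    px = (proj[:, :2] / proj[:, 2:3]).astype(np.int32)  # perspective divide -> pixels
    cv2.polylines(frame, [px.reshape(-1, 1, 2)], False, (255, 128, 0), 12)
    return frame

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a live camera frame
    # A straight-ahead path on the road surface, 1.5 m below the camera.
    path = np.array([[0.0, 1.5, z] for z in np.arange(2.0, 40.0, 2.0)])
    cv2.imwrite("overlay.png", draw_path(frame, path))
```

In a real app the waypoints would come from the routing engine and the live vehicle pose, and the path would be drawn onto the camera feed rather than a blank frame, but the projection step is the same.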

The seed round, co-led by the Venture Reality Fund and Norwest Venture Partners, saw participation from Anorak Ventures, Mayfield Fund, Zeno Ventures, Cross Culture Ventures, GFR Fund, Y Combinator, Innolinks Ventures and Half Court Ventures.

The Silicon Valley-based company, founded by computer vision researcher Dr. Chen-Ping Yu and deep learning expert Ivy Li, launched at Y Combinator in early 2018. The founding team brings backgrounds in software optimisation, deep learning, 3D reconstruction and AR design from Microsoft, Apple, Shutterstock and VMware.

Marco DeMiroz, General Partner of the VR Fund, said: “AI-driven immersive and spatial technologies are becoming mainstream. Phiar’s breakthrough deep learning technology and AR navigation app provides not just value to users, but also sets an example of the kind of incredible potential at the nexus of AI and AR.”

On similar ground, Nvidia has developed an artificial intelligence (AI) system that turns real-life video into 3D renders, simplifying the creation of games and virtual reality experiences. At the NeurIPS AI conference in Montreal, Nvidia set up a dedicated area to demonstrate the technology, which ran on the company's DGX-1 supercomputer and so is not achievable on the average household computer, at least for now. The AI took footage from a self-driving car's dashcam, extracted a high-level semantic map using a neural network, and then used Unreal Engine 4 to generate the virtual world.
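
The first stage of such a pipeline, turning a dashcam frame into a per-pixel semantic map, can be sketched with an off-the-shelf segmentation network. This is a hedged stand-in: Nvidia's actual model is not reproduced here, and the pretrained torchvision DeepLabV3, preprocessing values and input file name below are assumptions for illustration.

```python
# A hedged sketch of the first stage only: per-pixel semantic labelling of a
# dashcam frame with an off-the-shelf torchvision model. Nvidia's actual
# network is not reproduced here; the input file name is a placeholder.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("dashcam_frame.jpg").convert("RGB")  # hypothetical input frame
with torch.no_grad():
    out = model(preprocess(frame).unsqueeze(0))["out"]  # (1, classes, H, W) logits
labels = out.argmax(1).squeeze(0)  # (H, W) map of class IDs (car, person, bus, ...)
# A renderer (Nvidia used Unreal Engine 4) could consume a map like `labels`
# to lay out and texture a matching virtual scene.
```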
