Mapbox Ushers In The Next Generation Of Mapping With New SDKs

Forbes
By Anshel Sag
July 6, 2018

Right now, a lot of people are very excited about the future of technologies like AR, VR, AI, and autonomous vehicles. However, as I’ve written before, most of these technologies are relatively useless without contextual awareness. I have also written in the past about the importance of image sensors and how they enable AI and autonomous systems to better understand the world around them. Combining location awareness and vision is incredibly difficult and is fundamentally what enables app developers to anchor digital assets in the real world for augmented reality. There are currently only two companies capable of doing this: Google and Mapbox. Today I wanted to talk about the lesser known of the two.

Mapbox announces new SDKs and partnerships

Mapbox has a leg up on Google in that it provides more flexible options for linking image sensors and contextual awareness. Just in the last month, Mapbox announced numerous partnerships and initiatives to further improve location awareness. First, Mapbox announced a partnership with the world leader in mobile chip design, Arm, to implement its new Vision SDK. Mapbox claims the Vision SDK will provide a fusion of visual and location data to improve the accuracy and overall experience of AR. The Vision SDK is arguably one of the biggest announcements out of Mapbox in quite some time: it expands the company’s capabilities while also giving its developers more tools to work with when it comes to live location. It will help developers enable more robust AR in places like automotive navigation. The more developers utilize Mapbox’s platform in their applications, the more Mapbox will thrive.

To read the full article, visit Forbes