ARKit 2.0 – For Humans

At its annual developer conference this week, Apple announced ARKit 2.0, which will be part of iOS 12, its next update to the operating system that the iPhone and iPad rely on. We’ve been exploring the betas of the new software, examining the SDK, and trying out the sample projects, and have a few thoughts to share.

The new version of ARKit adds a number of features that will be of interest to AR developers and companies that are using AR:

– Persistent AR Experiences: With earlier versions of ARKit, your experiences would only last as long as you kept your app active. Once you moved on to something else, you couldn’t come back to your work in progress. ARKit 2.0 adds the ability to save a session in progress and come back to it later with your augmented objects still in the same place (see the code sketch after this list). Users can now start designing their living room decor in the morning and pull their work back up to share with their housemates that evening. This makes AR a viable tool to do real, substantive work — creating persistent designs and stories — rather than just providing fun but short-lived experiences.

– Shared AR Experiences: AR has previously been a solitary activity. There was no way for the items in your AR session to be visible to others using different devices. In ARKit 2.0, the same mechanisms that allow persisting an AR session also provide the ability to share it with others. Multiple designers can style a car together, each exploring and making changes to the same vehicle while examining it from her own viewpoint. Or the florist, caterer, decorator, baker, and photographer can all plan out the space for a wedding reception, combining their individual elements in a shared environment, ensuring the “Just Married” banner doesn’t end up in the cake.

– More Flexible Image Tracking: ARKit 1.5 added the ability to identify static images, like posters, murals, or signs in your environment. With ARKit 2.0, an app can see and respond to images that move around — boxes, magazines, books, etc. — making it useful for augmented books or finding the gluten-free options among all those cereals on the supermarket shelf.

– Object Tracking: In addition to flat images, the new version also tracks 3D objects in a scene and responds to them. With robust object tracking, maintenance technicians will be able to use their ARKit device to quickly identify engine parts and pull up technical references and interactive guides for common maintenance procedures. In a retail setting, workers will be able to conduct inventories simply by showing the products on a shelf to an iPad, which will recognize and tally all of the store’s stock. (Both examples are a little beyond the beta ARKit’s current capabilities, but the technology will doubtless continue to improve at a rapid clip.)

– Reflection Texturing: One of the subtle details that contribute to making AR objects look real is their reflection of the environment around them. Creating accurate reflections is extremely difficult for a variety of technical reasons. Apple added some clever engineering to ARKit 2.0, combining spatial mapping with image capture to generate the reflections it can know about and using machine learning to fill in the remaining gaps. Your shiny pretend teapot can now accurately reflect the real banana sitting right next to it.

– In addition to the new capabilities, ARKit 2.0 also adds a common file format for storing and sharing AR content: the awkwardly-named USDZ. Apple is baking support for this format deep into its operating systems, so that when you visit a web page with an embedded USDZ model of that juicer/blender you’ve been considering, you’ll be able to view it from all angles on the web page or to switch to an AR view to see what it would look like right on your kitchen counter.
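
For developers, the save/restore flow behind those persistent (and shared) sessions is pleasantly small. Below is a minimal Unity sketch based on our reading of the world-map support in the ARKit plugin betas – the method and field names follow the plugin’s world-map sample as we recall it, so treat them as assumptions that may differ in your plugin version:

using UnityEngine;
using UnityEngine.XR.iOS;

public class PersistentSession : MonoBehaviour
{
    UnityARSessionNativeInterface m_session;
    string m_path;

    void Start()
    {
        m_session = UnityARSessionNativeInterface.GetARSessionNativeInterface();
        m_path = Application.persistentDataPath + "/session.worldmap";
    }

    // Serialize the mapped space (feature points plus anchors) to disk.
    public void SaveSession()
    {
        m_session.GetCurrentWorldMapAsync(worldMap =>
        {
            if (worldMap != null) worldMap.Save(m_path);
        });
    }

    // Restart the session seeded with the saved map; once the device
    // relocalizes, previously placed objects reappear where they were left.
    public void LoadSession()
    {
        ARWorldMap worldMap = ARWorldMap.Load(m_path);
        ARKitWorldTrackingSessionConfiguration config = new ARKitWorldTrackingSessionConfiguration();
        config.planeDetection = UnityARPlaneDetection.Horizontal;
        config.worldMap = worldMap.nativePtr; // assumption: field name per the plugin's world-map sample
        UnityARSessionRunOption runOption =
            UnityARSessionRunOption.ARSessionRunOptionResetTracking |
            UnityARSessionRunOption.ARSessionRunOptionRemoveExistingAnchors;
        m_session.RunWithConfigAndOptions(config, runOption);
    }
}

Sharing a session uses the same ARWorldMap object, sent over the network to a peer instead of written to disk.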

While many of the features Apple built into ARKit 2.0 are ones we’ve already seen in Vuforia and ARCore (Google’s analogue to ARKit on Android), Apple’s ARKit tends to work exceptionally well thanks to the company’s control over both the hardware and the software it runs on. In addition, for iOS developers who are used to Swift/Objective-C and UIKit, ARKit provides a very capable solution with a familiar API.

We’re excited about the possibilities that ARKit 2.0 brings, and are already busily exploring how best to bring these capabilities to our customers!

 

Weekly Roundup: May 3, 2018

In Social News…

There are a lot of updates going on in the VR/AR world, but most significantly, standalone headsets are becoming more real in the US: following the Facebook F8 conference this past week, the Oculus Go is now on sale.

Standalone devices have long been hailed as the hardware that will truly change VR adoption by consumers. The need for a specific smartphone, a gaming computer, and/or a large amount of flexible walking space prevents the average person from consuming VR content – to say nothing of price tags. While reports of the Oculus Go have circulated for months, F8 made it official: the Go is here, and it already has some critical relationships (Netflix and Hulu, to name a couple).

A feature I haven’t heard too much about, but one I think is critical for the mass market, is the ability to personalize the headset with prescription lens inserts. This may seem like a small feature, but changes like this, and the flexibility to customize devices generally, will appeal to more consumers.

In other social news, AR face filters are continuing to make waves. Last week, Snap announced that it was opening its development platform to include AR face filters. It also introduced applet-like mini experiences (Snappables). Instagram, not one to be left behind, is following suit. To counter Snap’s engagement with brands, Instagram is utilizing AR filters in conjunction with brands in a different way.

Also at F8, Facebook introduced AR capabilities for Messenger, aimed specifically at brands – capabilities that until now were housed in Facebook Stories. Brands are among the heaviest users of Messenger bots for communicating with consumers.

New “hardware” (a term I’m hesitant to use) continues to infiltrate the news – most notably, the “Force Jacket,” from a joint effort between Disney Research, MIT Media Lab, and Carnegie Mellon University. In creating interaction and imposing specific physical sensations on viewers, this is a far leap beyond the 4D movie experiences of my childhood.

Leaving the Social Networks behind…

Setting the social world aside: today, Vive released three new SDKs (currently in early access) related to interactions within the Vive. It seems that Vive is moving further toward pass-through AR and audio AR/VR. While it’s clear that the Pro is a more sophisticated device with a sharper display, these audio updates will continue to create more realistic environments for participants.

The New York Times is steadily increasing its AR usage – most recently in exploring the red planet. The Times is clearly pushing itself into the digital age, seeing AR as a realistic future for engaging with content. The Weather Channel is also attempting to distinguish itself, utilizing AR to depict weather movements more realistically. This isn’t exactly news, given that the station toyed with AR in 2015, but it’s certainly making waves currently.

Weekly Roundup: February 8, 2018

This week, we saw two different patents from Apple published – both of which are somewhat unsurprising (which might make them surprising moves for Apple). One is for a VR/AR headset display. The headset’s primary goal is VR, but the patent focuses on optical designs and raises the question of how the headset could be utilized in AR applications. The other published patent, a stylus for the air, certainly focuses more on AR. In truth, I don’t know anyone who purchased the Pencil (which only works with the iPad Pro), but reviews of the smart stylus are generally favorable, even if the price was a deterrent. Products of the past aside, a stylus for – literally – our surroundings seems extremely next-level. While the conversation around the newly patented stylus revolves around creating 3D objects in the air, its future as an AR tool seems quite obvious.

Returning to a time when Ghosting isn’t a bad thing

Urban Dictionary and millennials changed the meaning of “to ghost” or “ghosting” someone. If you’re unfamiliar, read any recent article on dating. Not purposefully changing our vernacular, Vreal lets users experience VR games played by others, with the other players rendered in a shaded-out, ghostly fashion. Given the huge culture of watching other people play video games, it’s no wonder this has finally become an option in VR.

VR Motion Sickness? An element of the past.

A huge consideration in VR content is how to communicate your environment to your audience without making them ill. In some ways, this closely parallels basic elements of film and cinematography. In VR, you want viewers to feel immersed, and one way to accomplish that is through the field of view you provide them. What MonkeyMedia proposes is for viewers to navigate themselves through an environment with their body language – thus reducing the sensory disagreements that can cause motion sickness.

AR in Mobile

8th Wall rightfully points out that for most consumers, their first (and primary) interaction with AR will be on mobile. At this moment, ARCore and ARKit are limited in terms of reach – only so many devices out there are able to run the platforms. In a move to equalize devices and increase reach, 8th Wall uses “…computer vision to enable six degrees of freedom tracking, light estimation, and surface detection capabilities for apps on iOS or Android.” As more people age out of their current devices (planned obsolescence, anyone?), this will become less of an issue – but widening support is critical for increasing AR’s reach now.

Last, but not least, grab your Cardboard and join the fun: Winter Olympics VR

Weekly Roundup: February 1, 2018

“Mixed Reality”

In a trippy, pixelated fashion, Imverse captures your body and its movements, recreating them in a virtual space. Though the effect is strange and clearly produced, it does hint at a future without body trackers, in which your space records your movements. It’s easy to discount technologies that have been discontinued, but as our engineer proclaimed, “See, look! The Kinect is still useful!”

In other mixed reality news, Facebook is developing another response to the problem addressed above – how do we accurately capture body movements? What it comes down to is computer vision and the ability to predict and confirm the thousands of potential movements. The tech is moving forward, but more work is still to come.

Too impatient to wait for the above companies to figure it out? Look to the Vive Pro, which can already track your hands without trackers, using the depth sensor on the headset. While the capabilities are still limited, it’s certainly a step forward.

While the Hololens is effectively an expensive marketing tool, Trimble is trying to reinvent its image as a hard hat for workers. Can you imagine the Hololens replacing your typical protective eyewear? I can’t imagine it’d protect me from actually dangerous debris, but I’ll wait for them to prove me wrong.

AR For Everyone! (You get AR, and you get AR!)

Even though Tango is dead, Google is declaring AR for the web. A potential reason for this decision and declaration is that the web is one of Google’s main offerings. With AR applications taking off, Google needs to compete. Especially with Poly, this puts Google in a prime position to offer AR access to the masses.

Just in time for the Olympics, the New York Times announced its future use of AR within its content. Embedded in future news stories, AR content will be inlaid like other images, allowing readers to interact with the AR object in their own space.

Misc.

I love a good patent (thank god for Google Translate), and HTC is no exception. Last week, WIPO published an HTC patent for a mobile VR “accessory and lens system.” The patent illustrates a phone case with an attached Cardboard-like headset.

H-E-B, one of the largest grocery chains in the US, will now pilot Vuzix smart glasses in its manufacturing operations. AR within industrial enterprises is clearly coming, and glasses are the ideal form factor.

The State of AR: 2018

Augmented Reality is in the midst of its moment in the sun. While Virtual Reality has had a death grip on the hype spotlight since Facebook acquired Oculus in 2014, AR has been oddly quiet. According to Gartner’s Hype Cycle, AR is in the “Trough of Disillusionment,” 5–10 years away from its plateau, whereas VR sits on the “Slope of Enlightenment,” 2–5 years from its plateau. We believe this accurately reflects mainstream acceptance of head-mounted AR – but mobile-based AR and other similar platforms (e.g., heads-up displays) are delivering value today across many industries under the umbrella of technologies we identify as “AR.”

AR has crept into our daily lives without the fanfare some would expect, via technologies not traditionally classified as AR: car-based heads-up displays, photo filters, and the like. Indeed, much of the AR research that companies such as Apple, Microsoft, and Google have invested in remains mostly under wraps. Shrouded in secrecy and patent filings, Magic Leap has been steadily building a mountain of hype around a new generation of AR products – products that promise to blend the virtual with the physical so convincingly that they compete with our own biology’s ability to tell them apart. They claim an ability to deliver a world where data is no longer restricted to a glass rectangle but is instead woven seamlessly into our environment. These visions of the future may seem distant and lofty, but we are already experiencing their early genesis, and we will soon integrate these capabilities into our everyday lives.

The Back Story

For years, a single type of AR reigned supreme. Marker-based AR, known colloquially as “QR code AR,” has been around since the turn of the century. The idea is simple: a camera points at a fiducial marker (e.g., a QR code), the program finds the target it was looking for, and it displays a 3D model – reorienting the model to match the pose implied by the camera’s perspective of the marker. It performs these functions as fast as the camera captures frames, updating the position, rotation, and scale of the virtual object in real time. All of this conveys the illusion that the object is in front of the camera, but the method lacks true spatial awareness of the space surrounding the marker. For a virtual object to persist in the physical world, the system must understand the physical world’s spatial properties. Newly developed computer vision (CV) techniques, alongside performance gains in modern computing hardware, have enabled a new class of AR applications that can do just that.
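
To make that loop concrete, here is a minimal Unity-flavored sketch of marker-based AR as described above. DetectMarkerPose() is a hypothetical stand-in for the computer-vision step that a library such as Vuforia would actually provide:

using UnityEngine;

public class MarkerFollower : MonoBehaviour
{
    public Transform virtualObject; // the 3D model overlaid on the marker

    void Update()
    {
        // Once per camera frame: find the fiducial and recover its pose.
        Pose? markerPose = DetectMarkerPose();
        if (markerPose.HasValue)
        {
            // Re-orient the model to match the marker, sustaining the illusion
            // that it sits on the marker in front of the camera.
            virtualObject.SetPositionAndRotation(markerPose.Value.position,
                                                 markerPose.Value.rotation);
        }
    }

    // Hypothetical: returns the marker's world-space pose, or null when the
    // marker is not visible. A real implementation lives in the CV library.
    Pose? DetectMarkerPose() { return null; }
}

Notice that nothing here knows anything about the room: lose sight of the marker and the illusion collapses, which is exactly the limitation markerless AR addresses.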

Enter “Markerless AR”

Spearheaded by the development of the Microsoft Hololens in 2016, Markerless AR has quickly become a movement unto itself. Google experimented with a suite of devices and custom software collectively called Project Tango, which, like the Hololens, used depth sensors to map environments in real time, allowing users to place virtual objects with respect to physical boundaries like actual floors and walls. No longer were markers needed to interact with holograms. However, this hardware proved quite expensive to mass-produce, especially so early in the technology’s lifecycle. The Microsoft Hololens sells for a whopping $3,000 USD, and although Tango devices did make it to market, they were never intended for consumer use. With only a handful of developers able to afford the hardware – and near-zero consumer adoption – Google shut down Project Tango, and the Hololens became little more than a marketing tool for Microsoft (albeit a powerful one!). Everything changed in mid-2017, however, when Facebook, Apple, Google, and Snapchat each announced their own Markerless AR solutions for mobile devices.

While Facebook and Snapchat added world-tracking features to their existing camera apps, Apple developed an entire AR platform from the ground up for iOS. ARKit doesn’t allow for some advanced features that the Hololens supports, such as cross-session persistence (saving room scans and recognizing them automatically) or head-mounted holograms (it’s still a handheld iPhone), but it did effectively eliminate AR’s barrier to entry. For the first time, consumers had instant access to high-quality Markerless AR content. With consumer adoption in hand, developer interest spiked, and the largest market for immersive content ever to exist was instantly created.

2018 and Beyond

Now in 2018, there is more interest in Augmented Reality than ever before. Google has answered Apple’s ARKit with ARCore, and in a few months Magic Leap will release the first consumer-facing untethered Markerless AR head-mounted display. Soon we will be interacting with the world in ways we could never have imagined, dwarfing the creativity of fantasy and science fiction and prompting a query from future generations: “What is a screen?”

Weekly Roundup: January 11, 2018

CES: Why write about anything else?
 
New forms of transportation, enhanced TV screens (fruit roll-up LG anyone?), and of course, VR/AR dominated CES this year. While there is still one more day to go, we do have some thoughts on what has been released thus far.
 
From my informal office poll, the below product updates and announcements are our favorites:
 
The Vive Pro, of course. Already the darling of VR/AR enthusiasts, the Vive got a pro version as well as a branded wireless adapter. This announcement further cements the Vive’s place as top-tier VR hardware. The Pro update includes increased resolution and sound fidelity, continuing the trend of making our virtual worlds more realistic (a trend also visible in the growing wave of haptics startups and offerings).
 
As we’ve been playing with our TPCAST, we’ve found that being untethered is an incredibly freeing sensation. What’s the saying? You’re only as good as your tools – and this release certainly pushes premium VR forward.
 
It’s easy to say this was just an update, not a new product release, but most supporters would beg to differ. From increased comfort to upgraded headphones and more buttons, the Pro is an exciting update.
 
The Lenovo Mirage Solo is certainly another promising push into VR’s future. A powerful standalone headset, the Mirage Solo is clean and simple. At under $400, the headset is already making waves for being considerably cheaper than the other standalone headset presented at CES (the Pico Neo).

While AR glasses are still waiting for their prime time, several were released at CES and give us hope for growth. ASTRI showed glasses with an increased field of view compared to the Hololens (placing more objects around us), while the Vuzix Blade is seemingly a resurrection of Google Glass.
 
As for other notable updates in the AR/VR world, Looxid (the pronunciation is not what it seems) takes the cake with a CES 2018 Innovation Award. A new premium headset from Pimax is coming our way, and though it can create beautiful visuals, it is rather large and probably $$.

The rest of CES? We’re getting closer and closer to my childhood vision of the future – Smart House – with a focus on AI, smart home tech, and appliances.
 
Before the week closes, some state history. Since 1951, Oregonians have not been permitted to pump their own gas, out of concern over spilled fuel. While most other contiguous states permit self-service (New Jersey is the only other state that bans it), only this year did Oregon open up the option for the majority of the state. The result? Memes galore and even a VR experience.

Weekly Roundup: January 4, 2018

That’s a wrap for 2017….

Though we didn’t reach the pinnacle of VR that The Guardian or other sources predicted, 2017 still closed with major updates in the VR/AR/MR/etc. space. This past year we saw an explosion in content, as well as updates to existing hardware. Some of the most notable developments were the Magic Leap announcement and the introduction of ARKit and ARCore. TechRadar even claims that we’re in a ‘second wave’ of virtual reality. From both an entertainment and an enterprise perspective, the market seems to be in a prime position for continued growth and expansion.

…and entering 2018

2018 is here and is already filling up with pertinent announcements and product releases. Everyone and anyone is making note of what’s coming and how it might change the game (perhaps most significant will be standalone VR headsets).

In the midst of announcements regarding both updated software and hardware comes further proof of just how versatile VR/AR is.

One of the biggest fears about eCommerce (and a misstep for many CPG companies) was that consumers wouldn’t buy things they couldn’t feel. This assumption – that we need kinesthetic senses to make the leap from shopping cart, to checkout, to purchase – was proven unfounded (I mean, where do you buy most of your goods/media content/clothing?). And yet eCommerce companies are still looking for ways to integrate a deeper sense of reality into the shopping experience. Enter Amazon, which has submitted a patent for a VR mirror that dresses you in virtual clothes. It’s not a far cry from the dream mirror of every girl, as shown in Clueless.

In other news, the cost-prohibitive Hololens is still one of the main HMDs (head-mounted displays) on the current market. We’ve discussed medical uses before, but Nomadeec has now created a Hololens program to assist first responders and doctors in making tough decisions.

Furthermore, VR experiences as a form of preparation continue to arrive (recall the simulation Walmart created for Black Friday). In one example, VR simulations are becoming part of programs for juvenile inmates who are about to re-enter society as adults.

Last but not least…Ready Player One

While hype has already begun for the VR film of the year (witness the Photoshop snafu of Tye’s leg), 2018 brings us even closer to a film that will likely change the public’s perception and consumption of VR/AR. When we think about VR representation in media, we consider content such as Black Mirror, The Matrix, Tron, etc. While those shows and films laid groundwork for actual technology, Ernest Cline’s novel feels much more familiar: it takes concepts and hardware that already exist – products that are on the shelves – and intensifies them in a grim reality. Like most of our peers, we too are waiting for the ball to drop.

3D Asset Production with Real-Time Rendering

The flexibility of game development engines like Unity and the exponential increases in GPU throughput (thank you, cryptocurrency miners!) represent a revolution in 3D art and animation for manufacturing and other engineering-led industries. Mature product companies design, develop, and manufacture their products using CAD/CAM engineering software. CAD/CAM, or Computer-Aided Design/Computer-Aided Manufacturing, precisely renders product designs within the strict confines of engineering data sets.

When these data sets – representing a complete digital blueprint of a product – are combined with modern manufacturing techniques, they dramatically reduce the cycle time between design and manufacturing. Unfortunately, using those models for print, website, animation, or even augmented/virtual reality via a traditional pre-rendering-based production process is complicated, time-consuming, and expensive. This process can be so arduous that many will create new, customer-facing 3D models from scratch – sacrificing the accuracy and efficiency that should be derived from the original engineering data.

Due to their intricate detail, these 3D assets are typically very large files with massive polygon counts (oftentimes in the millions, which is impractical and unnecessary for real-time engine use). In standard practice, the files are translated into 3D geometry models useful for print and digital media via a lengthy rendering process. Given their size, translating them for their final intended purposes can take significant time and manual labor. Rather than investing in modification and rendering each time we need a new asset, we can lower the polygon count and texture the models so that a real-time environment like Unity can quickly generate specific views, animations, or even interactive 3D experiences.

To get the CAD models to a point where they are optimized for real-time display, you need to reduce the polygon count. This process can produce anomalies that compromise the integrity of the surface quality, but these can be solved by creating normal maps using the original high-poly model as a reference. When you apply these normal maps to the low-poly model, its surfaces render correctly despite the far lower polygon count. If the downsampling is well managed and performed by a skilled artist, the visual fidelity can match and potentially exceed traditional production quality in a tiny fraction of the time.
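
On the engine side, wiring the baked map back onto the decimated model is a one-time material setup. Here is a minimal Unity sketch, assuming the built-in Standard shader (whose normal-map property is _BumpMap); the texture and renderer references are illustrative:

using UnityEngine;

public class ApplyBakedNormalMap : MonoBehaviour
{
    public Texture2D bakedNormalMap; // baked in a DCC tool against the high-poly source
    public Renderer lowPolyRenderer; // the reduced-polygon, real-time-friendly model

    void Start()
    {
        Material mat = lowPolyRenderer.material;
        mat.SetTexture("_BumpMap", bakedNormalMap); // Standard shader's normal-map slot
        mat.EnableKeyword("_NORMALMAP");            // enable normal mapping on this material
    }
}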

This drives the cost of high-quality 3D work down, bringing the capability to deliver 3D content and immersive experiences to companies and product lines that couldn’t previously justify the investment. And by leveraging the CAD data as a starting point for real-time rendering via game engines, we net an increased level of accuracy and fidelity that does not compromise the original intent of the object.

With translation complete, the result is a versatile, accurate 3D geometric model. Surface materials are applied to the geometry that allow plastics to look like plastics and metals to behave as metals. Lighting, camera placement, and an entire range of other artistic decisions provide complete visual control of imagery or animation – all in real time. This allows artists and designers to experiment with immediate feedback, creating deliverables for multiple end applications with efficiency.

How-To: Make an AR Birthday Card

Last year I started making digital birthday cards for my niece.  As she lives far away, these cards are a way I can share with her what I do for a living given that I only get to see her once a year at most.  

I start the process by asking her mother what she is into this year – pop culture, animals, etc. Last year she was really into triceratops, so I took an old dinosaur model that I had made years before for an unrealized project and made an animation of it leaping out of an opening box. At the time I was freelancing, so I had time to make it more elaborate.

This year, her favorite thing is the pangolin, an armadillo-like creature featured in a meme she likes.

I was limited on time this year, and since this was something I wanted to animate and run on the older-model phone my brother-in-law has, I knew I had to make the animation as quickly and economically as possible.

First, I saved a variety of pangolin pictures off Google Images for reference. Then I modeled a very basic low-poly pangolin in Silo.

From there I imported the model into ZBrush, where I was quickly able to add scales and some minor details to the creature.

I then took the high-poly and low-poly models and brought them into Substance Painter 2018 to bake the normal maps so I could work on the textures.

When I brought the pangolin into Maya, I initially thought I could use Maya’s Quick Rig function to give the creature bones, but then I realized I would have to repose the model for the application to detect the limbs, so I reposed the creature. Even so, Quick Rig ultimately didn’t work the way I wanted it to, so I ended up making my own skeleton for the creature.

Since I am using Vuforia for this AR project, I needed a marker for the camera to detect before it starts playing the animation. I decided to make the marker a simple rock that my brother-in-law could print out and hang on the wall. I took an old rock model that I had made for another unrealized project and started animating the pangolin with that in mind.
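
The trigger itself is only a few lines. Here is a minimal sketch assuming Vuforia’s Unity API of that era (TrackableBehaviour and ITrackableEventHandler); the Animator reference and trigger name are illustrative, not the project’s actual ones:

using UnityEngine;
using Vuforia;

public class PlayOnMarkerFound : MonoBehaviour, ITrackableEventHandler
{
    public Animator pangolinAnimator; // hypothetical Animator with a "Waddle" trigger

    void Start()
    {
        // Listen for tracking-state changes on the rock image target.
        GetComponent<TrackableBehaviour>().RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        // Start the animation the moment the printed rock is detected or tracked.
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED)
        {
            pangolinAnimator.SetTrigger("Waddle"); // hypothetical trigger name
        }
    }
}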

The pangolin peeks out from behind the rock and then musters up the courage to waddle its way around the rock, sheepishly waving at the viewer. It was a very quick and simple project to complete, and my niece really liked it – so I won some awesome uncle points. 🙂

Weekly Roundup: December 7, 2017

Health and Fitness

Ok, ok, it’s not VR/AR related, BUT the FDA just approved an EKG reader compatible with the Apple Watch. While knowing your heart rate is helpful (a key feature of many fitness trackers), more focused health data will certainly expand the potential audience for smartwatches. This could increase purchasing among older consumer groups, and perhaps bring insurance incentives as well. This release and approval set a tone for FDA-approved medical devices on our persons, which will surely shape other extensions and devices.

Furthermore, a new study from Tel Aviv University suggests that VR can help improve brain functionality in patients who suffer from Parkinson’s. By coordinating a virtual experience with a treadmill, it combines cognitive rehab with motor functionality.

In what I’d call a highly anticipated use case for medical VR, a doctor in France has performed the first VR-assisted surgery. While surgeries have been broadcast in VR before, this is the first occasion on which the surgeon wore a headset during the operation – both to project 3D images onto the patient and to connect with SMEs in other countries.

Last, but not least, stay fit by using this little guy to lead you around. He may lack muscle tone and (probably) some bones, but he’s having such a good time! 

The Holidays, a prime time for nostalgia

Nothing gets more meta than playing a retro game, on a retro gaming system, as a game. EmuVR released videos of its experience, which is essentially reliving your teenage years (for better or for worse). The experience lets you return to “your” childhood home and simulates the game-playing ritual of the past. I wonder whether the mocked-up bedroom will include food trash, clothes on the floor, etc. Based on the amount of detail they’ve poured into it so far, I wouldn’t be surprised. Retro is clearly in (self-promotion plug: Starfox).

In other VR/AR News…a quick run down, bullet style.

 

The Snapchat Disappointment (and why you, yes you, need AR)

For non-gamer millennials, such as your writer here, AR is slowly seeping into the way we interact with technology. VR can sometimes feel like a distant future, something we see as inevitable (especially with those terrifyingly real Black Mirror episodes) but not quite here impacting the masses. But AR? It’s the future that’s already happening.

Cue the dancing hotdog.

Snapchat and its dancing hotdog are a critical step in understanding what AR is, in its simplest form, for mass consumption. In my experience, and I think for many people like me, AR most likely found its way into my life earlier. But only now was it incredibly clear that I was experiencing something that was AR. Incredibly silly and honestly useless, the hotdog still made its mark.

And while Snapchat has faltered time and again – from Spectacles, to low usage (hello, Instagram and Facebook Stories), to disappointing quarterly sales calls – it still marks a start to general consumer usage of AR. It introduced a sliver of AR’s possibilities to those who otherwise wouldn’t have cared to Google it. And perhaps that’s what is disappointing about Snapchat. Millennials, and the beloved under-25 consumer group, were captured by Snap – a ready market primed to blow up AR and discuss its merits – and yet Snap squandered them. It repeatedly made a difficult-to-use app, with limited applications outside of its interface, and now has a limited audience.

I’m not the first to write about and contemplate Snap’s failures and missteps as a tech company. But I think it’s critical to recognize that for many of Snapchat’s users, these filters are their largest interaction with AR. Perhaps they’ve been to Harry Potter World or played Pokemon GO for a week, but I’d argue that Snapchat’s hotdog is just as critical to the mass adoption of AR as those gaming experiences.

Whether you like it or not, AR will soon infiltrate your life. Why not be an early embracer? The Harvard Business Review recently commented on the space, noting that our data and work are stuck in a 2D world with limited functionality. If you and I are 3D, shouldn’t our processes be, too?

AR is quickly becoming recognized as the technology that will really take the market by storm. One potential reason for this shift is that AR needs no additional accessories (as opposed to VR). With ARKit and ARCore, and a multitude of other changes in the devices we use daily, AR is far more accessible. It’s way past time to dismiss this as a fad destined to meet the demise of antiquated devices and services. AR is here and it’s here to stay – join us.

More reading/learning:

Study: Global AR Market will Grow 65% a year until 2023

The Reality of VR/AR Growth

A.I., Big Data, and AR are Already with us and Growing

Why Investing in Obstacles to Augmented Reality Today Could Result in Billions

Apple Bets the Future of Augmented Reality will be on Your Phone 

The Creation of “Starfox AR” – VR Austin Jam

Since it was first announced, I have been interested in experimenting with the iPhone X’s fancy new TrueDepth front-facing camera. As soon as I got my hands on one, I downloaded the Unity ARKit Plugin and started digging into the new face tracking APIs. The creepy grey mask in the example project immediately reminded me of the final boss from Starfox (SNES), Andross. I found this video of the final battle from Starfox and thought it would make an awesome face tracking experience. This coalesced just days before VR Austin Jam 2017 was set to begin, giving me the perfect idea for my Jam entry.

I knew going into the weekend that the secret to a successful hackathon is limiting scope. So I decided to focus on getting the face-tracked Andross rig working first while my dev partner, Kenny Bier, focused on game mechanics. Luckily, Jeff Arthur (Banjo’s talented 3D artist) supplied me with the low-poly Andross model, Starfox’s Arwing, and the door-like Andross projectiles before the Jam began, so I had assets to work with.

This Unity blog post got me started by explaining at a high level how to access the iPhone X user’s face position, rotation, and blend shape properties. Basically, you begin a new ARKitFaceTracking session, subscribe to the FaceUpdated event, and access the blend shape values from within that handler using the ARFaceAnchor’s blendShapes dictionary.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS;

public class AndrossFaceTracker : MonoBehaviour
{
    public GameObject andross; // the Andross head mesh with blend shapes imported

    private UnityARSessionNativeInterface m_session;
    private ARFaceAnchor mAnchorData;
    private Dictionary<string, float> currentBlendShapes;
    private int mouthOpenInt;
    private float jawOpenAmt, l_eyeOpenAmt, r_eyeOpenAmt;

    void Awake()
    {
        m_session = UnityARSessionNativeInterface.GetARSessionNativeInterface();
    }

    void Start()
    {
        Application.targetFrameRate = 60;
        ARKitFaceTrackingConfiguration config = new ARKitFaceTrackingConfiguration();
        config.alignment = UnityARAlignment.UnityARAlignmentGravity;
        config.enableLightEstimation = true;

        if (config.IsSupported)
        {
            m_session.RunWithConfig(config);
            UnityARSessionNativeInterface.ARFaceAnchorAddedEvent += FaceAdded;
            UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += FaceUpdated;
            UnityARSessionNativeInterface.ARFaceAnchorRemovedEvent += FaceRemoved;
        }
    }

    void FaceAdded(ARFaceAnchor anchorData) { FaceUpdated(anchorData); }
    void FaceRemoved(ARFaceAnchor anchorData) { }

    void FaceUpdated(ARFaceAnchor anchorData)
    {
        mAnchorData = anchorData;
        currentBlendShapes = anchorData.blendShapes;

        SkinnedMeshRenderer mesh = andross.GetComponent<SkinnedMeshRenderer>();
        mouthOpenInt = mesh.sharedMesh.GetBlendShapeIndex("MouthOpen");

        // Open Mouth
        currentBlendShapes.TryGetValue("jawOpen", out jawOpenAmt);
        mesh.SetBlendShapeWeight(0, jawOpenAmt * 100);

        // Left Eye Blink
        currentBlendShapes.TryGetValue("eyeBlink_L", out l_eyeOpenAmt);
        mesh.SetBlendShapeWeight(1, l_eyeOpenAmt * 100);

        // Right Eye Blink
        currentBlendShapes.TryGetValue("eyeBlink_R", out r_eyeOpenAmt);
        mesh.SetBlendShapeWeight(2, r_eyeOpenAmt * 100);
    }
}

Once you have the iPhone X blend shape hook-ins, you route them to the imported blend shapes that correspond to your model. As a test, I imported a fully rigged model from the Unity Asset Store and got the mouth flapping.

*IMPORTANT: ARKit’s blend shape values operate from 0-1 but your mesh’s blend shape weights operate from 0-100, so remember to multiply by 100 or you won’t see any animations*

Next, I had to rig my own model to be driven by these values.

I knew very little about creating blend shapes going in, but I found this article that explains it fairly well. Normally, you would rig an entire face before creating the blend shapes so that the animation would render realistic muscle movement. However, due to the low-poly nature of my Andross face, I could skip the rigging step and just manipulate the individual vertices by hand. I created three blend shapes: left eye closed, right eye closed, and mouth open.

Once I exported the face mesh out of Maya with the blend shapes attached and imported it into Unity, I could manipulate the blend shape weights in the editor. 

After swapping out some variables, I replaced my example face rig with Andross and got my first retro game boss animoji working as intended.

I wanted all the user input to rely on facial expressions, such as opening your mouth to fire and closing your eyes to turn ‘invisible,’ allowing the Starfox bullets to pass through Andross without hurting him. So all I had to do was trigger functions based on the blend shape weight values (and control firing with a coroutine so that there weren’t a million projectiles coming out of Andross’s mouth!).
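
A minimal sketch of that rate-limited trigger, with illustrative field names and threshold values (the real project reads the weights inside FaceUpdated):

using System.Collections;
using UnityEngine;

public class ExpressionFireController : MonoBehaviour
{
    public GameObject projectilePrefab; // illustrative: the door-like projectile
    public Transform mouthSpawnPoint;
    public float openThreshold = 30f;        // blend shape weights run 0-100
    public float secondsBetweenShots = 0.4f; // cooldown between projectiles

    bool firing;

    // Call once per frame with the current jawOpen weight (0-100).
    public void OnJawWeight(float jawOpenWeight)
    {
        if (jawOpenWeight > openThreshold && !firing)
            StartCoroutine(FireLoop());
    }

    IEnumerator FireLoop()
    {
        firing = true;
        Instantiate(projectilePrefab, mouthSpawnPoint.position, mouthSpawnPoint.rotation);
        yield return new WaitForSeconds(secondsBetweenShots); // the coroutine gate
        firing = false;
    }
}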

The bulk of the time spent after this was just creating the *game* part of it: randomizing enemy flight paths, firing projectiles, health systems, placing UI elements (all retro 2D assets created by the lovely/talented Kaci Lambeth), game over/win conditions and generally attempting to make it fun. After squashing a litany of bugs and balancing gameplay, Starfox AR was ready… to make people look strange in public!

Download here: https://rigelprime.itch.io/starfox-ar 

Weekly Roundup: November 9, 2017

Do We See the HTC Standalone Next Week?

Road to VR recently reported that HTC, one of the leading VR hardware makers, may finally show off its upcoming standalone VR headset at next week’s Vive Developer Conference. The headset was first announced in May and is planned to include inside-out tracking, which may allow for a scaled-down version of the room-scale capabilities of HTC’s Vive headset, the current top-end consumer VR experience. This would put the new standalone headset’s capabilities ahead of Facebook’s Oculus Go, announced in October, which will not include inside-out tracking. Vive Developer Conference kicks off on November 14th in Beijing.

Google Goes Poly

Adding to the already impressive list of developer resources Google has offered up to the VR/AR community, Poly was announced on November 1st. With Tilt Brush and Blocks, Google created tools that make producing 3D content dead simple, and with Cardboard, Daydream, and ARCore it has created great platforms to view and interact with that content. Poly is the (free) marketplace that connects the creation and viewing of 3D assets, integrating directly with Tilt Brush and Blocks so users can upload their creations, and allowing VR and AR developers to easily grab objects to include in their projects.

iPhone X Ships and Killer Babies Get Glasses

Eager Apple fans started getting their hands on the iPhone X last week, and the facial recognition enabled by the TrueDepth camera is already fueling a flood of new hacks and tricks by developers. Warby Parker quickly updated their Glasses app to recommend frames based on a facial scan, and Kite & Lightning has a proof-of-concept using the camera to animate characters from their upcoming game Bebylon, which has something to do with immortal infants engaging in combat.

Also, Apple AR headsets in 2020? Maybe. But we’ve heard that before.

PLNAR: Replacing your Tape Measure

Working with Apple and SmartPicture to Launch an ARKit App on Day 1

SmartPicture approached Banjo prior to the iOS 11 and ARKit launch event with an interesting challenge: Develop a consumer version of the pro-level SmartPicture 3D room planning application in five (5?!) weeks and have it ready for launch. Apple hosted the combined Banjo/SmartPicture team in Cupertino pre-launch to consult and ensure that we hit no roadblocks.

On September 12th 2017, Banjo and SmartPicture launched PLNAR – and it has since racked up over 200,000 downloads from users measuring their rooms and automatically creating floor plans.

As one of the first developers to deploy a large-scale augmented reality application on iOS, Rigel has shared some of the UX considerations and challenges faced when asking users to interact with a third dimension on a two-dimensional screen.

Check out our case study on PLNAR, or download the app from the App Store.
