Lighthouse and Core Web Vitals have brought concepts such as Largest Contentful Paint and Time to Interactive into many front-end developers’ lingo.
However, these three-letter acronyms are easy to confuse with one another. What’s CLS again? What’s the difference between FID and TTI?
I’d like to propose an alternative language, from the user’s experience:
When Can Users Do X?
User starts loading
User can see something
User can hear something
User can read
User can scroll
User can zoom
User can interact
User thinks they can interact
User knows they can interact (from previous experience)
User actually interacted successfully
User can share
User can close
This potentially goes broader than a particular browser life-cycle event. It starts to ask questions like “what’s the user’s goal in visiting this page?” Is it to read it themselves (say, a recipe), or to share the link with someone else (say, sending a booking page to a partner)?
Perhaps the user tried zooming, but because the page was still loading, the content moved around like a newborn calf on a trampoline. So you’d say that the user couldn’t zoom until layout stopped shifting. I’m thinking less in terms of “Cumulative Layout Shift” and more in terms of “some users like to zoom, so when can they start doing that?”
And it begins to break the assumption that all users read the page by seeing; some read by hearing instead. As great as tools like Lighthouse have been, they sort of assume the user is interacting with the page visually.
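For the curious, here’s a rough sketch of how some of these milestones could be approximated in today’s browsers with the standard PerformanceObserver API. The mapping from performance entries to “user can…” moments is my own loose interpretation, not an official definition:

```ts
// A loose sketch: approximating a few "user can…" milestones with the
// standard PerformanceObserver API. The mapping is an interpretation,
// not an official Web Vitals definition.

function logMilestone(name: string, time: number): void {
  console.log(`${name}: ~${Math.round(time)}ms`);
}

// "User can see something": first paint / first contentful paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    logMilestone(`User can see something (${entry.name})`, entry.startTime);
  }
}).observe({ type: 'paint', buffered: true });

// "User can read": the largest text or image block has rendered (LCP).
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  if (latest) logMilestone('User can read (largest content painted)', latest.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// "User thinks they can interact" vs "user actually interacted successfully":
// the first input, and when the page actually began responding to it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const input = entry as PerformanceEntry & { processingStart: number };
    logMilestone('User tried to interact', input.startTime);
    logMilestone('Page began responding', input.processingStart);
  }
}).observe({ type: 'first-input', buffered: true });

// "User can zoom / scroll without the page moving underneath them":
// watch layout shifts and notice when they settle down.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as PerformanceEntry & { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) {
      logMilestone(`Layout shifted (score ${shift.value.toFixed(3)})`, shift.startTime);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```

None of this replaces asking real users what they came to do, but it gives the “when can they do X?” questions a number to argue about.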
Apple has been iterating for years on a curious set of APIs called ARKit, even though it has been largely ignored by the iPhone & iPad developer community. That persistence hints that more compelling hardware is coming to take full advantage of it. CEO Tim Cook has openly said AR is the ‘next big thing’.
Rumours suggest Apple is developing an augmented reality (AR) glasses product. This product would likely act as an iPhone peripheral, at least initially, similar to how Apple Watch once relied on a host iPhone to provide the main computational grunt. Well-informed supply chain analyst Ming-Chi Kuo has said the AR glasses “will primarily take a display role offloading computing, networking, and positioning to the iPhone.”
Apple has recently introduced M1 Macs powered by Apple Silicon. These Macs are notable because they bring a marked improvement in battery life and performance. But they also finally bring Apple’s developer devices in line with the more capable hardware of its consumer devices.
Apple’s head of software engineering Craig Federighi talked with Ars Technica about the advantages of M1’s unified memory architecture:
Where old-school GPUs would basically operate on the entire frame at once, we operate on tiles that we can move into extremely fast on-chip memory, and then perform a huge sequence of operations with all the different execution units on that tile. It’s incredibly bandwidth-efficient in a way that these discrete GPUs are not.
So how would you develop apps for such a device? Let’s look at how developing software for the iPhone works today.
Developers buy Macs and install Xcode which allows them to write, compile, and deploy iPhone (and iPad, Mac, Watch & TV) apps. To actually experience the user experience of their apps, developers either push the app to their own iPhone and launch it like any other app, or they run it directly on their Mac within the Simulator. They choose which device they want to simulate and then see an interactive representation of the device’s screen running their app within a window on their Mac.
Could you one day choose AR Glasses from the list here?
Currently this works by compiling the iPhone or iPad software for Intel chips, allowing the app to run ‘natively’ on the Mac. Macs are powerful enough to run several of these simulators at once; however, checking graphics-intensive experiences such as 3D or animation sometimes means avoiding the Simulator and trying the app directly on the target device. On the whole the Simulator does a capable enough job of previewing the experience of an app.
(An iPad version of Xcode has been speculated for years, but even with their improved keyboards and fancy trackpads, nothing has been released. The Mac maintains its role as the developer device for Apple’s platforms.)
How will this work for the AR glasses? Will Xcode provide an AR glasses Simulator for Mac? Would that appear as a window on screen with a preview for each eye? Or would you need to push the app to an actual device to preview?
If a simulator were provided, the pre-Apple-Silicon technology of an Intel chip and AMD GPU would not be able to reproduce the capabilities of a unified memory architecture, tiled rendering, and the Neural Engine. It would either run poorly at low frame rates, or some capabilities might not be possible at all. An Intel Mac can simulate software but it cannot simulate hardware. A Mac with related Apple Silicon hardware would allow a much better simulation experience.
Instead of seeing a preview of the AR display on your Mac’s screen, consider if the product could pair directly to your Mac. The developer could see a live preview of their work. The Mac could act as a host device instead of the iPhone, providing the computation, powerful graphics, machine learning, and networking needs of the AR glasses.
With the same set of frameworks that allow iPhone & iPad apps to be installed and run on the Mac already brought over, both software and hardware would be ready to run AR-capable apps designed for iPhone. The Mac is now a superset of the iPhone: what the iPhone can do, the Mac can also do. App makers now have a unified developer architecture.
Perhaps AR-capable apps from the iPhone App Store could even be installed by normal users directly on their Mac. With augmented reality perhaps the glasses will augment the device you currently use, whether that’s the iPhone in your pocket or the Mac on your desk. And allow switching back-and-forth as easily as a pair of AirPods (which would likely be used together with AR glasses).
There’s one last picture I want to leave you with. Swift Playgrounds works by showing a live preview of interactive UI alongside editable code. Change the code and your app immediately updates. The Simulator has been integrated into the app developer experience.
Now imagine Swift Playgrounds for AR — as I edit my code do my connected AR glasses instantly update?
I want to cover what the user experience of an AR Glasses product from Apple could look like, and how it might integrate with today’s products. First, let’s survey Apple’s current devices and their technologies for input & output:
iPhone input methods
Multitouch: use your fingers naturally for UI interactions, typing text, drawing, scrolling
Voice: request commands from Siri such as changing volume or app-switching, dictating instead of typing
iPad input methods
Multitouch: use your fingers naturally for UI interactions, typing text, scrolling, drawing
External Keyboard: faster and more precise typing than multitouch
Pencil: finer control than multitouch, especially for drawing
Trackpad: finer control than multitouch, especially for UI interactions
Voice: request commands from Siri such as changing volume or app-switching, dictating text
Mac input methods
Keyboard: dedicated for typing text, also running commands
Trackpad / Mouse: move and click pointer for UI interactions, scrolling, drawing
Function keys: change device preferences such as volume, screen/keyboard brightness; app-switching
Touchbar: change volume, screen/keyboard brightness; enhances current app with quick controls
Voice: request commands from Siri such as changing volume or app-switching, dictating text
Apple Watch input methods
Touch: use your fingers for UI interactions, typing text (awkwardly), scrolling
Digital Crown: change volume, scroll, navigate back
Voice: request commands from Siri such as changing volume or app-switching, dictating text
AirPods input methods
Voice: request commands from Siri such as changing volume, dictating text
Tap: once to play/pause, double to skip forward, triple to skip backward
So what would a rumoured Apple Glasses product bring?
Apple’s Design Principles
Deference to Content
The most sparse approach might be to rely on voice for all input. Siri would become a central part of the experience, and be the primary way to switch apps, change volume, and dictate text. Siri can currently be activated from multiple devices, whether personal hand-held devices like iPhone or shared devices like HomePod. So it makes sense that the glasses would augment this experience, providing visual feedback to accompany the current audible feedback.
Contrast Siri’s visual behaviour between iPadOS 13 and 14:
Siri in iPadOS 13 takes over the entire screen
Siri in iPadOS 14 layers discreetly over your screen with a compact design
This provides a glimpse of the philosophy of the Apple Glasses. Instead of completely taking over what you currently see, Siri would augment what you are currently doing with a discreet, compact design.
This also relates to the Defer to Content design principle that has been present since iOS 7, the opening statement of Apple’s current design leadership. So we can imagine a similar experience with the Glasses, but where the content is everything the user sees, whether that’s digital or physical.
Content from a traditional app could be enhanced via augmentation. A photo or video in a social media feed might take over the user’s view, similar to going into full screen. Text might automatically scroll or be spoken aloud to the user. Content might take over briefly, and then be easily dismissed to allow the user to get back to their life.
Widgets such as weather or notifications such as received messages might be brought in from the outside to the centre. I can imagine a priority system running from the viewer’s central vision out to the extremes of their field of view. Content could be pinned to the periphery and glanced at, while receiving periodic updates in the background.
If worn together with a set of AirPods, an even more immersive experience would be provided, with the AirPods’ tap input for playing and skipping. The active noise cancellation mode would probably pair well with a similar mode for the glasses, blocking the outside world for maximum immersion. Its counterpart transparency mode would allow the user to reduce the audible and visual augmentation to a minimum.
Clarity
So with a Glasses product, what is the content? It’s the world around you. But what if the world sometimes is an iPhone or Mac you use regularly through your day? Do the Glasses visually augment that experience?
With AirPods you can hop from an iPhone to a Mac to an iPad, and automatically switch the device that is paired. Wouldn’t it make sense for the AirPods and Glasses to perform as synchronised swimmers and pair automatically together to the same device that someone decides to use?
Can the Glasses recognise your device as being yours and know its precise location in the Glasses’ field of view? That sounds like a job for the U1 chip introduced with iPhone 11, which, as 9to5Mac describes it, “provides precise location and spatial awareness, so a U1-equipped device can detect its exact position relative to other devices in the same room.”
Perhaps instead of tapping your iPhone screen to wake it, you can simply rest your eyes on it for a moment and it will wake up. The eyes could be tracked by the Glasses and become an input device of their own. If precise enough they could move the cursor on an iPad or Mac. The cursor capabilities of iPadOS 13.4 brought a new design language with UI elements growing and moving as they were focused on, and subtly magnetised to the cursor as it floated across the screen.
The cursor becomes the object of focus.
Similar affordances could allow a Glasses user’s eyes to replace the cursor, with the realtime feedback of movement and size changes enough to let the user know exactly what is in focus. The Mac might not need touch if the eyes could offer control.
In the physical world, a similar effect to Portrait mode from iPhone could allow objects in the world to also be focused on. The targeted object would remain sharp, and everything around it would become blurred, literally putting it into focus.
AirTags could enhance physical objects by providing additional information to their neighbours. Instead of barcodes or QR codes, the product itself could advertise its attributes and make itself available for purchase via Apple Pay.
Use Depth to Communicate
If the Glasses not only show you the world around you but also see it, then your hands gesturing signals in the air could also be a method of input. Simple gestures could play or pause, skip ahead or back, or change the volume. They could also be used to scroll content or interact with UI seen through the Glasses.
These gestures would close the loop between input and output. The iPad’s multitouch display works so well because of direct manipulation: your fingers physically touch the UI your eyes see. As your fingers interact and move, the visuals move with it. The two systems of touch input and flat-panel-display output become one to the user. Hand gestures would allow direct manipulation of the content seen through the Glasses.
Speculated Apple ‘Glasses’ input methods
Voice via AirPods or nearby device: request commands from Siri such as changing volume, dictating text
Eyes: interact with devices that have a cursor, focus on elements whether digital or physical
Air Gestures: use your hands for UI interactions, scrolling, changing the volume, playing, pausing, skipping
U1: recognise nearby Apple devices and interact with them
Plus whatever device you are currently using (if any)
So the Glasses could offer a range of novel input methods, from a user’s eyes to their hands, or they could simply rely on the ubiquitous voice-driven world that most Apple devices now provide. The U1 chip seems to hint at an interaction between the Glasses and a hand-held device, perhaps as modest as simply recognising it, or perhaps augmenting its input and output to allow a new way of interacting with iPhones, iPads, and Macs. The Glasses would accompany what the user already sees and interacts with every day, enhancing it visually but deferring to the outside world when needed. They could offer an immersive experience for content such as video and games, or future formats that Apple and other AR device makers hope will become popular.
Sometimes it feels like we put the developer experience more front of mind than the user experience.
Or: DX > UX.
Why is that? Does web development today feel like such a beast that I must wield a powerful weapon with which to conquer it?
We need the latest browser features to build web apps. If they are not available in every user’s browser, we will write the code we prefer and transpile or package it up into a common lingo.
The team really wants to use this new language or framework, so we should adopt it and be cutting edge! We’ll be more attractive in the industry so hiring will be easier.
It’s quicker for us to choose the same tech everyone else is using, because they have already solved lots of the tricky problems and there’s a huge ecosystem of ready-to-go components and plugins. We’ll move so fast!
While I think these considerations should be discussed, where is the user here?
Will it be easier for the user? Will they move quicker than they had before? Will they be provided something ready-to-go to solve the problem they have? Will they really want to use what you’ve made? What did they need anyway?
If you have failed the user here, or if the developer experience is more satisfying to the developers than the user experience is to the users, what has been gained? If it’s fast for the team to see changes live but slow for the user to load, that’s a tax the user is paying. Was it worth it? Why should they pay?
As developers our natural sense is to pick up on what makes a compelling and fresh developer experience that will lead us to learn interesting new concepts and be involved in the currents of the industry. But does the user care? Are you creating a large tax for them to swallow? Will your team get swept away from the user? What value does the user get for paying that tax?
Keep the user experience front of mind. Talk with the user and measure so that you know that your UX is at least as compelling as your DX. Keep the tax from your DX choice low.
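As one small example of “measure”, here’s a sketch of reporting a single field metric from real users back to your own analytics endpoint. The /analytics/vitals URL and payload shape are hypothetical, and LCP is just one stand-in for “user can read”:

```ts
// A minimal sketch of field measurement: report one real-user metric (LCP)
// so UX can be weighed against DX claims. The '/analytics/vitals' endpoint
// and payload shape are hypothetical.

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  if (!latest) return;

  const payload = JSON.stringify({
    metric: 'largest-contentful-paint',
    value: Math.round(latest.startTime),
    page: location.pathname,
  });

  // sendBeacon queues the report without blocking the page, so measuring the
  // user's experience doesn't itself add to the tax they pay.
  // (A production setup would wait until the page is hidden before reporting
  // the final value rather than beaconing each candidate.)
  navigator.sendBeacon('/analytics/vitals', payload);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```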