  • 00:00

    [MUSIC PLAYING]

  • 00:06

    CHRIS KELLEY: Thank you so much for joining us.

  • 00:09

    My name is Chris.

  • 00:10

    I'm a designer and prototyper working

  • 00:12

    on immersive prototyping at Google,

  • 00:15

    and I'm joined by Ellie and Luca.

  • 00:17

    And today, we're going to talk about exploring AR interaction.

  • 00:19

    It's really awesome to be here.

  • 00:23

    We explore immersive computing through rapid prototyping

  • 00:26

    of AR and VR experiments.

  • 00:29

    Often, that's focused on use case exploration or app ideas.

  • 00:34

    We work fast, which means we fail fast,

  • 00:37

    but that means that we learn fast.

  • 00:41

    We spend a week or two on each prototyping sprint,

  • 00:43

    and at the end of the sprint, we end

  • 00:45

    with a functional prototype starting

  • 00:47

    from a tightly scoped question.

  • 00:49

    And then we put that prototype in people's hands

  • 00:51

    and we see what we can learn.

  • 00:56

    So this talk is going to be about takeaways we have

  • 00:58

    from those AR explorations.

  • 01:01

    But first, I want to set the table a little bit

  • 01:03

    and talk about what we mean when we say augmented reality.

  • 01:08

    When a lot of people think about AR,

  • 01:10

    the first thing they think about is bringing virtual objects

  • 01:13

    to users in the world.

  • 01:14

    And it is that.

  • 01:15

    That's part of it.

  • 01:16

    We call this the "out" of AR.

  • 01:20

    But AR also means more than that.

  • 01:22

    It means being able to understand the world visually

  • 01:25

    to bring information to users, and we call this understanding

  • 01:28

    the "in" of AR.

  • 01:30

    Many of the tools and techniques that

  • 01:32

    were created for computer vision and machine learning perfectly

  • 01:35

    complement tools like ARCore, which is Google's AR

  • 01:38

    development platform.

  • 01:42

    So when we explore AR, we build experiences

  • 01:45

    that include one of these approaches or both.

  • 01:49

    So this talk is going to be about three magic powers

  • 01:52

    that we've found for AR.

  • 01:55

    We think that these magic powers can help you build better AR

  • 01:58

    experiences for your users.

  • 02:00

    So we're going to talk about some prototypes that we've

  • 02:02

    built and share our learnings with you

  • 02:04

    in each of these three magic power areas during the talk.

  • 02:10

    First, I'll talk to you about context-driven superpowers.

  • 02:13

    That's about how we can combine visual and physical

  • 02:16

    understanding of the world to make magical AR experiences.

  • 02:21

    Then Ellie will talk to you about shared augmentations.

  • 02:25

    And this is really all about the different ways

  • 02:27

    that we can connect people together in AR,

  • 02:29

    and how we can empower them just by putting them together.

  • 02:37

    And then Luca will cover expressive inputs.

  • 02:40

    This is about how AR can help unlock

  • 02:42

    authentic and natural understanding for our users.

  • 02:48

    So let's start with context-driven superpowers.

  • 02:52

    What this really means is using AR technologies

  • 02:54

    to deeply understand the context of a device,

  • 02:57

    and then build experiences that directly leverage that context.

  • 03:02

    And there are two parts to an AR context.

  • 03:04

    One is visual understanding, and the other

  • 03:06

    is physical understanding.

  • 03:09

    ARCore gives your phone

  • 03:11

    the ability to understand and sense

  • 03:12

    its environment physically.

  • 03:15

    But through computer vision and machine learning,

  • 03:17

    we can make sense of the world visually.

  • 03:19

    And by combining these results, we

  • 03:21

    get an authentic understanding of the scene,

  • 03:23

    which is a natural building block of magical AR.

  • 03:30

    So let's start with visual understanding.

  • 03:32

    The prototyping community has done some awesome explorations

  • 03:35

    here, and we've done a few of our own

  • 03:36

    that we're excited to share.

  • 03:43

    To start, we wondered if we could

  • 03:45

    trigger custom experiences from visual signals in the world.

  • 03:50

    Traditional apps today leverage all kinds of device

  • 03:52

    signals to trigger experiences:

  • 03:54

    GPS, the IMU, et cetera.

  • 03:57

    So could we use visual input as a signal as well?

  • 04:01

    We built a really basic implementation of this concept.

  • 04:03

    This uses ARCore and the Google Cloud Vision

  • 04:05

    API to detect any kind of snowman

  • 04:08

    in the scene, which triggers a particle system that

  • 04:10

    starts to snow.

  • 04:12

    So through visual understanding, we

  • 04:14

    were able to tailor an experience to specific cues

  • 04:17

    in the environment for users.

  • 04:19

    This enables adaptable and context-aware applications.
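
    As a rough sketch of that trigger logic: the check below runs over label annotations that would already have been fetched from the Cloud Vision API. The `should_trigger_snow` name, the `labels` structure, and the confidence threshold are all illustrative assumptions, not the prototype's actual code.

```python
# Hypothetical trigger check: scan Cloud Vision label annotations for a
# "snowman" label and decide whether to start the snow particle system.
# The `labels` list below is a stand-in for a real API response.

TRIGGER_LABEL = "snowman"

def should_trigger_snow(labels, min_confidence=0.7):
    """Return True if any label matches the trigger word confidently enough."""
    return any(
        label["description"].lower() == TRIGGER_LABEL
        and label["score"] >= min_confidence
        for label in labels
    )

# One frame's worth of made-up label results:
labels = [
    {"description": "Snowman", "score": 0.92},
    {"description": "Snow", "score": 0.88},
]

if should_trigger_snow(labels):
    print("start snow particle system")
```

    In the real prototype the detection result would gate a particle emitter in the rendering engine; the same gating pattern extends to any visual signal you can classify.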

  • 04:23

    So even though this example is a simple one,

  • 04:27

    the concept can be extended so much further.

  • 04:29

    For example, yesterday we announced the Augmented Images

  • 04:32

    API for ARCore.

  • 04:34

    So if you use this, you can make something

  • 04:36

    like an experience that reacts relative to device movement

  • 04:41

    around an image in the scene, or even

  • 04:43

    from a known distance to an object in the world.

  • 04:46

    If you think this concept is interesting,

  • 04:48

    I highly recommend checking out the AR/VR demo tent.

  • 04:51

    They have some amazing Augmented Images demos there.

  • 04:58

    The next thing we wanted to know is

  • 05:00

    if we could bridge the gap between digital and physical,

  • 05:03

    and, for example, bring some of the most delightful features

  • 05:06

    of e-readers to physical books.

  • 05:09

    The digital age has brought all kinds of improvements

  • 05:11

    to some traditional human behaviors,

  • 05:14

    and e-readers have brought lots of cool new things to reading.

  • 05:17

    But if you're like me, sometimes you just

  • 05:19

    miss the tactility of holding a great book in your hands.

  • 05:24

    So we wanted to know if we could bridge that gap.

  • 05:26

    In this prototype, users highlight a passage or word

  • 05:29

    with their finger and they instantly

  • 05:31

    get back a definition.

  • 05:33

    This is a great example of a short-form, focused interaction

  • 05:37

    that required no setup for users.

  • 05:39

    It was an easy win only made possible

  • 05:41

    by visual understanding.

  • 05:43

    But as soon as we tried this prototype,

  • 05:45

    there were two drawbacks that we noticed,

  • 05:47

    and they became immediately apparent when we used it.

  • 05:50

    The first is that it was really difficult to aim your finger

  • 05:53

    at a small moving target on a phone,

  • 05:55

    and maybe the page is moving as well,

  • 05:57

    and you're trying to target this little word.

  • 05:59

    That was really hard.

  • 06:00

    And the second was that when you're highlighting a word,

  • 06:03

    your finger is blocking the exact thing

  • 06:04

    that you're trying to see.

  • 06:07

    Now, these are easily solvable with a follow-up UX iteration,

  • 06:10

    but they illustrate a larger lesson.

  • 06:13

    And that's that with any kind of immersive computing,

  • 06:15

    you really have to try it before you can judge it.

  • 06:20

    An interaction might sound great when you talk about it

  • 06:22

    and it might even look good in a visual mock,

  • 06:24

    but until you have it in your hand

  • 06:26

    and you can feel it and try it, you're

  • 06:28

    not going to know if it works or not.

  • 06:30

    You really have to put it in a prototype

  • 06:32

    so you can create your own facts.

  • 06:38

    Another thing we think about a lot

  • 06:40

    is, can we help people learn more effectively?

  • 06:42

    Could we use AR to make learning better?

  • 06:45

    There are many styles of learning, and if you

  • 06:47

    combine these styles of learning,

  • 06:49

    it often results in faster and higher-quality learning.

  • 06:52

    In this prototype, we combined visual, aural, verbal,

  • 06:56

    and kinesthetic learning to teach people how

  • 06:58

    to make the perfect espresso.

  • 07:01

    The videos explain--

  • 07:02

    I'm sorry.

  • 07:03

    We placed videos around the espresso machine

  • 07:06

    in the physical locations where that step occurs.

  • 07:08

    So if you were learning how to use the grinder,

  • 07:10

    the video for the grinder is right next to it.

  • 07:14

    Now, for users to trigger that video,

  • 07:16

    they move their phone to the area

  • 07:17

    and then they can watch the lesson.
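
    That trigger-by-proximity idea reduces to a distance check between the phone's tracked position and each video's anchor. The lesson names, anchor positions, and trigger radius below are invented for illustration; they are not from the prototype.

```python
import math

# Hypothetical anchors: each lesson video sits at a 3D position (in meters)
# next to the part of the espresso machine it explains.
LESSONS = {
    "grinder": (0.0, 1.0, 0.5),
    "steam wand": (0.4, 1.0, 0.2),
}

def nearest_lesson(phone_pos, trigger_radius=0.6):
    """Return the lesson whose anchor is closest to the phone, if any is
    within the trigger radius; otherwise None."""
    best, best_dist = None, trigger_radius
    for name, anchor in LESSONS.items():
        dist = math.dist(phone_pos, anchor)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

print(nearest_lesson((0.1, 1.1, 0.4)))  # → grinder
```

    Keeping the radius generous and the check continuous is what makes the interaction feel "walk up and it plays" rather than "aim and tap."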

  • 07:19

    That added component of physical proximity

  • 07:22

    between the video and the actual device

  • 07:25

    made a huge difference in general understanding.

  • 07:27

    In our studies, users who had never used an espresso machine

  • 07:31

    before easily made an espresso after using this prototype.

  • 07:35

    So for some kinds of learning, this

  • 07:37

    can be really beneficial for users.

  • 07:39

    Now, unfortunately for our prototype,

  • 07:41

    one thing that we learned here was

  • 07:43

    that it's actually really hard to hold your phone

  • 07:45

    and make an espresso at the same time.

  • 07:48

    So you need to be really mindful of the fact

  • 07:51

    that your users might be splitting

  • 07:52

    their physical resources between the phone and the world.

  • 07:56

    And so as it applies to your use case,

  • 07:57

    try building experiences that are really

  • 08:00

    snackable and hands-free.

  • 08:06

    Speaking of combining learning and superpowers together,

  • 08:09

    we wondered if AR could help us learn

  • 08:11

    from hidden information that's layered in the world

  • 08:13

    all around us.

  • 08:15

    This is a prototype that we built

  • 08:17

    that's an immersive language learning app.

  • 08:20

    We showed translations roughly next to objects of interest

  • 08:23

    and positioned these labels by taking a point cloud

  • 08:26

    sample from around the object and putting the label sort

  • 08:29

    of in the middle of the points.
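
    That placement heuristic — sample the point cloud around the object and drop the label at the center of the sample — is easy to sketch. `label_position` is a hypothetical name and the sample points are made up:

```python
# Place a label at the centroid of feature points sampled around an object.

def label_position(points):
    """Average a list of (x, y, z) feature points into one anchor position."""
    n = len(points)
    xs, ys, zs = zip(*points)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# A few made-up point cloud samples from around an object on a table:
cloud_sample = [(0.9, 0.2, -1.1), (1.1, 0.3, -0.9), (1.0, 0.1, -1.0)]
print(label_position(cloud_sample))  # roughly the middle of the object
```

    The centroid is deliberately "roughly next to" rather than exactly on the object, which was good enough here; a tighter fit would need segmentation rather than sparse feature points.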

  • 08:31

    Users found this kind of immersive learning really fun,

  • 08:34

    and we saw users freely exploring

  • 08:36

    the world looking for other things to learn about.

  • 08:39

    So we found that if you give people

  • 08:41

    the freedom to roam and tools that are simple and flexible,

  • 08:44

    the experiences that you build for them

  • 08:45

    can create immense value.

  • 08:51

    Now we have physical understanding.

  • 08:53

    This is AR's ability to extract and infer

  • 08:56

    information and meaning from the world around you.

  • 08:59

    When a device knows exactly where it is, not only in space,

  • 09:02

    but also relative to other devices,

  • 09:04

    we can start to do things that really

  • 09:06

    feel like you have superpowers.

  • 09:10

    For example, we can start to make

  • 09:12

    interactions that are extremely physical, natural,

  • 09:14

    and delightful.

  • 09:16

    Humans have been physically interacting

  • 09:17

    with each other for a really long time,

  • 09:19

    but digital life has abstracted some of those interactions.

  • 09:22

    We wondered if we could swing the pendulum

  • 09:24

    back the other direction a little bit using AR.

  • 09:28

    So in this prototype, much like a carnival milk bottle game,

  • 09:32

    you fling a baseball out of the top of your phone

  • 09:34

    and it hits milk bottles that are shown on other devices.

  • 09:38

    You just point the ball where you want it to go, and it goes.

  • 09:42

    We did this by putting multiple devices

  • 09:44

    in a shared coordinate system, which

  • 09:46

    you could do using the new Google Cloud Anchors API

  • 09:49

    that we announced for ARCore yesterday.
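
    A shared coordinate system boils down to this: each device knows the pose of one common anchor in its own local frame, so any point can be re-expressed through the anchor. The sketch below is an assumption-heavy simplification — poses are just a position plus a yaw angle, and the function names are invented; Cloud Anchors would supply the real poses.

```python
import math

def to_anchor_frame(point, anchor_pos, anchor_yaw):
    """Express a device-local (x, y, z) point relative to the shared anchor."""
    dx, dz = point[0] - anchor_pos[0], point[2] - anchor_pos[2]
    c, s = math.cos(-anchor_yaw), math.sin(-anchor_yaw)
    return (c * dx - s * dz, point[1] - anchor_pos[1], s * dx + c * dz)

def from_anchor_frame(point, anchor_pos, anchor_yaw):
    """Inverse: express an anchor-relative point in a device's local frame."""
    c, s = math.cos(anchor_yaw), math.sin(anchor_yaw)
    x, z = c * point[0] - s * point[2], s * point[0] + c * point[2]
    return (x + anchor_pos[0], point[1] + anchor_pos[1], z + anchor_pos[2])

# A ball thrown on device A, re-expressed for device B (poses are made up):
ball_on_a = (1.0, 0.0, -2.0)
shared = to_anchor_frame(ball_on_a, anchor_pos=(0.5, 0.0, -1.0), anchor_yaw=0.0)
ball_on_b = from_anchor_frame(shared, anchor_pos=(-0.5, 0.0, 0.0),
                              anchor_yaw=math.pi)
```

    Real device poses are full 6-DoF transforms, so production code would use 4×4 matrices or quaternions, but the round trip through the anchor is the same idea.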

  • 09:52

    And one thing you'll notice here is

  • 09:54

    that we aren't even showing users the camera pass-through.

  • 09:57

    Now, we did that deliberately because we really

  • 09:59

    wanted to stretch and see how far we

  • 10:00

    could take this concept of physical interaction.

  • 10:04

    And one thing we learned was that once people learned

  • 10:06

    to do it, they found it really natural

  • 10:08

    and actually had a lot of fun with it.

  • 10:10

    But almost every user that tried it had to be not only

  • 10:14

    told how to do it, but shown how to do it.

  • 10:17

    People actually had to flip this mental switch

  • 10:19

    of the expectations they have for how a 2D smartphone

  • 10:22

    interaction works.

  • 10:24

    So you really need to be mindful of the context that people

  • 10:26

    are bringing in and the mental models they have

  • 10:28

    for 2D smartphone interactions.

  • 10:35

    We also wanted to know if we could help someone visualize

  • 10:38

    the future in a way that would let them make better decisions.

  • 10:43

    Humans pay attention to the things that matter to us.

  • 10:45

    And in a literal sense, the imagery

  • 10:47

    that appears in our peripheral vision

  • 10:48

    takes a lower cognitive priority than the things

  • 10:50

    we're focused on.

  • 10:52

    Would smartphone AR be any different?

  • 10:55

    In this experiment, we overlaid the architectural mesh

  • 10:59

    of the homeowner's remodel on top of the active construction

  • 11:01

    project.

  • 11:03

    The homeowner could visualize in context

  • 11:05

    what the changes to their home were going to look like.

  • 11:09

    Now, at the time that this prototype was created,

  • 11:11

    we had to do manual alignment of this model

  • 11:14

    on top of the house.

  • 11:15

    You could do it more easily today.

  • 11:16

    If I rebuilt it, I would use the Augmented Images API

  • 11:19

    that we announced yesterday.

  • 11:20

    It would be much easier to put a fixed

  • 11:22

    image in a location, the house, and sync them together.

  • 11:25

    But even with that initial friction in the UX,

  • 11:27

    the homeowner got tremendous value out of this.

  • 11:30

    In fact, they went back to their architect after seeing this

  • 11:34

    and changed the design of their new home

  • 11:35

    because they found out that they weren't going

  • 11:37

    to have enough space in the upstairs bathroom-- something

  • 11:39

    they hadn't noticed in the plans before.

  • 11:42

    So the lesson is that if you provide people high-quality,

  • 11:45

    personally relevant content, you can create

  • 11:51

    experiences that people will find really valuable

  • 11:53

    and attention-grabbing.

  • 11:58

    But when does modifying the real environment

  • 12:00

    start to break down?

  • 12:01

    You may be familiar with the uncanny valley.

  • 12:04

    It's a concept that suggests that when

  • 12:05

    things that are really familiar to humans

  • 12:08

    are almost right but just a little bit off,

  • 12:10

    it makes us feel uneasy.

  • 12:12

    Subtle manipulations of the real environment in AR

  • 12:14

    can sometimes feel similar.

  • 12:16

    It can be difficult to get right.

  • 12:19

    In this specific example, we tried

  • 12:20

    removing things from the world.

  • 12:22

    We created this AR invisibility cloak for the plant.

  • 12:26

    What we did was we created a point cloud around the object,

  • 12:29

    attached little cubes to the point cloud,

  • 12:31

    applied a material to those points,

  • 12:33

    and extracted the texture from the surrounding environment.
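
    The texture-extraction step can be approximated in 2D: sample the camera image just outside the object's silhouette and reuse those colors to paint over it. `cloak_color` and the synthetic frame below are illustrative stand-ins, not the prototype's actual shader.

```python
# Estimate a fill color for a hidden object by averaging the pixels that
# ring its bounding box; `frame` is a synthetic RGB image (list of rows).

def cloak_color(frame, box):
    """Average the colors of pixels bordering a (x0, y0, x1, y1) box."""
    x0, y0, x1, y1 = box
    ring = [
        px
        for y, row in enumerate(frame)
        for x, px in enumerate(row)
        if (x0 - 1 <= x <= x1 + 1 and y0 - 1 <= y <= y1 + 1)  # near the box...
        and not (x0 <= x <= x1 and y0 <= y <= y1)             # ...but outside it
    ]
    return tuple(sum(channel) / len(ring) for channel in zip(*ring))

# A 4x4 gray wall with a green "plant" in the middle two-by-two pixels:
frame = [[(100, 100, 100)] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = (0, 255, 0)

print(cloak_color(frame, (1, 1, 2, 2)))  # → (100.0, 100.0, 100.0)
```

    A single averaged color only convinces on uniform backgrounds; anything with gradients or patterns needs per-pixel inpainting instead.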

  • 12:36

    That worked pretty well in uniform environments,

  • 12:38

    but unfortunately, the world doesn't have too many of those.

  • 12:41

    It's made up of dynamic lighting and subtle patterns,

  • 12:44

    so this always ended up looking a little bit weird.

  • 12:47

    Remember to be thoughtful about the way that you add

  • 12:49

    or remove things from the environment.

  • 12:51

    People are really perceptive, and so you

  • 12:53

    need to strive to build experiences

  • 12:55

    that align with their expectations,

  • 12:56

    or at the very least, don't defy them.

  • 13:01

    But is physical understanding always critical?

  • 13:04

    All the points in this section have their place,

  • 13:05

    but, ultimately, you have to be guided by your critical user

  • 13:08

    journeys.

  • 13:09

    In this example, we wanted to build

  • 13:11

    a viewer for this amazing 3D model by Damon [INAUDIBLE].

  • 13:15

    It was important that people could see the model in 3D

  • 13:17

    and move around to discover the object.

  • 13:19

    A challenge, though, was that the camera feed

  • 13:21

    was creating a lot of visual noise and distraction.

  • 13:24

    People were having a hard time appreciating

  • 13:26

    the nuances of the model.

  • 13:28

    We adopted concepts from filmmaking and guided users

  • 13:31

    by using focus and depth of field,

  • 13:34

    all of which were controlled by the user's motion.

  • 13:36

    This resulted in people feeling encouraged to explore,
    This resulted in people feeling encouraged to explore,

  • 13:39

    and they really stopped getting distracted
    and they really stopped getting distracted

  • 13:41

    by the physical environment.
    by the physical environment.
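One plausible way to drive depth of field from the user's motion, as described above, is to map the device's distance to the model onto a background blur amount for the camera feed. The `dof_params` function and all of its thresholds are invented for illustration; the talk doesn't specify the actual mapping used.

```python
import math

def dof_params(device_pos, model_pos, near=0.5, far=3.0, max_blur=10.0):
    """Map the user's motion (distance to the model) to depth-of-field settings.

    As the device moves from `far` to `near` meters away, the background
    blur ramps from 0 up to `max_blur`, cinematically isolating the model
    from the noisy camera feed. Names and thresholds are invented.
    """
    d = math.dist(device_pos, model_pos)
    clamped = min(max(d, near), far)
    t = (far - clamped) / (far - near)  # 0.0 far away, 1.0 up close
    return {"focus_distance": d, "background_blur": t * max_blur}

print(dof_params((0, 0, 0), (0, 0, 2.0)))  # mid-range: partial background blur
print(dof_params((0, 0, 0), (0, 0, 0.5)))  # up close: maximum background blur
```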

  • 13:44

    So humans are already great at so many things. AR really allows us to leverage those existing capabilities to make interactions feel invisible. If we leverage visual and physical understanding together, we can build experiences that really give people superpowers. With that, Ellie is going to talk to you about special opportunities we have in shared augmentations.

  • 14:05

    ELLIE NATTINGER: Thanks, Chris. So I'm Ellie Nattinger. I'm a software engineer and prototyper on Google's VR and AR team. Chris has talked about the kinds of experiences you start to have when your devices can understand the world around you, and I'm going to talk about what happens when you can share those experiences with the people around you.

  • 14:30

    We're interested not only in adding AR augmentations to your own reality, but also in sharing those augmentations. If you listened to the developer keynote yesterday, you know that shared AR experiences are a really big topic for us these days.

  • 14:48

    For one thing, a shared reality lets people be immersed in the same experience. Think about a movie theater. Why do movie theaters exist? Everybody's watching a movie that they could probably watch at home on their television or their computer, by themselves, much more comfortably, without having to go anywhere. But it feels qualitatively different to be in a space with other people sharing that experience.

  • 15:17

    And beyond those kinds of shared passive experiences, having a shared reality lets you collaborate, lets you learn, lets you build and play together. We think you should be able to share your augmented realities with your friends, and your families, and your colleagues, so we've done a variety of explorations into how to build those kinds of shared realities in AR.

  • 15:40

    First, there's kind of a technical question: how do you get people aligned in a shared AR space? There's a number of ways we've tried. If you don't need a lot of accuracy, you could just start your apps with all the devices in approximately the same location. You could use markers or augmented images, so multiple users can all point their devices at one picture and get a common point of reference: here's the zero, zero, zero of my virtual world. And you can even use the new ARCore Cloud Anchors API that we just announced yesterday to localize multiple devices against the visual features of a particular space.
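Whichever method establishes the common reference (co-location, a marker or augmented image, or a Cloud Anchor), the underlying idea is the same: each device learns the anchor's pose in its own world frame, and shared content is expressed relative to that anchor. Here is a minimal 2D sketch of that transform; the `anchor_to_world` helper and its poses are invented for illustration.

```python
import math

def anchor_to_world(anchor_pos, anchor_yaw, local_offset):
    """Convert a point expressed relative to a shared anchor (a marker,
    augmented image, or Cloud Anchor) into one device's world frame.

    2D for brevity: `anchor_pos` is the anchor's (x, z) position in this
    device's world, `anchor_yaw` its rotation in radians, `local_offset`
    the content's (x, z) relative to the anchor. Each device runs this
    with its own anchor pose and renders the content at the same
    physical spot.
    """
    c, s = math.cos(anchor_yaw), math.sin(anchor_yaw)
    lx, lz = local_offset
    return (anchor_pos[0] + c * lx - s * lz,
            anchor_pos[1] + s * lx + c * lz)

# Two devices observed the same marker at different poses in their own
# world frames; the same anchor-relative offset maps to different world
# coordinates on each device, but the same physical point.
device_a = anchor_to_world((1.0, 2.0), 0.0, (0.5, 0.0))
device_b = anchor_to_world((-3.0, 0.0), math.pi / 2, (0.5, 0.0))
print(device_a, device_b)
```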

  • 16:23

    In addition to the technical considerations, we've found three axes of experience that we think are really useful to consider when you're designing these kinds of shared augmented experiences. The first of those is co-located versus remote: are your users in the same physical space or different physical spaces? The second is how much precision is required, or is it optional? Do you have to have everybody see the virtual bunny at exactly the same point in the world, or do you have a little bit of flexibility about that? And the third is whether your experience is synchronous or asynchronous. Is everybody participating in this augmented experience at exactly the same time, or at slightly different times? And we see these not necessarily as binary axes, but more as a continuum that you can consider when you're designing these multi-person AR experiences.

  • 17:22

    So let's talk about some prototypes and apps that fall on different points of the spectrum and the lessons we've learned from them. To start with, we've found that when you've got a group that's interacting with the same content in the same space, you really need shared, precise spatial registration. For example, let's say you're in a classroom. Imagine if a group of students who are doing a unit on the solar system could all look at and walk around the globe, or an asteroid field, or look at the sun. In Expeditions AR, one of Google's initial AR experiences, all the students can point their devices at a marker, they calibrate themselves against a shared location, they see the object in the same place, and then what this allows is for a teacher to be able to point out particular parts of the object. Oh, if you all come over and look at this side of the sun, you see a cut-out into its core. Over here on the Earth, you can see a hurricane. Everybody starts to get a spatial understanding of the parts of the object and where they are in the world.

  • 18:29

    So when does it matter that your shared space has a lot of precision? When you have multiple people who are all in the same physical space interacting with or looking at the exact same augmented objects at the same time.

  • 18:45

    We were also curious: how much can we take advantage of people's existing spatial awareness when you're working in high-precision shared spaces? We experimented with this in a multi-person construction application, where multiple people are all building onto a shared AR object in the same space, adding blocks to each other's, everybody coordinating. You want to be able to tell what part of the object someone is working on, and have your physical movement support that collaboration. Like, if Chris is over here and he's placing some green blocks in the real world, I'm not going to step in front of him and start putting yellow blocks there instead. We've got a natural sense of how to collaborate, how to arrange, how to coordinate ourselves in space. People already have that sense, so we can keep it in shared AR if we've got our virtual objects lined up precisely enough.

  • 19:44

    We also found it helpful to notice that because you can see both the digital object and the other people through the pass-through camera, you're able to get a pretty good sense of what people were looking at as well as what they were interacting with.

  • 20:01

    We've also wondered what it would feel like to have a shared AR experience for multiple people in the same space, but who aren't necessarily interacting with the same things. So think of this more like an AR LAN party, where we're all in the same space, or maybe even different spaces, we're seeing connected things, and we're having a shared experience.

  • 20:24

    This prototype is a competitive quiz guessing game where you look at the map, figure out where on the globe you think is represented, stick your pushpin in, and get points depending on how close you are. We've got the state synced, so we know who's winning. But the location of the globe doesn't actually need to be synchronized. And maybe you don't want it to be synchronized, because I don't want anybody to get a clue based on where I'm sticking my pushpin into the globe. It's fun to be together, even when we're not looking at exactly the same AR things.
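The scoring mechanic above can be sketched with a great-circle (haversine) distance between the guess and the answer. The `guess_score` function and its point curve are invented for illustration; the point is that only each player's score needs syncing, not their globe's position.

```python
import math

def guess_score(guess, answer, max_points=100, radius_km=6371.0):
    """Score a pushpin guess by great-circle distance to the answer.

    `guess` and `answer` are (lat, lon) in degrees. The scoring curve
    (full points within ~100 km, fading to zero by ~5000 km) is made up.
    """
    lat1, lon1, lat2, lon2 = map(math.radians, (*guess, *answer))
    # Haversine formula for great-circle distance.
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    dist_km = 2 * radius_km * math.asin(math.sqrt(h))
    if dist_km <= 100:
        return max_points
    return max(0, round(max_points * (1 - (dist_km - 100) / 4900)))

print(guess_score((48.85, 2.35), (48.86, 2.35)))   # pin right on Paris
print(guess_score((40.71, -74.0), (48.86, 2.35)))  # pin in New York, answer Paris
```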

  • 20:59

    And do we always need our spaces to align exactly? Sometimes it's enough just to be in the same room. This prototype is an example of an AR boat race. You blow on the microphone of your phone, and it creates the wind that propels your boat down the little AR track. By being next to each other when we start the app and spawn the track, we get a shared physical experience even though our AR worlds might not perfectly align. We get to keep all the elements of the social gameplay, talking to each other, our physical presence, but we're not necessarily touching the same objects.
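The blow-to-wind mechanic could be approximated by thresholding microphone amplitude. This is a guess at the approach: `wind_force`, its noise-floor threshold, and the gain are all invented, and a real detector would likely also look at the spectral signature of a blow.

```python
def wind_force(mic_samples, threshold=0.3, gain=2.0):
    """Turn microphone input into boat-propelling wind.

    A blow into the mic shows up as sustained high energy; here we use the
    RMS amplitude of normalized samples (in [-1, 1]) above a noise floor.
    """
    rms = (sum(s * s for s in mic_samples) / len(mic_samples)) ** 0.5
    return max(0.0, (rms - threshold) * gain)

print(wind_force([0.01, -0.02, 0.015, -0.01]))  # ambient noise: no wind
print(wind_force([0.8, -0.7, 0.9, -0.85]))      # blowing: positive force
```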

  • 21:39

    Another super interesting area we've been playing with is how audio can be a way to include multiple people in a single-device AR experience. If you think of the standard magic-window device AR, it's a pretty personal experience: I'm looking at this thing through my phone. But now, imagine you can leave a sound in AR that has a 3D position like any other virtual thing. Now you start to be able to hear it even if you're not necessarily looking at it, and other people can hear the sound from your device at the same time. So for an example, let's say you could leave little notes all over your space. It might look something like this.

  • 22:25

    [AUDIO NOTES PLAYING] I'm a plant. I'm a plant. I'm a plant. I'm an elephant. I'm an elephant. I'm an elephant. This is a chair. This is a chair. This is a chair. I'm a plant. I'm a plant. I'm an elephant. I'm an elephant. This is a chair. This is a chair.

  • 22:49

    So notice, you don't have to be the one with the phone to get a sense of where these audio annotations start to live in physical space.
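Positioned audio notes like these typically use distance-based attenuation so nearby notes sound louder. Here is a minimal sketch using the common inverse-distance model; the function name and parameters are assumptions, not the prototype's actual audio engine.

```python
import math

def spatial_gain(listener_pos, source_pos, ref_dist=1.0, rolloff=1.0):
    """Inverse-distance attenuation for a positioned AR audio note.

    Standard inverse-distance model: gain = ref / (ref + rolloff * (d - ref)),
    clamped to full volume inside the reference distance. Positions are
    (x, y, z) in meters.
    """
    d = max(math.dist(listener_pos, source_pos), ref_dist)
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))

chair_note = (2.0, 0.0, 1.0)
print(spatial_gain((2.0, 0.0, 1.5), chair_note))  # right next to the chair
print(spatial_gain((8.0, 0.0, 1.0), chair_note))  # across the room: quieter
```

Because the sound plays from the device's speaker, everyone nearby hears the notes get louder as the phone approaches them, which is what lets people without the phone build that spatial sense.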

  • 22:59

    Another question we've asked: if you have a synchronous AR experience with multiple people who are in different places, what kind of representation do you need of the other person? So let's imagine you have a shared AR photos app where multiple people can look at photos that are arranged in space. I'm taking pictures in one location, I'm viewing them arranged around me in AR, and then I want to share my AR experience with Luca, who comes in and joins me from a remote location.

  • 23:28

    What we found was that we needed a couple of things to make us feel like we were connected and sharing the same AR experience, even though we were in different places. We needed a voice connection so we could actually talk about the pictures, and we needed to know where the other person was looking, to see which picture you're paying attention to when you're talking about it. But what was interesting is we didn't actually need to know where the other person was, as long as we had that shared frame of reference. We're all here, here's what I'm looking at, here's what Luca's looking at.

  • 24:00

    We've also been curious about asymmetric experiences. What happens when users share the same space and the same augmentations, but they've got different roles in the experience? So for instance, in this prototype, Chris is using his phone as a controller to draw in space, but he's not actually seeing the AR annotations he's drawing. The other person sees the same AR content and uses their phone to take a video. They're playing different roles in the same experience, kind of artist versus cinematographer. And we found there can be some challenges to asymmetric experiences if there's a lack of information about what the other person is experiencing. For instance, Chris can't tell what Luca's filming or see how his drawing looks from far away.

  • 24:47

    So as we mentioned previously, these kinds of different combinations of space, time, and precision are relevant for multi-person AR experiences, and they have different technical and experiential needs.

  • 25:00

    If you have multiple people in the same space with the same augmentations at the same time, then you need a way of sharing, a way of common localization; that's why we created the new Cloud Anchors API. If you've got multiple people in the same space with different augmentations at the same time, the AR LAN party model, you need some way to share data. And if you've got multiple people in different spaces interacting with the same augmentations at the same time, you need sharing and some kind of representation of that interaction.

  • 25:34

    So shared AR experiences are a big area. We've explored some parts of the space; we'd love to see what you all come up with.

  • 25:41

    So Chris has talked about examples where your device understands your surroundings and gives you special powers, and I talked about examples where you've got multiple people who can collaborate and interact. Now Luca will talk about what happens when your devices have a better understanding of you and allow for more expressive inputs. Luca?

  • 26:00

    LUCA PRASSO: Thank you, Ellie. My name is Luca Prasso, and I'm a prototyper and a technical artist working on the Google AR and VR team. So let's talk about the devices that you carry with you every day and the ones that are all around you, and how they can provide the meaningful and authentic signals that we can use in our augmented experiences.

  • 26:23

    So ARCore tracks the device's motion as we move through the real world and provides some understanding of the environment. And these signals can be used to create powerful, creative, and expressive tools, and offer new ways for us to interact with digital content.

  • 26:42

    So the data represents who we are, what we know, and what we have. And we were interested in understanding whether users can connect more deeply if the data is displayed around them in 3D, so that through AR and physical exploration they can look at this data.

  • 27:00

    So we took a database of several thousand world cities,
    So we took a database of several thousand world cities,

  • 27:03

    and we mapped it in an area that's
    and we mapped it in an area that's

  • 27:04

    wide as a football field.
    wide as a football field.

  • 27:06

    We assign a dot to every city and we scale the dot based
    We assign a dot to every city and we scale the dot based

  • 27:11

    on the population of the city.
    on the population of the city.

  • 27:14

    And each country has a different color.
    And each country has a different color.
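    The dot mapping described here could be sketched roughly as follows. The
    field dimensions, the population ceiling, and the sqrt-based scaling are
    all assumptions for illustration, not details given in the talk:

    ```python
    import math

    FIELD_WIDTH_M = 100.0   # "as wide as a football field" (assumed dimensions)
    FIELD_DEPTH_M = 50.0

    def city_to_dot(lon, lat, population, min_r=0.01, max_r=0.5):
        """Map a city (lon/lat in degrees) to a dot position and radius
        in a field-sized AR scene. Dot *area* scales with population so
        visual weight tracks the data."""
        x = (lon + 180.0) / 360.0 * FIELD_WIDTH_M
        z = (90.0 - lat) / 180.0 * FIELD_DEPTH_M
        # Normalize population against a large metro (~40M, assumed) and
        # take sqrt so area, not radius, is proportional to population.
        t = min(population / 4e7, 1.0)
        radius = min_r + (max_r - min_r) * math.sqrt(t)
        return (x, z, radius)
    ```

    A per-country color would then be looked up from a country-to-color table
    before the dot is drawn.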

  • 27:16

    So now you can walk through this data field. And as ARCore tracks the
    motion of the user, we play footsteps in sync. You take a step and you
    hear a step.
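    One way to sync footsteps to tracked motion is to accumulate the
    horizontal distance the camera pose travels and fire a sound each stride.
    This is a sketch, not ARCore API code: `play_sound` is a hypothetical
    callback, and the 0.7 m stride length is an assumed heuristic:

    ```python
    class FootstepSync:
        """Plays a footstep each time the tracked camera travels one stride."""
        STRIDE_M = 0.7  # assumed average stride length

        def __init__(self, play_sound):
            self.play_sound = play_sound
            self.last_pos = None
            self.travelled = 0.0

        def on_pose(self, x, z):
            """Feed the horizontal camera position once per frame."""
            if self.last_pos is not None:
                dx, dz = x - self.last_pos[0], z - self.last_pos[1]
                self.travelled += (dx * dx + dz * dz) ** 0.5
                while self.travelled >= self.STRIDE_M:
                    self.travelled -= self.STRIDE_M
                    self.play_sound()
            self.last_pos = (x, z)
    ```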

  • 27:31

    And an [INAUDIBLE] sound field surrounds the user and enhances the
    experience and the sense of exploration of this data forest. And flight
    paths are displayed up in the sky. And the pass-through camera is heavily
    tinted so that we can allow the user to focus on the data while still
    giving a sense of presence.

  • 27:52

    And what happens is that as the user walks through the physical space, he
    starts mapping, and pairing, and creating this mental map between the data
    and the physical location. And he starts understanding better, in this
    particular case, the relative distance between the places.

  • 28:09

    And what we discovered is also that the gestures that are part of our
    digital life every day, like pinch to zoom, become something more physical
    in AR: it's actually moving closer to the digital object and inspecting it
    like we do with a real object. And pan and drag means taking a couple of
    steps to the right to look at the information.

  • 28:33

    So physical exploration like this is very fascinating, but we need to take
    into account all the different users and provide alternative movement
    affordances. So in AR, a user can move everywhere, but what if he cannot,
    or doesn't want to, move? What if he's sitting?

  • 28:50

    So in this particular case, we allow the user to simply point the phone
    anywhere they want to go, tap anywhere on the screen, and the application
    will move the point of view in that direction.
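    This seated-navigation fallback could be sketched as a virtual offset that
    glides along the direction the phone is pointing. The fixed step size and
    the yaw-only treatment are assumptions for illustration:

    ```python
    import math

    class TapToTravel:
        """On each tap, move the virtual point of view a fixed step along
        the phone's pointing direction (yaw only, in radians)."""
        STEP_M = 1.5  # assumed travel distance per tap

        def __init__(self):
            self.offset = [0.0, 0.0]  # virtual x/z offset added to the pose

        def on_tap(self, yaw_radians):
            self.offset[0] += math.sin(yaw_radians) * self.STEP_M
            self.offset[1] += math.cos(yaw_radians) * self.STEP_M
            return tuple(self.offset)
    ```

    In a real app this offset would be added to the tracked camera pose each
    frame, and the glide would be animated rather than instantaneous.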

  • 29:02

    At the same time, we still have to provide audio, haptics, and color
    effects to enhance the sense of physical space the user has while
    traveling. And so we found that this is a powerful mechanism to explore
    certain types of data that make sense in 3D space, and to allow the user
    to discover hidden patterns. But can we go beyond the pixels that you can
    find on your screen?

  • 29:30

    We're fascinated by spatial audio and ways to incorporate audio into an AR
    experience. So we combined ARCore and the Google Resonance SDK. And
    Resonance is this very powerful spatial audio engine that Google recently
    open-sourced. And you should check it out, because it's great.

  • 29:53

    And so now I can take audio sources and place them into 3D locations, and
    animate them, and describe the properties of the walls, and the ceilings,
    and the floor, and all the obstacles. And now, as ARCore moves the point
    of view, it carries with it the digital ears that Resonance uses to
    accurately render the sounds in the scene.
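    The core idea, the listener pose driving per-source loudness, can be
    illustrated with a toy inverse-distance attenuation function. This is a
    stand-in for what a spatial audio engine like Resonance computes, not its
    actual API:

    ```python
    def source_gain(listener_pos, source_pos, ref_dist=1.0, max_gain=1.0):
        """Toy inverse-distance attenuation: full gain inside the reference
        distance, then gain falls off as ref_dist / distance. The listener
        position would come from the tracked camera pose each frame."""
        d = sum((a - b) ** 2 for a, b in zip(listener_pos, source_pos)) ** 0.5
        return max_gain if d <= ref_dist else max_gain * ref_dist / d
    ```

    A real engine also applies direction-dependent filtering (HRTFs), room
    reverb from the described wall and ceiling materials, and occlusion, which
    is why the effect is far more convincing than simple volume falloff.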

  • 30:17

    So what can we do with this? So we imagined, what if I could sit next to a
    performer during an acoustic concert, or a classical concert, or a jazz
    performance? What if I could be onstage with actors, and listen to their
    play, and be there?

  • 30:35

    So we took two amazing actors, Chris and Ellie, and we asked them to
    separately record lines from Shakespeare. And we placed these audio
    sources a few feet apart, and we surrounded the environment with an
    ambisonic sound field of a rain forest, of rain falling. And then later
    on, we switched to a room with a lot of reverb from the walls.

  • 31:06

    CHRIS KELLEY: Thou told'st me they were stolen unto this wood, and here I
    am, and wode within this wood, because I cannot meet my Hermia. Hence, get
    thee gone, and follow me no more.

  • 31:16

    ELLIE NATTINGER: You draw me, you hard-hearted adamant, but yet you draw
    not iron, for my heart is true as steel. Leave you your power to draw, and
    I shall have no power to follow you.

  • 31:28

    CHRIS KELLEY: Do I entice you? Do I speak to you fair? Or rather, do I not
    in plainest truth tell you, I do not, nor I cannot love you?

  • 31:37

    LUCA PRASSO: So now the user can walk around, maybe with his eyes closed
    and a nice pair of headphones, and it's like being onstage with these
    actors.

  • 31:46

    So we took this example and we extended it. We observed that we can build,
    in real time, a 2D map of where the user has been so far with his phone as
    he's walking around. And so at any given time, when the user hits a
    button, we can programmatically place audio recordings in space where we
    know that the user can reach with the phone and with their ears.
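    That 2D "where has the user been" map could be sketched as an occupancy
    grid of visited cells; audio sources are then dropped only at cell centers
    the user is known to be able to reach. The 0.5 m cell size is an assumed
    parameter:

    ```python
    class VisitedMap:
        """2D occupancy grid of positions the tracked phone has passed
        through, used to pick reachable spots for placed audio."""
        CELL_M = 0.5  # assumed grid resolution

        def __init__(self):
            self.cells = set()

        def on_pose(self, x, z):
            """Mark the cell under the current camera position as visited."""
            self.cells.add((int(x // self.CELL_M), int(z // self.CELL_M)))

        def reachable_spots(self):
            """Centers of visited cells: safe positions for audio sources."""
            return [((cx + 0.5) * self.CELL_M, (cz + 0.5) * self.CELL_M)
                    for cx, cz in sorted(self.cells)]
    ```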

  • 32:11

    [MUSIC PLAYING]

  • 32:30

    And suddenly, the user becomes the human mixer of this experience. And
    different instruments can populate your squares, and your rooms, and your
    schools. And this opens the door to an amazing number of opportunities
    with AR audio-first experiments.

  • 32:49

    So let's go back to visual understanding. Chris mentioned that computer
    vision and machine learning can interpret the things that are around us,
    and this is also important for understanding the body and turning it into
    an expressive controller.

  • 33:03

    So in real life, we are surrounded by a lot of sound sources from all over
    the place. And naturally, our body and our head move to mix and focus on
    what we like and what we want to listen to. So can we take this intuition
    into the way we watch movies or play video games on a mobile device?

  • 33:25

    So what we did: we took the phone camera signal and fed it to Google
    Mobile Vision. That gave us a head position and head orientation. And we
    fed that to the Google Resonance SDK. And we said, OK, you're watching a
    scene in which actors are in a forest, and they're all around you, and
    it's raining. So now, as I hold my phone far away from my head, I hear the
    forest. As I bring the phone closer to my face, I start hearing the actors
    playing.
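    The near/far behavior amounts to a crossfade driven by the face-to-phone
    distance reported by the face tracker. The distance thresholds below are
    assumed values for illustration:

    ```python
    def mix_levels(face_distance_m, near=0.25, far=0.6):
        """Crossfade between actor voices and the ambient forest based on
        how close the phone is to the face. At `near` or closer the mix is
        all actors; at `far` or beyond it is all forest."""
        t = (face_distance_m - near) / (far - near)
        t = min(max(t, 0.0), 1.0)  # clamp to [0, 1]
        return {"actors": 1.0 - t, "forest": t}
    ```

    In practice the tracker gives head position and orientation, so the same
    signal could also steer the spatial listener, not just a two-bus mix.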

  • 34:00

    I warn you, this is an Oscar performance.

  • 34:05

    [THUNDER RUMBLES]

  • 34:10

    ELLIE NATTINGER: Our company here.

  • 34:14

    CHRIS KELLEY: My man, according to the script.

  • 34:16

    ELLIE NATTINGER: Here is the scroll of every man's name which is thought
    fit through all Athens to play in our interlude before the duke and the
    duchess on his [INAUDIBLE]

  • 34:24

    LUCA PRASSO: So now, what is interesting is that the tiny little motions
    we make while watching and playing this experience can be turned into
    subtle changes in the user experience that we can control.

  • 34:38

    So we talked about how changes in pose can become a trigger to drive
    interaction. In this Google Research app called [INAUDIBLE], we actually
    exploit the opposite: the absence of motion. And when the users (in this
    case, my kids) stop posing, the app takes a picture.
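    A stillness-triggered shutter can be sketched as a per-frame motion
    estimate held below a threshold for a run of frames. `take_picture` is a
    hypothetical callback, and the threshold and frame count are assumed
    tuning values:

    ```python
    class StillnessShutter:
        """Fires the camera once tracked motion stays below a threshold
        for N consecutive frames, i.e. the subjects are holding a pose."""
        def __init__(self, take_picture, thresh=0.02, hold_frames=30):
            self.take_picture = take_picture
            self.thresh = thresh
            self.hold_frames = hold_frames
            self.still = 0
            self.armed = True  # one shot per pose, re-armed by movement

        def on_motion(self, magnitude):
            """Feed a per-frame motion estimate (e.g. pose delta norm)."""
            if magnitude < self.thresh:
                self.still += 1
                if self.armed and self.still >= self.hold_frames:
                    self.take_picture()
                    self.armed = False
            else:
                self.still = 0
                self.armed = True
    ```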

  • 34:57

    And so this simple mechanism, triggered by computer vision, creates
    incredible, delightful opportunities that, apparently, my kids love.

  • 35:09

    And Research is making incredible progress in looking at an RGB image and
    understanding where the body pose and skeleton are. And you should check
    out the Google Research blog post, because their pose estimation research
    is amazing.

  • 35:26

    So we took Ellie's video and we fed it to the machine vision algorithm.
    And we got back a bunch of 3D poses and segmentation masks of Ellie. And
    this opens the door to a wide variety of experiments with creative filters
    that we can apply to this. But what's more interesting for us is that it
    also allows us to better understand the intent and the context of the
    user.

  • 35:54

    So we took this pose estimation technology and we added a digital
    character. Now it tries to mimic what the human character is doing. And
    this allows [INAUDIBLE] now to bring your family and friends (in this
    case, my son, Noah) into the scene, so that he can act and create a nice
    video.

  • 36:16

    But here too, like Ellie mentioned before, we should consider the
    situation, because this is an asymmetric experience. What you don't see
    here is how frustrated my son was after a few minutes, because he couldn't
    see what was going on. I was the one having fun taking pictures and videos
    of him, and he didn't see much. He could only hear the lion roaring.

  • 36:42

    So we need to be extremely mindful, as developers, about this imbalance of
    delight. And so maybe I should have cast the image from the phone to a
    nearby TV, so I could make my son a first-class citizen in this
    experience.

  • 37:00

    So all this AR technology and the physical and visual understanding are
    ingredients that allow us to unlock all kinds of new expressive input
    mechanisms. And we are still exploring. We're just at the beginning of
    this journey. But we are excited to hear what you think and what you want
    to come up with.

  • 37:18

    So to summarize, we shared a bunch of ways in which we think about AR and
    the various explorations that we have done. We talked about expanding our
    definition of AR: putting content into the world, but also pulling
    information from the world. And these are all ingredients that we use to
    create these magical AR superpowers, to enhance social interactions and to
    express yourself in this new digital medium.

  • 37:49

    So we combined ARCore capabilities with different Google technologies, and
    this gives us the opportunity to explore all these new interaction models.
    And we encourage you, developers, to stretch your definition of AR. But we
    want to do this together. We're going to keep exploring, but we want to
    hear what tickled you, what tickled your curiosity. So we can't wait to
    see what you build next. Thank you very much for coming.

  • 38:19

    [MUSIC PLAYING]

Exploring AR interaction (Google I/O '18)
