Jaclyn Konzelmann

San Francisco, California, United States
8K followers 500+ connections

About

As a Director of Product Management at Google Labs, I lead the development and launch of…


Experience & Education

  • Google


Patents

  • Assigning priority for an automated assistant according to a dynamic user queue and/or multi-modality presence detection

    Issued US 20220159340A1

    Implementations relate to an automated assistant that provides and manages output from one or more elements of output hardware of a computing device. The automated assistant manages dynamic adjustment of access permissions to the computing device according to, for example, a detected presence of one or more users. An active-user queue can be established each time a unique user enters a viewing window of a camera of the computing device when, up to that point, no user was considered active. Multiple image frames can be captured via the camera and processed to determine whether an initial user remains in the viewing window and/or whether another user has entered the viewing window. The initial user can be considered active as long as they are exclusively detected in the viewing window. Restricted content associated with the user may be rendered by the computing device whilst the user is active.
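The active-user queue this abstract describes can be sketched roughly as follows. The `ActiveUserQueue` class and its string user IDs are hypothetical stand-ins for per-frame face-detection results, not the patented implementation:

```python
class ActiveUserQueue:
    """Tracks which detected user is 'active' for an assistant device.

    A user becomes active when they are the first to enter the camera's
    viewing window, and stays active while they remain detected.
    (Illustrative sketch; user IDs stand in for recognition results.)
    """

    def __init__(self):
        # Insertion order of this dict is the queue order (Python 3.7+).
        self.queue = {}  # user_id -> number of frames seen

    def process_frame(self, detected_user_ids):
        # Newly seen users join the back of the queue.
        for uid in detected_user_ids:
            self.queue[uid] = self.queue.get(uid, 0) + 1
        # Users who have left the viewing window are dropped.
        for uid in list(self.queue):
            if uid not in detected_user_ids:
                del self.queue[uid]

    def active_user(self):
        # The front of the queue is the active user, if anyone is present.
        return next(iter(self.queue), None)

    def may_show_restricted_content(self, detected_user_ids):
        # Restricted content only while the active user is *exclusively* detected.
        active = self.active_user()
        return active is not None and set(detected_user_ids) == {active}
```

For example, if "alice" enters the frame first, she stays active even after "bob" appears, but restricted content is suppressed once she is no longer the only person detected.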

  • Methods and systems for reducing latency in automated assistant interactions

    Issued US 20220351720A1

    Implementations described herein relate to reducing latency in automated assistant interactions. In some implementations, a client device can receive audio data that captures a spoken utterance of a user. The audio data can be processed to determine an assistant command to be performed by an automated assistant. The assistant command can be processed, using a latency prediction model, to generate a predicted latency to fulfill the assistant command. Further, the client device (or the automated assistant) can determine, based on the predicted latency, whether to audibly render pre-cached content for presentation to the user prior to audibly rendering content that is responsive to the spoken utterance. The pre-cached content can be tailored to the assistant command and audibly rendered for presentation to the user while the content is being obtained, and the content can be audibly rendered for presentation to the user subsequent to the pre-cached content.
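The decision the abstract describes, whether to play tailored pre-cached audio while fulfillment is still in flight, might look like this sketch. The latency model is a toy lookup table and the threshold is illustrative:

```python
# Hypothetical threshold: only play filler when fulfillment is predicted
# to take noticeably longer than the filler itself.
FILLER_THRESHOLD_S = 1.5

def predict_latency(command):
    """Stand-in for the latency prediction model in the abstract.

    A real model would score the command; here we use a toy lookup.
    """
    slow_commands = {"set_thermostat": 2.4, "start_vacuum": 3.1}
    return slow_commands.get(command, 0.3)

def plan_response(command):
    """Return the ordered audio segments to render for a command."""
    predicted = predict_latency(command)
    segments = []
    if predicted > FILLER_THRESHOLD_S:
        # Tailored pre-cached content plays while the real response is obtained.
        segments.append(f"pre_cached:{command}")
    # The responsive content always plays, after any filler.
    segments.append(f"response:{command}")
    return segments
```

A fast command skips the filler entirely, so the user is not padded with unnecessary speech.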

  • Enrollment with an automated assistant

    Issued US11238142B2

    Systems and methods for controlling a device in a home automation system based on a speaker-dependent command may include receiving a voice command for controlling the device connected to the home automation system, performing a voice recognition analysis to determine a speaker identity of the received voice command, and performing a speech recognition analysis to identify the device in the home automation system that is intended to be controlled. The systems and methods may include determining a permission status to control the identified device, whereby the determined permission status is based on the determined speaker identity and the identified device. The systems and methods may include controlling the identified device in the home automation system based on the determined permission status.
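The permission check described above can be sketched as a lookup keyed on the recognized speaker and the recognized device. The table contents and function names are hypothetical:

```python
# Hypothetical permission table: speaker identity -> devices they may control.
PERMISSIONS = {
    "parent": {"thermostat", "front_door_lock", "lights"},
    "child": {"lights"},
}

def handle_voice_command(speaker_id, device):
    """Control a device only if the recognized speaker is permitted.

    `speaker_id` stands in for the voice-recognition result and `device`
    for the speech-recognition result described in the abstract.
    """
    allowed = device in PERMISSIONS.get(speaker_id, set())
    if allowed:
        return f"controlling {device}"
    return f"permission denied for {device}"
```

An unrecognized speaker falls through to an empty permission set, so the default is deny.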

  • Regulating assistant responsiveness according to characteristics of a multi-assistant environment

    Issued US 11037562B2

    Implementations set forth herein relate to employing dynamic regulations for governing responsiveness of multiple automated assistant devices, and specifically the responsiveness of an automated assistant to a given spoken utterance that has been acknowledged by two or more of the assistant devices. The dynamic regulations can be context-dependent and adapted over time so that the automated assistant can accommodate assistant interaction preferences that may vary from user to user. For instance, a spoken utterance such as “stop” may be intended to affect different assistant actions based on the context in which the user provided the spoken utterance. The context can refer to a location of the user relative to other rooms in a home, a time of day, a user providing the spoken utterance, an arrangement of the assistant devices within a home, and/or a state of each device in the home.
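Resolving an ambiguous "stop" from the context signals named in the abstract might look like this sketch. The rule ordering and the context fields are illustrative, not the patented regulation logic:

```python
def resolve_stop_target(context):
    """Pick which assistant action a bare 'stop' should affect.

    `context` is a dict of the signals named in the abstract (the user's
    room and per-device room/state); the rules below are illustrative.
    """
    # Prefer a device in the same room that is actively playing something.
    for device in context["devices"]:
        if device["room"] == context["user_room"] and device["state"] == "playing":
            return device["name"]
    # Otherwise fall back to any ringing alarm elsewhere in the home.
    for device in context["devices"]:
        if device["state"] == "alarm_ringing":
            return device["name"]
    # No plausible target: let every device ignore the utterance.
    return None
```

The same utterance thus stops music when the user is next to a playing speaker, but silences a distant alarm when nothing nearby is playing.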

  • Pinning a callout animation

    Issued US 10162502B2

    Animation for the attachment of content items to a location on a content surface in a user interface is provided. A user interface showing a content surface may be displayed on a computer. The content surface may include a content item at an initial position above the content surface. The content surface may display content formatted for display over an area comprising a totality of the content surface. The computer may then receive in the user interface a request to attach the content item to a final position on the content surface. The computer may then display an animation of the content item moving, from the initial position, across the content surface until the final position has been reached. The computer may then attach the content item to the content surface at the final position.
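The motion from the initial position to the pinned final position can be sketched as simple linear interpolation over a fixed number of frames. A real implementation would use the UI toolkit's animation framework and easing curves; this function is only illustrative:

```python
def animate_attach(initial, final, frames):
    """Interpolate a content item from its initial position above the
    surface to its pinned final position, one point per frame.

    Positions are (x, y) tuples; the last point equals `final`, where
    the item is attached to the content surface.
    """
    path = []
    for i in range(frames + 1):
        t = i / frames  # animation progress in [0, 1]
        x = initial[0] + (final[0] - initial[0]) * t
        y = initial[1] + (final[1] - initial[1]) * t
        path.append((x, y))
    return path
```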

  • Display screen with graphical user interface

    Issued US D705243

  • Display screen with user interface

    Issued US D687456

  • Enabling natural conversations with soft endpointing for an automated assistant

    Filed US 20230053341A1

    As part of a dialog session between a user and an automated assistant, implementations can process, using a streaming ASR model, a stream of audio data that captures a portion of a spoken utterance to generate ASR output, process, using an NLU model, the ASR output to generate NLU output, and cause, based on the NLU output, a stream of fulfillment data to be generated. Further, implementations can determine, based on processing the stream of audio data, audio-based characteristics associated with the portion of the spoken utterance captured in the stream of audio data. Based on the audio-based characteristics and/or the stream of NLU output, implementations can determine whether the user has paused in providing the spoken utterance or has completed providing the spoken utterance. If the user has paused, implementations can cause natural conversation output to be provided for presentation to the user.
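The pause-versus-endpoint decision the abstract describes might be sketched as a small classifier over audio- and NLU-derived signals. The input flags and the 500 ms threshold are illustrative assumptions, not the patented model:

```python
def classify_pause(asr_segment_stable, nlu_intent_complete, pause_ms):
    """Decide whether the user has finished speaking or merely paused.

    `asr_segment_stable` and `nlu_intent_complete` stand in for the ASR
    and NLU outputs in the abstract; `pause_ms` stands in for the
    audio-based characteristics. Thresholds are illustrative.
    """
    if pause_ms < 500:
        # Too short a silence to act on either way.
        return "still_speaking"
    if nlu_intent_complete and asr_segment_stable:
        # Hard endpoint: fulfill the utterance.
        return "endpoint"
    # Soft pause: emit natural conversation output (e.g. "mm-hmm")
    # and keep listening for the rest of the utterance.
    return "soft_pause"
```

The soft-pause branch is what lets the assistant keep the microphone open instead of cutting the user off mid-thought.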

  • Enabling natural conversations with soft endpointing for an automated assistant

    Filed EP 4158621A1

    As part of a dialog session between a user and an automated assistant, implementations can process, using a streaming ASR model, a stream of audio data that captures a portion of a spoken utterance to generate ASR output, process, using an NLU model, the ASR output to generate NLU output, and cause, based on the NLU output, a stream of fulfillment data to be generated. Further, implementations can determine, based on processing the stream of audio data, audio-based characteristics associated with the portion of the spoken utterance captured in the stream of audio data. Based on the audio-based characteristics and/or the stream of NLU output, implementations can determine whether the user has paused in providing the spoken utterance or has completed providing the spoken utterance. If the user has paused, implementations can cause natural conversation output to be provided for presentation to the user.

