
Multimedia communications with synchronized graphical user interfaces

- Publication Date: June 28, 2022
- Additional Information
- Patent Number: 11,372,524
- Appl. No: 17/010,022
- Application Filed: September 02, 2020
- Abstract: Techniques for performing multimedia communications including audio and video (including still-frame video) are described. During a multimedia communication involving two devices, one of the devices may receive a user input to display digital book covers. In response, each device may display live video (captured by each device of the multimedia communication) and still-frame video corresponding to digital book covers. Thereafter, in response to a further user input, each device may display the live video and still-frame video corresponding to a specific page of one of the digital books. Each device may receive a user input for controlling the still-frame video displayed.
- Inventors: Amazon Technologies, Inc. (Seattle, WA, US)
- Assignees: Amazon Technologies, Inc. (Seattle, WA, US)
- Claim: 1. A computer-implemented method comprising: establishing a video communication session between a first device and a second device; receiving, from the first device, a first user input requesting content be displayed during the video communication session; determining a profile identifier associated with the first device; sending, to a first content component, the profile identifier and a first request for first content; receiving, from the first content component, first image data corresponding to a first plurality of images comprising a first image and a second image; receiving first video data being output by at least a first camera associated with the first device; receiving second video data being output by at least a second camera associated with the second device; sending, to the first device, third video data comprising the first video data, the second video data, first still-frame video data corresponding to the first image, and second still-frame video data corresponding to the second image; and sending, to the second device, fourth video data comprising the first video data, the second video data, third still-frame video data corresponding to the first image, and fourth still-frame video data corresponding to the second image.
- Claim: 2. The computer-implemented method of claim 1 , further comprising: after sending the third video data and the fourth video data, receiving, from the first device, display coordinates corresponding to a graphical user interface (GUI) user input; determining the display coordinates correspond to a location on a display where the first still-frame video data is being displayed; sending, to the first content component, a second request for second content corresponding to the first image; receiving, from the first content component, second image data representing a third image corresponding to the second content; sending, to the first device, fifth video data comprising the first video data, the second video data, and fifth still-frame video data corresponding to the third image; and sending, to the second device, sixth video data comprising the first video data, the second video data, and sixth still-frame video data corresponding to the third image.
- Claim: 3. The computer-implemented method of claim 1 , further comprising: after sending the third video data and the fourth video data, receiving, from the first device, display coordinates corresponding to a graphical user interface (GUI) user input; determining the display coordinates correspond to a graphical element to display additional content; sending, to the first content component, a second request for additional content; receiving, from the first content component, second image data corresponding to a second plurality of images comprising a third image and a fourth image; sending, to the first device, fifth video data comprising the first video data, the second video data, fifth still-frame video data corresponding to the third image, and sixth still-frame video data corresponding to the fourth image; and sending, to the second device, sixth video data comprising the first video data, the second video data, seventh still-frame video data corresponding to the third image, and eighth still-frame video data corresponding to the fourth image.
- Claim: 4. The computer-implemented method of claim 1 , further comprising: after sending the third video data and the fourth video data, receiving, from the first device, display coordinates corresponding to a graphical user interface (GUI) user input; determining the display coordinates correspond to a graphical element to cease displaying content during the video communication session; and in response to determining the display coordinates correspond to the graphical element, causing the video communication session to be performed as a peer-to-peer video communication session wherein: the first device displays the first video data and the second video data, and the second device displays the first video data and the second video data.
- Claim: 5. A computer-implemented method comprising: establishing a first multimedia communication session between a first device and a second device; receiving, from the first device, a first user input requesting a book be displayed during the first multimedia communication session; determining first image data corresponding to a first book; receiving first video data being output by at least one camera associated with the first device; receiving second video data being output by at least one camera associated with the second device; sending, to the first device, third video data comprising the first video data, the second video data, and first still-frame video data corresponding to the first image data; and sending, to the second device, fourth video data comprising the first video data, the second video data, and second still-frame video data corresponding to the first image data.
- Claim: 6. The computer-implemented method of claim 5 , further comprising: generating display location data representing display coordinates corresponding to a location of the first still-frame video data on a display associated with the first device; and sending the display location data to the first device.
- Claim: 7. The computer-implemented method of claim 5 , further comprising: receiving, from the first device, a graphical user interface (GUI) user input corresponding to the first still-frame video data; determining second image data corresponding to a page of the first book; and sending, to the first device, fifth video data comprising the first video data, the second video data, and third still-frame video data corresponding to the page of the first book.
- Claim: 8. The computer-implemented method of claim 5 , further comprising: receiving, from the first device, a graphical user interface (GUI) user input corresponding to a graphical element to display additional content; determining second image data corresponding to a second book; and sending, to the first device, fifth video data comprising the first video data, the second video data, and third still-frame video data corresponding to the second book.
- Claim: 9. The computer-implemented method of claim 5 , further comprising: prior to receiving the first user input, performing the first multimedia communication session as a peer-to-peer multimedia communication session between the first device and the second device; and after receiving the first user input, by a distributed system: receiving the first video data from the second device, generating the second video data, and sending the second video data to the first device.
- Claim: 10. The computer-implemented method of claim 9 , further comprising: after sending the third video data to the first device and the fourth video data to the second device, receiving, from one of the first device or the second device, a second user input to cease display of content during the first multimedia communication session; and based at least in part on receiving the second user input, causing the first multimedia communication session to revert to the peer-to-peer multimedia communication session.
- Claim: 11. The computer-implemented method of claim 5, further comprising: establishing a second multimedia communication session between the first device and the second device; receiving, from the first device, a second user input requesting a video be displayed during the second multimedia communication session; determining second image data corresponding to a cover of a first movie; determining third image data corresponding to a cover of a second movie; receiving fifth video data being output by the at least one camera associated with the first device; receiving sixth video data being output by the at least one camera associated with the second device; sending, to the first device, seventh video data comprising the fifth video data, the sixth video data, third still-frame video data corresponding to the cover of the first movie, and fourth still-frame video data corresponding to the cover of the second movie; and sending, to the second device, eighth video data comprising the fifth video data, the sixth video data, fifth still-frame video data corresponding to the cover of the first movie, and sixth still-frame video data corresponding to the cover of the second movie.
- Claim: 12. A computing system, comprising: at least one processor; and at least one memory comprising instructions that, when executed by the at least one processor, cause the computing system to: establish a multimedia communication session between a first device and a second device; receive, from the first device, a first user input requesting content be displayed during the multimedia communication session; send, to a first content component, a request for first content; receive, from the first content component, first image content data comprising a first image and a second image; receive first video data being output by at least one camera associated with the first device; receive second video data being output by at least one camera associated with the second device; send, to the first device, third video data comprising the first video data, the second video data, first still-frame video data corresponding to the first image, and second still-frame video data corresponding to the second image; and send, to the second device, fourth video data comprising the first video data, the second video data, third still-frame video data corresponding to the first image, and fourth still-frame video data corresponding to the second image.
- Claim: 13. The computing system of claim 12 , wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: generate display location data representing display coordinates corresponding to a location of the first still-frame video data on a display associated with the first device; and send the display location data to the first device.
- Claim: 14. The computing system of claim 12 , wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: receive, from the first device, a graphical user interface (GUI) user input corresponding to the first still-frame video data; determine second image data associated with the first image data, the second image data corresponding to a third image; and send, to the first device, fifth video data comprising the first video data, the second video data, and fifth still-frame video data corresponding to the third image.
- Claim: 15. The computing system of claim 12, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: receive, from the first device, a graphical user interface (GUI) user input corresponding to a graphical element to display additional content; determine second image data corresponding to a third image and a fourth image; and send, to the first device, fifth video data comprising the first video data, the second video data, fifth still-frame video data corresponding to the third image, and sixth still-frame video data corresponding to the fourth image.
- Claim: 16. The computing system of claim 12 , wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: prior to receiving the first user input, perform the multimedia communication session as a peer-to-peer multimedia communication session between the first device and the second device; and after receiving the first user input, by a distributed system: receive the first video data from the second device, generate the second video data, and send the second video data to the first device.
- Claim: 17. The computing system of claim 16 , wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: after sending the third video data to the first device and the fourth video data to the second device, receive, from one of the first device or the second device, a second user input to cease display of content during the multimedia communication session; and based at least in part on receiving the second user input, cause the multimedia communication session to revert to the peer-to-peer multimedia communication session.
- Claim: 18. The computing system of claim 12, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the computing system to: receive, from the first device, a graphical user interface (GUI) user input corresponding to the first still-frame video data; send, to the first content component, a request for second content associated with the first image content data; receive, from the first content component, fifth video data associated with the first image; and send, to the first device, sixth video data comprising the first video data, the second video data, and the fifth video data.
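The claims above describe a media-routing component that combines live video from both devices with still-frame video rendered from content images (for example, digital book covers) and sends a synchronized composite to each device. The following is a minimal, illustrative sketch of that compositing layout only; it is not Amazon's implementation, and every class, function, and identifier in it is hypothetical.

```python
# Illustrative sketch only, not Amazon's implementation: it models the data
# flow described in claims 1, 5, and 12, where a media-routing component
# combines live video from both devices with still-frame video rendered from
# content images (e.g., digital book covers) and sends a composited, mirrored
# layout to each device. All names below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Tile:
    """One rectangular region of the composited GUI."""
    source_id: str   # e.g., "live:device-1" or "still:cover-42"
    x: int
    y: int
    width: int
    height: int


@dataclass
class CompositeLayout:
    """Per-device description of what the outgoing video stream shows."""
    device_id: str
    tiles: List[Tile] = field(default_factory=list)


def build_layouts(device_ids: List[str],
                  cover_image_ids: List[str],
                  canvas_w: int = 1280,
                  canvas_h: int = 720) -> Dict[str, CompositeLayout]:
    """Place live-video tiles along the top and still-frame (cover) tiles
    below, producing one layout per participating device."""
    live_h = canvas_h // 3
    live_w = canvas_w // max(len(device_ids), 1)
    cover_h = canvas_h - live_h
    cover_w = canvas_w // max(len(cover_image_ids), 1)

    layouts: Dict[str, CompositeLayout] = {}
    for device_id in device_ids:
        layout = CompositeLayout(device_id=device_id)
        # Live video captured by every participant ("first" and "second" video data).
        for i, src in enumerate(device_ids):
            layout.tiles.append(Tile(f"live:{src}", i * live_w, 0, live_w, live_h))
        # Still-frame video rendered from each content image (e.g., book covers).
        for j, img in enumerate(cover_image_ids):
            layout.tiles.append(Tile(f"still:{img}", j * cover_w, live_h, cover_w, cover_h))
        layouts[device_id] = layout
    return layouts


if __name__ == "__main__":
    for device_id, layout in build_layouts(["device-1", "device-2"],
                                           ["cover-1", "cover-2"]).items():
        print(device_id, [t.source_id for t in layout.tiles])
```

Every device receives the same tile set, which reflects the synchronized graphical user interfaces named in the title.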
- Patent References Cited: 2004/0008635 January 2004 Nelson
2007/0199076 August 2007 Rensin
2008/0016156 January 2008 Miceli
2008/0178230 July 2008 Eyal
2009/0010485 January 2009 Lamb
2010/0064334 March 2010 Blackburn
2010/0138746 June 2010 Zarom
2013/0328997 December 2013 Desai
2014/0368734 December 2014 Hoffert
2016/0349965 December 2016 Griffin
2017/0279867 September 2017 Morton
2017/0293458 October 2017 Poel
2019/0037173 January 2019 Lee
- Other References: Facebook Portal, "A whole new way to share stories", webpage retrieved on Sep. 4, 2020 via https://portal.facebook.com/features/story-time/, 9 pages. cited by applicant
Caribu, https://caribu.com, webpage retrieved on Sep. 4, 2020, 19 pages. cited by applicant
Webwise.ie, "Explained: What is Twitch?", webpage retrieved on Sep. 4, 2020 via https://www.webwise.ie/parents/explained-what-is-twitch/, 6 pages. cited by applicant
- Primary Examiner: Hailu, Tadesse
- Attorney, Agent or Firm: Pierce Atwood LLP
- Identifier: edspgr.11372524
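Claims 2 through 4 and 6 through 10 further describe handling graphical user interface input by hit-testing the reported display coordinates against the layout previously sent to the device, then either requesting further content (such as a page of the selected book) or reverting the session to peer-to-peer when content display is dismissed. The sketch below illustrates that decision logic under the same assumptions as the previous sketch (the hypothetical Tile and CompositeLayout types); the "control:close-content" element is likewise an assumption, not taken from the patent.

```python
# Illustrative sketch only, not the patented implementation: it models the
# GUI-input handling of claims 2-4 and 6-10, in which reported display
# coordinates are hit-tested against the layout previously sent to the device,
# and the system either requests further content (e.g., a page of the selected
# book) or reverts the session to peer-to-peer. Tile and CompositeLayout are
# the hypothetical types from the previous sketch.
from typing import Optional


def hit_test(layout: "CompositeLayout", x: int, y: int) -> Optional["Tile"]:
    """Return the tile containing display coordinates (x, y), if any."""
    for tile in layout.tiles:
        if tile.x <= x < tile.x + tile.width and tile.y <= y < tile.y + tile.height:
            return tile
    return None


def handle_gui_input(layout: "CompositeLayout", x: int, y: int) -> str:
    """Decide what the media-routing component should do with a GUI tap."""
    tile = hit_test(layout, x, y)
    if tile is None:
        return "ignore"
    if tile.source_id == "control:close-content":
        # Claims 4 and 10: cease displaying content and revert the session
        # to a peer-to-peer multimedia communication session.
        return "revert-to-peer-to-peer"
    if tile.source_id.startswith("still:"):
        # Claims 2 and 7: fetch content for the selected still-frame tile
        # (e.g., a page of that book) and re-composite both devices' streams.
        return "request-page-for:" + tile.source_id.split(":", 1)[1]
    return "ignore"
```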