
Leveraging user signals for improved interactions with digital personal assistant

  • Publication Date:
    October 31, 2017
  • Additional Information:
    • Patent Number:
      9,807,559
    • Appl. No:
      14/315049
    • Application Filed:
      June 25, 2014
    • Abstract:
      Systems, methods, apparatuses, and computer program products are described for implementing a digital personal assistant. The digital personal assistant is capable of determining that a user has asked a question or made a statement that is intended to engage with a persona of the digital personal assistant. In response to determining that the user has asked such a question or made such a statement, the digital personal assistant provides a response thereto by displaying or playing back a multimedia object associated with a popular culture reference within or by a user interface of the digital personal assistant. Additionally or alternatively, in response to determining that the user has asked such a question or made such a statement, the digital personal assistant provides the response thereto by generating or playing back speech that comprises an impersonation of a voice of a person associated with the popular culture reference.
    • Inventors:
      Microsoft Corporation (Redmond, WA, US)
    • Assignees:
      Microsoft Technology Licensing, LLC (Redmond, WA, US)
    • Claim:
      1. A method performed by a digital personal assistant comprising a software agent implemented on at least one computing device, comprising: obtaining one or more first signals related to an availability of a user; obtaining one or more second signals related to a mental or emotional state of the user; determining whether a particular time is an appropriate time to attempt to initiate a conversation between the software agent and the user based at least on the first signal(s) and the second signal(s); and in response to a determination that the particular time is an appropriate time to attempt to initiate the conversation between the software agent and the user: querying the user to determine if he or she is available to converse; and in response to receiving a positive response to the query, selecting a conversation topic, and initiating a conversation between the software agent and the user about the selected conversation topic.
    • Claim:
      2. The method of claim 1, wherein the first signal(s) comprise one or more of: calendar information associated with the user; daily habits information associated with the user; and information associated with a current activity of the user.
    • Claim:
      3. The method of claim 1, wherein the second signal(s) comprise one or more of: facial expressions of the user; voice characteristics of the user; a location of the user; a rate at which the user is turning on and off a mobile device; keystroke and/or gesture metadata associated with the user; written and/or spoken content of the user; application interaction metadata associated with the user; accelerometer, compass, and/or gyroscope output; degree of exposure to light; temperature; weather conditions; traffic conditions; pollution and/or allergen levels; activity level of the user; heart rate and heart rate variability of the user; electrodermal activity of the user; device and/or network connection information for a device associated with the user; and battery and/or charging information for a device associated with the user.
    • Claim:
      4. The method of claim 1, wherein the second signal(s) comprise one or more signals identified by a machine learner as being determinative of the mental or emotional state of the user.
    • Claim:
      5. The method of claim 1, wherein the machine learner is trained by one or more of a test population and the user.
    • Claim:
      6. The method of claim 1, wherein selecting the conversation topic comprises: selecting the conversation topic based on one or more of a set of current events and a set of interests of the user.
    • Claim:
      7. The method of claim 6, wherein the set of current events are stored in one or more first databases, and wherein the set of interests of the user are stored in one or more second databases.
    • Claim:
      8. A system, comprising: at least one processor; and a memory that stores computer program logic for execution by the at least one processor, the computer program logic including one or more components configured to perform operations when executed by the at least one processor, the one or more components including: a digital personal assistant comprising a software agent configured to obtain one or more first signals related to an availability of a user, to obtain one or more second signals related to a mental or emotional state of the user, to determine whether a particular time is an appropriate time to attempt to initiate a conversation between the software agent and the user based at least on the first signal(s) and the second signal(s), and in response to a determination that the particular time is an appropriate time to attempt to initiate the conversation between the software agent and the user: query the user to determine if he or she is available to converse; and in response to receiving a positive response to the query, select a conversation topic, and initiate a conversation with the user about the selected conversation topic.
    • Claim:
      9. The system of claim 8, wherein the first signal(s) comprise one or more of: calendar information associated with the user; daily habits information associated with the user; and information associated with a current activity of the user.
    • Claim:
      10. The system of claim 8, wherein the second signal(s) comprise one or more of: facial expressions of the user; voice characteristics of the user; a location of the user; a rate at which the user is turning on and off a mobile device; keystroke and/or gesture metadata associated with the user; written and/or spoken content of the user; application interaction metadata associated with the user; accelerometer, compass, and/or gyroscope output; degree of exposure to light; temperature; weather conditions; traffic conditions; pollution and/or allergen levels; activity level of the user; heart rate and heart rate variability of the user; electrodermal activity of the user; device and/or network connection information for a device associated with the user; and battery and/or charging information for a device associated with the user.
    • Claim:
      11. The system of claim 8, wherein the second signal(s) comprise one or more signals identified by a machine learner as being determinative of the mental or emotional state of the user.
    • Claim:
      12. The system of claim 8, wherein the machine learner is trained by one or more of a test population and the user.
    • Claim:
      13. The system of claim 8, wherein the software agent is configured to select the conversation topic based on one or more of a set of current events and a set of interests of the user.
    • Claim:
      14. The system of claim 13, wherein the set of current events are stored in one or more first databases, and wherein the set of interests of the user are stored in one or more second databases.
    • Claim:
      15. A computer program product comprising a computer-readable memory device having computer program logic recorded thereon that when executed by at least one processor causes the at least one processor to perform a method of operating a digital personal assistant comprising a software agent, the method comprising: obtaining from a mobile device operated by a user one or more first signals related to an availability of the user; obtaining from the mobile device one or more second signals related to a mental or emotional state of the user; determining whether a particular time is an appropriate time to attempt to initiate a conversation between the software agent and the user based at least on the first signal(s) and the second signal(s); and in response to a determination that the particular time is an appropriate time to attempt to initiate the conversation between the software agent and the user: querying the user to determine if he or she is available to converse; and in response to receiving a positive response to the query, selecting a conversation topic, and initiating a conversation between the software agent and the user about the selected conversation topic.
    • Claim:
      16. The computer program product of claim 15, wherein the first signal(s) comprise one or more of: calendar information associated with the user; daily habits information associated with the user; and information associated with a current activity of the user.
    • Claim:
      17. The computer program product of claim 15, wherein the second signal(s) comprise one or more of: facial expressions of the user; voice characteristics of the user; a location of the user; a rate at which the user is turning on and off a mobile device; keystroke and/or gesture metadata associated with the user; written and/or spoken content of the user; application interaction metadata associated with the user; accelerometer, compass, and/or gyroscope output; degree of exposure to light; temperature; weather conditions; traffic conditions; pollution and/or allergen levels; activity level of the user; heart rate and heart rate variability of the user; electrodermal activity of the user; device and/or network connection information for a device associated with the user; and battery and/or charging information for a device associated with the user.
    • Claim:
      18. The computer program product of claim 15, wherein the second signal(s) comprise one or more signals identified by a machine learner as being determinative of the mental or emotional state of the user.
    • Claim:
      19. The computer program product of claim 15, wherein the machine learner is trained by one or more of a test population and the user.
    • Claim:
      20. The computer program product of claim 15, wherein selecting the conversation topic comprises selecting the conversation topic based on one or more of a set of current events and a set of interests of the user.
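    • Illustrative Example:
      The independent claims (1, 8, and 15) share one decision flow: gather first signals (availability) and second signals (mental or emotional state), decide whether the moment is appropriate, query the user, and only on a positive response select a topic and start the conversation. The minimal Python sketch below traces that flow; it is not the patented implementation, and the signal representations (a boolean calendar flag and a scalar stress score) are hypothetical simplifications the claims do not specify.

```python
from dataclasses import dataclass


@dataclass
class Signals:
    """Hypothetical condensed signals; the claims list many more."""
    calendar_free: bool   # a "first signal" about availability (cf. claim 2)
    stress_score: float   # a "second signal" summary, 0.0 calm .. 1.0 stressed


def is_appropriate_time(s: Signals, stress_threshold: float = 0.5) -> bool:
    """Combine first and second signals into a go/no-go decision."""
    return s.calendar_free and s.stress_score < stress_threshold


def select_topic(current_events: list[str], interests: set[str]) -> str:
    """Prefer a current event matching the user's interests (cf. claim 6)."""
    for event in current_events:
        if event in interests:
            return event
    return current_events[0] if current_events else "small talk"


def try_initiate(s, user_says_yes, current_events, interests):
    """Claim-1 flow: query only at an appropriate time; talk only on a yes.

    Returns the selected topic, or None if no conversation is started.
    """
    if not is_appropriate_time(s):
        return None  # not an appropriate time; do not interrupt the user
    if not user_says_yes:
        return None  # user declined the availability query
    return select_topic(current_events, interests)
```

      For example, `try_initiate(Signals(True, 0.2), True, ["weather", "sports"], {"sports"})` returns "sports", while a stressed user (`stress_score` above the threshold) is never queried at all.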
    • Patent References Cited:
      6731307 May 2004 Strubbe
      7974849 July 2011 Begole
      8954372 February 2015 Biehl
      2002/0029203 March 2002 Pelland et al.
      2003/0046401 March 2003 Abbott
      2005/0054381 March 2005 Lee et al.
      2006/0129405 June 2006 Elfanbaum
      2007/0121882 May 2007 Timmins et al.
      2007/0300225 December 2007 Macbeth
      2008/0208015 August 2008 Morris
      2009/0106848 April 2009 Coley
      2009/0318777 December 2009 Kameyama
      2010/0099955 April 2010 Thomas
      2010/0223341 September 2010 Manolescu
      2010/0262487 October 2010 Edwards
      2011/0044431 February 2011 Klemm
      2011/0283190 November 2011 Poltorak
      2013/0110520 May 2013 Cheyer et al.
      2013/0110895 May 2013 Valentino
      2013/0212501 August 2013 Anderson et al.
      2013/0302766 November 2013 Gold
      2014/0214994 July 2014 Rueckert
      2093966 August 2009
      2011110727 September 2011


    • Other References:
      "International Search Report & Written Opinion Received for PCT Application No. PCT/US2015/036858", Mailed Date: Sep. 30, 2015, 11 Pages. cited by applicant
      "Second Written Opinion Issued in PCT Application No. PCT/US2015/036858", Mailed Date: Apr. 29, 2016, 6 Pages. cited by applicant
      "International Preliminary Report on Patentability Issued in PCT Application No. PCT/US2015/036858", Mailed Date: Oct. 13, 2016, 7 Pages. cited by applicant
    • Primary Examiner:
      Arevalo, Joseph
    • Attorney, Agent or Firm:
      Fiala & Weaver P.L.L.C.
    • Identifier:
      edspgr.09807559