
Image data encoding/decoding method and apparatus

  • Publication Date:
    October 22, 2024
  • Additional Information
    • Patent Number:
      12,126,786
    • Appl. No:
      18/664676
    • Application Filed:
      May 15, 2024
    • Abstract:
      A method for decoding a 360-degree image includes: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; combining the generated prediction image with a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Here, generating the prediction image includes: checking, from the syntax information, prediction mode accuracy for a current block to be decoded; determining whether the checked prediction mode accuracy corresponds to most probable mode (MPM) information obtained from the syntax information; and when the checked prediction mode accuracy does not correspond to the MPM information, reconfiguring the MPM information according to the prediction mode accuracy for the current block.
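      The abstract's prediction-mode accuracy check can be sketched as follows. This is an illustrative reading only, not the patented implementation: the function names, the syntax-element layout, and the remapping rule (snapping each candidate to the nearest mode representable at the block's accuracy) are all assumptions.

```python
# Illustrative sketch of the abstract's MPM-accuracy check.
# All names and the remapping rule are hypothetical.

def reconfigure_mpm(mpm_list, mode_accuracy):
    """Remap each MPM candidate to a mode representable at the
    current block's prediction-mode accuracy (assumed rule)."""
    step = 1 << mode_accuracy  # spacing between representable modes
    return [(m // step) * step for m in mpm_list]

def derive_mpm(syntax):
    """Check whether the signalled accuracy matches the MPM list;
    if not, reconfigure the list, as the abstract describes."""
    mpm = syntax["mpm_list"]
    acc = syntax["mode_accuracy"]
    if any(m % (1 << acc) for m in mpm):
        mpm = reconfigure_mpm(mpm, acc)
    return mpm
```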
    • Inventors:
      B1 INSTITUTE OF IMAGE TECHNOLOGY, INC. (Seoul, KR)
    • Assignees:
      B1 INSTITUTE OF IMAGE TECHNOLOGY, INC. (Seoul, KR)
    • Claim:
      1. A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image by adding a residual image for the extended 2-dimensional image to a predicted image for the extended 2-dimensional image, the residual image being obtained by performing inverse quantization for residual information, wherein a size of the extension region to be padded is determined based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.
    • Claim:
      2. The method of claim 1, wherein the padding method is selected from the plurality of padding methods based on selection information obtained from the bitstream.
    • Claim:
      3. The method of claim 1, wherein the plurality of padding methods includes at least a first padding method which copies sample values of the face for the sample values of the extension region.
    • Claim:
      4. The method of claim 2, wherein the first padding method horizontally copies the sample values of the face to the sample values of the extension region.
    • Claim:
      5. The method of claim 1, wherein the plurality of padding methods includes at least a second padding method which changes sample values of the face for the sample values of the extension region.
    • Claim:
      6. A method for encoding a 360-degree image, the method comprising: obtaining a 2-dimensional image projected from an image with a 3-dimensional projection structure and including at least one face; obtaining an extended 2-dimensional image including the 2-dimensional image and a predetermined extension region; and encoding a residual image for the extended 2-dimensional image into a bitstream in which the 360-degree image is encoded, the residual image being obtained by subtracting a predicted image for the extended 2-dimensional image from the extended 2-dimensional image, the residual image being encoded by performing quantization for the residual image, wherein a size of the extension region to be padded is encoded based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being encoded into the bitstream, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.
    • Claim:
      7. A non-transitory computer-readable recording medium storing a bitstream that is generated by a method for encoding a 360-degree image, the method comprising: obtaining a 2-dimensional image projected from an image with a 3-dimensional projection structure and including at least one face; obtaining an extended 2-dimensional image including the 2-dimensional image and a predetermined extension region; and encoding a residual image for the extended 2-dimensional image into a bitstream in which the 360-degree image is encoded, the residual image being obtained by subtracting a predicted image for the extended 2-dimensional image from the extended 2-dimensional image, the residual image being encoded by performing quantization for the residual image, wherein a size of the extension region to be padded is encoded based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being encoded into the bitstream, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.
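      The extension-region padding in claims 3 and 4 (the horizontal-copy method) can be illustrated with a minimal sketch. The function name and the row-of-samples layout are assumptions for illustration, not the claimed implementation; the two width parameters correspond to the claims' first and second width information.

```python
def pad_face_horizontal(face, left_width, right_width):
    """Extend each row of a projected face by horizontally copying
    its edge samples: left_width copies on the left, right_width on
    the right (illustrative version of claims 3-4's first method)."""
    padded = []
    for row in face:
        padded.append([row[0]] * left_width + row + [row[-1]] * right_width)
    return padded
```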
    • Patent References Cited:
      5440346 August 1995 Alattar et al.
      5448297 September 1995 Alattar et al.
      5724451 March 1998 Shin et al.
      6314209 November 2001 Kweon et al.
      7623682 November 2009 Park et al.
      9554138 January 2017 Lee et al.
      9721393 August 2017 Dunn et al.
      10015512 July 2018 Song et al.
      10115031 October 2018 Pashintsev et al.
      10225577 March 2019 Chen
      10306258 May 2019 Bankoski et al.
      10779004 September 2020 Huang
      20060034529 February 2006 Park et al.
      20070230803 October 2007 Ueno et al.
      20070280348 December 2007 Tokumitsu et al.
      20090028245 January 2009 Vieron et al.
      20090265748 October 2009 Dotchevski
      20120046716 February 2012 Dougal
      20120243608 September 2012 Yu et al.
      20120243797 September 2012 Di Venuto Dayer
      20130051452 February 2013 Li et al.
      20130077884 March 2013 Ikai
      20130128971 May 2013 Guo et al.
      20130136184 May 2013 Suzuki et al.
      20130182775 July 2013 Wang et al.
      20130259193 October 2013 Packard et al.
      20130266064 October 2013 Zhang et al.
      20140079332 March 2014 Zheng
      20140126645 May 2014 Lim et al.
      20140133581 May 2014 Naito
      20140177720 June 2014 Zhang et al.
      20150124867 May 2015 Jaeger et al.
      20150201217 July 2015 Lu et al.
      20150222897 August 2015 Park
      20150301777 October 2015 Jang
      20160112721 April 2016 An et al.
      20160219241 July 2016 Korneliussen et al.
      20160219280 July 2016 Korneliussen et al.
      20160227214 August 2016 Rapaka et al.
      20160277739 September 2016 Puri et al.
      20160323561 November 2016 Jin et al.
      20160328824 November 2016 Kim et al.
      20170013261 January 2017 Lin
      20170076429 March 2017 Russell
      20170208336 May 2017 Li et al.
      20170230668 August 2017 Lin et al.
      20170280126 September 2017 Van der Auwera et al.
      20170280143 September 2017 Xu et al.
      20170318287 November 2017 Lee et al.
      20180020202 January 2018 Xu et al.
      20180070106 March 2018 Han
      20180070110 March 2018 Chuang
      20180160113 June 2018 Jeong et al.
      20180176596 June 2018 Jeong et al.
      20180176601 June 2018 Jeong et al.
      20180184082 June 2018 Yoo et al.
      20180213264 July 2018 Zhang et al.
      20180288410 October 2018 Park et al.
      20190082184 March 2019 Hannuksela
      20190141318 May 2019 Li
      20190208200 July 2019 Galpin et al.
      20190215532 July 2019 He et al.
      20190222862 July 2019 Shin et al.
      20190297350 September 2019 Lin et al.
      20190387229 December 2019 Oh et al.
      20200007862 January 2020 Lin et al.
      20200029092 January 2020 Rath et al.
      20200288136 September 2020 Fang et al.
      10-2006-0050350 May 2006
      10-2007-0103347 October 2007
      10-2013-0027975 March 2013
      10-2014-0008503 January 2014
      10-2015-0068299 June 2015
      10-2016-0032909 March 2016
      WO 2017/123980 July 2017



    • Other References:
      International Search Report, PCT/KR2017/011138, dated Feb. 6, 2018, 10 pgs. cited by applicant
      Kim, Non-Final Office Action, U.S. Appl. No. 16/372,287, Apr. 1, 2019, 6 pgs. cited by applicant
      Francois et al., "CE6b: Mode ranking for remaining mode coding with 2 or 3 MPMs," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JCTVC-G242, Nov. 21-30, 2011, 5 pgs. cited by applicant
      Kim, Notice of Action, KR-10-2019-7011757, Jun. 30, 2020, 5 pgs. cited by applicant
    • Primary Examiner:
      Rahman, Mohammad J
    • Attorney, Agent or Firm:
      NSIP Law
    • Identifier:
      edspgr.12126786