MM 111202


Presentation Transcript

Slide1: 

Digital Video

Slide2: 

Video
Video comes from a camera, which records what it sees as a sequence of images.
Image frames comprise the video.
Frame rate: the presentation of successive frames, with minimal image change between frames.
The frequency of frames is measured in frames per second (fps).
Sequencing of still images creates the illusion of movement; > 16 fps is 'smooth'.
Standards: 29.97 fps is NTSC, 24 for movies, 25 is PAL, 60 is HDTV.
Standard Definition Broadcast TV (NTSC): 15 bits/pixel of color depth and 525 lines of resolution with a 4:3 aspect ratio. Scanning practices leave a smaller safe region.
Display scan rate is different: monitor refresh rate is 60 - 70 Hz (Hz = 1/s).
Interlacing: half the scan lines at a time (-> flicker)

The Video Data Firehose: 

The Video Data Firehose
To play one SECOND of uncompressed 16-bit color, 640 x 480 resolution digital video requires approximately 18 MB of storage. One minute would require about 1 GB. A CD-ROM can only hold about 600 MB, and a single-speed (1x) player can only transfer 150 KB per second. Data storage and transfer problems increase proportionally with 24-bit color playback. Without compression, digital video would not be possible with current storage technology.
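
The arithmetic above is easy to check. A quick sketch, assuming 2 bytes per pixel and a 30 fps rate (which the ~18 MB figure implies):

```python
# Back-of-the-envelope check of the slide's numbers:
# uncompressed 16-bit color at 640x480, ~30 frames per second.
width, height = 640, 480
bytes_per_pixel = 2          # 16-bit color
fps = 30

bytes_per_second = width * height * bytes_per_pixel * fps
mb_per_second = bytes_per_second / (1024 * 1024)
mb_per_minute = mb_per_second * 60

print(f"{mb_per_second:.1f} MB/s")                  # ~17.6 MB/s, matching the ~18 MB figure
print(f"{mb_per_minute / 1024:.2f} GB per minute")  # ~1.03 GB
```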

Storage/Transmission Issues: 

Storage/Transmission Issues
The storage/transmission requirements for video are determined by:
Video Source Data * Compression = Storage
The amount of required storage is determined by how much and what type of video data is in the uncompressed signal and how much the data can be compressed. In other words, the original video source and the desired playback parameters dramatically affect the final storage/transmission needs.

Video Compression: 

Video Compression
The person recording video to be digitized can drastically affect the later compression steps. Video in which backgrounds are stable (or change slowly) for a period of time will yield a high compression rate. Scenes in which only a person's face from the shoulders upward is captured against a solid background will result in excellent compression. This type of video is often referred to as a 'talking head'.

Filtering: 

Filtering
A filtering step does not achieve compression, but may be necessary to minimize compression artifacts. Filtering is a preprocessing step performed on video frame images before compression. Essentially it smooths the sharp edges in an image where a sudden shift in color or luminance has occurred. The smoothing is performed by averaging adjacent groups of pixel values. Without filtering, decompressed video exhibits aliasing (jagged edges) and moiré patterns.

Data Reduction through Scaling: 

Data Reduction through Scaling
The easiest way to save memory is to store less, e.g. through size scaling. Original digital video standards only stored a video window of 160 x 120 pixels, a reduction to 1/16th the size of a 640 x 480 window. A 320 x 240 digital video window size is currently about standard, yielding a 4 to 1 data reduction. A further scaling application involves time instead of space: in temporal scaling, the number of frames per second (fps) is reduced from 30 to 24. If the fps is reduced below 24, the reduction becomes noticeable in the form of jerky movement.
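
The reductions quoted above are just pixel-count ratios:

```python
def pixel_reduction(full, scaled):
    """Ratio of pixel counts between two frame sizes, given as (width, height)."""
    return (full[0] * full[1]) / (scaled[0] * scaled[1])

print(pixel_reduction((640, 480), (160, 120)))  # 16.0 -> 1/16th the data
print(pixel_reduction((640, 480), (320, 240)))  # 4.0  -> a 4 to 1 reduction
```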

Compression through Transformation: 

Compression through Transformation
Codecs (COmpression/DECompression algorithms) transform a two-dimensional spatial representation of an image into another dimension space (usually frequency). Since most natural images are composed of low-frequency information, the high-frequency components can be discarded. [What are high frequency components?] This results in a softer picture in terms of contrast. Most commonly, the frequency information is represented as 64 coefficients due to the underlying DCT (Discrete Cosine Transform) algorithm, which operates upon 8 x 8 pixel grids. Low-frequency terms occur in one corner of the grid, with high-frequency terms occurring in the opposite corner of the grid.
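
For reference, the 8 x 8 DCT can be sketched directly from its definition. This is a naive O(N^4) implementation for illustration, not the fast transform real codecs use:

```python
import math

N = 8  # the DCT in video codecs operates on 8x8 pixel grids

def dct2(block):
    """Naive 2-D DCT-II of an NxN block (list of lists of pixel values)."""
    def c(k):  # normalization factor
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat (constant) block: all energy lands in the low-frequency corner.
flat = [[100] * N for _ in range(N)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # 800: the DC term; every other coefficient is ~0
```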

Compression through Quantization: 

Compression through Quantization
The lossy quantization step of digital video uses fewer bits to represent larger quantities. The 64 frequency coefficients of the DCT transformation are treated as real numbers. These are quantized into 16 different levels. The high-frequency components (sparse in real-world images) are represented with only 0, 1 or 2 bits. The zero-mapped frequencies drop out and are lost.
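
A minimal sketch of uniform quantization with an assumed step size (the step value here is illustrative, not taken from any standard): small high-frequency coefficients round to zero and drop out, which is exactly where the loss happens.

```python
def quantize(coeff, step):
    """Map a real DCT coefficient onto a coarse integer level."""
    return round(coeff / step)

def dequantize(level, step):
    """Recover an approximation of the original coefficient."""
    return level * step

step = 16  # assumed step size; codecs use larger steps for high frequencies
for c in [130.0, 7.9, -3.2]:
    q = quantize(c, step)
    print(c, "->", q, "->", dequantize(q, step))
# 130.0 survives as 128; the small coefficients 7.9 and -3.2 map to 0 and are lost
```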

Frame Compaction: 

Frame Compaction
The last step in compressing individual frames (intraframe compression) is a sequence of three standard text-file compression schemes: run-length encoding (RLE), Huffman coding, and arithmetic coding. RLE replaces sequences of identical values with the number of times the value occurs followed by the value (e.g., 11111000011111100000 ==> 51406150). Huffman coding replaces the most frequently occurring values/strings with the smallest codes. Arithmetic coding, similar to Huffman coding, codes the commonly occurring values/strings using fractional bit codes.
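
The RLE example above can be reproduced in a few lines. This sketch encodes each run as count-then-value, matching the 51406150 output:

```python
from itertools import groupby

def rle(s):
    """Run-length encode a string: each run becomes <count><value>."""
    return "".join(f"{len(list(g))}{v}" for v, g in groupby(s))

print(rle("11111000011111100000"))  # "51406150", as in the slide
```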

Interframe Compression (MPEG style): 

Interframe Compression (MPEG style)
Interframe compression takes advantage of minimal changes from one frame to the next to achieve dramatic compression. Instead of storing complete information about each frame, only the difference information between frames is stored. MPEG stores three types of frames: The first type, the I-frame, stores all of the intraframe compression information, using no frame differencing. The second type, the P-frame, is a predicted frame two or four frames in the future; this is compared with the corresponding actual future frame and the differences are stored (the error signal). The third type, B-frames, are bidirectional interpolative predicted frames that fill in the skipped frames.
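
A minimal sketch of the frame-differencing idea, using flat lists of pixel values. This illustrates only the "store differences" principle, not the actual MPEG macroblock and motion-vector machinery:

```python
def frame_diff(prev, curr):
    """Store only per-pixel differences between consecutive frames."""
    return [c - p for p, c in zip(prev, curr)]

def reconstruct(prev, diff):
    """Rebuild the next frame from the previous frame plus the difference."""
    return [p + d for p, d in zip(prev, diff)]

i_frame    = [10, 10, 10, 200, 200, 10]  # stored in full (no differencing)
next_frame = [10, 10, 10, 201, 200, 10]  # only one pixel changed

diff = frame_diff(i_frame, next_frame)
print(diff)  # [0, 0, 0, 1, 0, 0] -- mostly zeros, which compresses very well
```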

Slide12: 

Streaming Video
Access disk fast enough (RAIDs)
Don’t download everything first: play as you start to download
Keep a buffer for variable network speed (equivalent to sampling a CD faster and filling a buffer)
Drop frames/packets when you fall behind (not TCP)
Adjust the bandwidth dynamically (needs multiple encoding formats)
RTSP, QT, MS ASF, H.323 (video conferencing)
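
A toy simulation of the buffering policy described above: start playback once the buffer passes a threshold, and drop data rather than stall when the network falls behind. All names and numbers here are illustrative, not from any real protocol:

```python
def simulate(arrivals, playback_rate, start_threshold):
    """Model a playback buffer fed by a variable network.

    arrivals: units of media arriving each tick.
    playback_rate: units consumed per tick once playing.
    start_threshold: buffer level at which playback begins.
    Returns (units played, units dropped).
    """
    buffer, playing, played, dropped = 0, False, 0, 0
    for incoming in arrivals:
        buffer += incoming
        if not playing and buffer >= start_threshold:
            playing = True                 # don't download everything first
        if playing:
            if buffer >= playback_rate:
                buffer -= playback_rate
                played += playback_rate
            else:                          # fell behind: drop, don't stall
                dropped += playback_rate - buffer
                played += buffer
                buffer = 0
    return played, dropped

# Network stalls mid-stream (ticks with 0 arrivals), forcing a drop:
print(simulate([5, 5, 0, 0, 0, 5], playback_rate=3, start_threshold=6))  # (13, 2)
```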

Slide13: 

Webcasting
LIVE: encode fast enough
Stream to multiple users connected at the same time
Only time-synchronous viewing

Video Data Rates: 

Video Data Rates

Slide15: 

MPEG: Motion Picture Experts Group
MPEG-1 (1992): compression for storage, 1.5 Mbps, frame-based compression
MPEG-2 (1994): digital TV, 6.0 Mbps, frame-based compression
MPEG-4 (1998): multimedia applications, digital TV, synthetic graphics; lower bit rate; object-based compression
MPEG-7: Multimedia Content Description Interface, XML-based
MPEG-21: digital identification, IP rights management

Slide16: 

MPEG-1 System Layer Combines one or more data streams from the video and audio parts with timing information to form a single stream suited to digital storage or transmission.

Slide17: 

MPEG-1 Video Layer
A coded representation that can be used for compressing video sequences, both 625-line and 525-line, to bitrates around 1.5 Mbit/s. Developed to operate from storage media offering a continuous transfer rate of about 1.5 Mbit/s.
Different techniques for video compression: Select an appropriate spatial resolution for the signal. Use block-based motion compensation to reduce the temporal redundancy. Motion compensation is used for causal prediction of the current picture from a previous picture, for non-causal prediction of the current picture from a future picture, or for interpolative prediction from past and future pictures. The difference signal, the prediction error, is further compressed using the discrete cosine transform (DCT) to remove spatial correlation and is then quantised. Finally, the motion vectors are combined with the DCT information and coded using variable-length codes.
When storing differences, MPEG actually compares a block of pixels (a macroblock) and, if a difference is found, searches for the block in nearby regions. This can be used to alleviate slight camera movement and stabilize an image. It is also used to efficiently represent motion by storing the movement information (the motion vector) for the block.

Slide18: 

MPEG-1 Video Layer

Slide19: 

MPEG-1 I, B, P Frames
Choice of audio encoding
Picture size and bitrate are variable
No closed-captions, etc.
Group of Pictures (GoP): one I frame in every group, 10-15 frames per group
P depends only on I; B depends on both I and P
B and P are random within the GoP

Slide20: 

MPEG-1 Audio Layer Compress audio sequences in mono or stereo. Encoding creates a filtered and subsampled representation of the input audio stream. A psychoacoustic model creates data to control the quantiser and coding. The quantiser and coding block creates coding symbols from the mapped input samples. The block 'frame packing' assembles the actual bitstream from the output data of the other blocks and adds other information (e.g. error correction) if necessary.

Slide21: 

MPEG-1 Audio Layer

MPEG Streaming in variable networks: 

MPEG Streaming in variable networks
Problem: available bandwidth is slightly too low and varying, shared by other users/applications
Target application: Informedia MPEG movie database (terabytes)
http://www.cineflo.com: CMU spinoff startup company for adaptive MPEG-1 video transmission

System Overview: 

System Overview Application-aware network Network-aware application

Architecture: 

Architecture Maintain two connections control connection: TCP data connection: UDP Fits with the JAVA security model

Congestion Analysis and Feedback: 

Congestion Analysis and Feedback Client notices changes in loss rate and notifies filter ... Variable-size sliding window and two thresholds Filter modifies rate by clever manipulation of data stream Client is less aggressive in recapturing bandwidth

Filter: 

Filter Acts as mediator between client and upstream MPEG Video format dependent Performs on-the-fly low-cost computational modifications to data stream Paces data stream

MPEG-1 Systems Stream: 

MPEG-1 Systems Stream Padding Audio[0] Audio[1] Pack layer Packet layer

MPEG Sensitivity to Network Losses: 

MPEG Sensitivity to Network Losses

MPEG Video Filtering: 

MPEG Video Filtering

MPEG System Sensitive Video Filtering: 

MPEG System Sensitive Video Filtering
Reduce network traffic by filtering frames on-the-fly and at low cost!
Maintain smoothness
Maintain synchronization data
Adjust the packet layer

Evaluation: 

Evaluation Constant heavy competing load

Streaming based on estimated need: 

Streaming based on estimated need Smarter Streaming for interactivity Break apart I, P, B frames Client decides which are more likely to be needed and requests those from server for the client cache Differential weights on frames based on need Also weighting based on type of frame (I,P,B) since you can’t decode a B frame without the I and P. Can only achieve savings of ~ 30% over raw MPEG-1

Slide33: 

MPEG-2
Digital television (4 - 9 Mb/s)
Satellite dishes, digital cable video
Larger data size, includes closed captions (CC)
More complex encoding (takes a 'long time')
Almost HDTV

Slide34: 

HDTV
2x the horizontal and vertical resolution of SDTV
SDTV: 480 lines, 720 pixels per line, 29.97 frames per second x 16 bits/pixel = 168 Mbit/s uncompressed. MPEG-1 brings this to 1.5 Mbit/s at VHS quality.
HDTV: expanded to 1080 lines, 1920 pixels per line, 60 fps x 16 bits/pixel = 1990 Mbit/s uncompressed. MPEG-2-like encoding, different audio encoding.
HDTV audio compression is based on the Dolby AC-3 system, with a 48 kHz sampling rate, perceptually coded.
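
Checking the slide's numbers. Note the 168 Mbit/s SD figure corresponds to roughly 486 active lines at 30 fps; with the 480 lines and 29.97 fps quoted here the result comes out closer to 166:

```python
def uncompressed_mbps(lines, pixels_per_line, fps, bits_per_pixel):
    """Raw (uncompressed) video bitrate in Mbit/s."""
    return lines * pixels_per_line * fps * bits_per_pixel / 1e6

sd = uncompressed_mbps(480, 720, 29.97, 16)
hd = uncompressed_mbps(1080, 1920, 60, 16)
print(f"SD: {sd:.0f} Mbit/s")  # ~166 Mbit/s (the slide rounds up to 168)
print(f"HD: {hd:.0f} Mbit/s")  # ~1991 Mbit/s, matching the slide's 1990
```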

Why HDTV?: 

Why HDTV? Higher-resolution picture Wider picture Digital surround sound. Additional data Easy to interface with computers

Current TV Standards: 

Current TV Standards
NTSC: National Television Systems Committee
PAL: Phase Alternation Line
SECAM: Séquentiel Couleur à Mémoire

HDTV and NTSC Specifications: 

HDTV and NTSC Specifications

Analog bandwidth of HDTV signals?: 

Analog bandwidth of HDTV signals?
For an HDTV image size of 1050 by 600 at 30 frames per second, the bandwidth required to carry that image quality over the analog transmission system is 18 MHz. However, transmitting it in digital format would require even more bandwidth. With MPEG-2 compression, the bit rate is reduced from more than 1 Gbps to about 20 Mbps, which requires only 6 MHz of bandwidth to transmit digitally.
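
The compression ratio implied by those figures is easy to compute:

```python
# Ratios implied by the slide: >1 Gbps raw HDTV squeezed into ~20 Mbps.
raw_bps = 1.0e9         # "more than 1 Gbps" uncompressed (lower bound)
compressed_bps = 20e6   # ~20 Mbps after MPEG-2 compression

ratio = raw_bps / compressed_bps
print(f"MPEG-2 compression ratio: about {ratio:.0f}:1")  # about 50:1
```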

Architecture of HDTV Receivers: 

Architecture of HDTV Receivers
[Block diagram: analog carrier + digital signals -> Demodulator -> digital signals -> Demultiplexer -> Image Decoder and Audio Decoder -> decoded video/audio signals -> Display Processor -> display-format video signals and audio signals]

Aspect ratio of movies vs. HDTV?: 

Aspect ratio of movies vs. HDTV?
The aspect ratio of HDTV is 16:9. However, movies have many different aspect ratios: 'Movies are always shot so they can be displayed in several aspect ratios at different types of movie theaters, from the shoebox-sized foreign movie houses to the ultra big screen Star Wars jobs.' (Franco Vitaliano, http://www.vxm.com/21R.107.html)

Original Timeline of HDTV: 

Original Timeline of HDTV
First began in the 60’s at NHK, the Japan Broadcasting Corporation
In 1993, the FCC suggested an alliance that could create the best possible system
November 1998: HDTV transmissions begin at 27 stations in the top 10 markets
May 1999: network affiliates in the top 10 markets must show at least 50% digital programming
November 1999: digital broadcasts in the next 20 largest markets
May 2002: remaining commercial stations must convert
2003: public stations must convert to digital broadcasts
2004: stations must simulcast at least 75% of their analog programming on HDTV
2005: stations must simulcast 100% of their analog programming
2006: stations relinquish their current analog spectrum; NTSC TV sets will no longer be able to pick up broadcast signals

Spring 2001 Status: 

Spring 2001 Status 18 digital TV formats are approved by FCC More than 27 digital channels being broadcast by ABC, CBS, FOX, NBC DirecTV has one HDTV channel Cox is broadcasting two HDTV channels

Hardware Requirements: 

Hardware Requirements
Digital decoder: converts digital signals to analog, allowing a current TV set to work
Digital-ready TV set: wide-screen format, progressive scanning
HDTV set: wide-screen format, can receive the 18 digital input formats

Comparison: 

Comparison Current TV HDTV

Comparison (current TV): 

Comparison (current TV)

Comparison (HDTV): 

Comparison (HDTV)

Slide47: 

Digital Video Disc (DVD)
Video vs. computer (ROM) formats
Single (R) and multiple (RAM) recordings possible
Up to 17 GB of data
12 cm optical disc format data storage medium
Replaces optical media such as the laserdisc, audio CD, and CD-ROM; will also replace VHS tape as a distribution format for movies
MPEG-2 encoding

Slide48: 

DVD Features
Language choice (for automatic selection of video scenes, audio tracks, subtitle tracks, and menus) - optional
Special-effects playback: freeze, step, slow, fast, and scan (no reverse play or reverse step)
Parental lock (for denying playback of discs or scenes with objectionable material) - optional
Programmability (playback of selected sections in a desired sequence)
Random play and repeat play
Digital audio output (PCM stereo and Dolby Digital)
Compatibility with audio CDs
Digital zoom
Six-channel audio

Slide49: 

MPEG-4 MPEG 2 plus Interactive Graphics Applications Interactive multimedia (WWW), networked distribution

Slide50: 

MPEG-4
Bitrates from 5 kb/s to 10 Mb/s
Several extension 'profiles'
Very high quality video
Better compression than MPEG-1
Low-delay audio and error resilience
Support for 'objects'
Face animation
Support for efficient streaming
Limited industry activity at this point

Slide51: 

MPEG-4 from: http://mpeg.telecomitalialab.com/standards/mpeg-4/mpeg-4.htm

Slide52: 

MPEG-4

Slide53: 

MPEG-4 Example Difficulty is in separating foreground from background automatically http://www.dbvision.net Object Vision codec by Diamondback Vision (startup .com)

Slide54: 

MPEG-7
Data + Multimedia Content Description Scheme
Description Definition Language (XML-based)
Still not ‘final’, but close
Does not deal with data, but with meta-data transmission
Description Scheme + Content Description, e.g.: table of contents, still images, summaries, links, etc.
How does the description data get generated? How is it used?

MPEG-7 Examples: 

MPEG-7 Examples
<VideoText id='VideoText1' textType='Superimposed'>
  <MediaTime>
    <MediaTimePoint>T0:0:0:0</MediaTimePoint>
    <MediaDuration>PT6S</MediaDuration>
  </MediaTime>
  <Text xml:lang='en-us'>CNN World News</Text>
</VideoText>
<TextProperty>
  <FreeText xml:lang='en'>World Today</FreeText>
  <SyncTime>
    <MediaRelTimePoint>PT01N30F</MediaRelTimePoint>
    <MediaDuration>PT2S</MediaDuration>
  </SyncTime>
</TextProperty>
<Place>

MPEG-7 Examples Cont’d: 

MPEG-7 Examples Cont’d
  <Name xml:lang='en'>Kabul</Name>
  <GPSCoordinates type='latlon'>69.137E 34.531N</GPSCoordinates>
  <Country>Afghanistan</Country>
  <Region>Velayat</Region>
  <AdministrativeUnit type='city'>Kabul</AdministrativeUnit>
</Place>

Slide57: 

MPEG-7

MPEG-21 (Draft): 

MPEG-21 (Draft) http://mpeg.telecomitalialab.com/standards/mpeg-21/mpeg-21.htm

Video Compression Styles: 

Video Compression Styles
Symmetric codecs require inverse operations to decompress the format. Asymmetric codecs use different compression/decompression methods: more processing time is spent in compressing, to achieve low storage requirements and allow for shorter decompression time.

Other Compression Schemes: 

Other Compression Schemes
QuickTime (Apple), Video for Windows: open architectures allowing different codecs
Motion JPEG: no interframe compression
Cinepak: an asymmetric codec designed for 24-bit video in a 320 x 240 window for single-speed CD-ROM drives; compression typically takes 300 times longer than decompression
Indeo: asymmetric codec (Intel); playback can take place on an Intel 486 processor without any hardware assistance; less efficient than Cinepak
DVI (Digital Video Interactive): requires off-line supercomputer processing power for the compression

Slide61: 

QuickTime
An ISO standard for digital media, created by Apple Computer Inc., 1993
Audio, animation, video, and interactive capabilities for the PC
Allows integration of MPEG technology into QuickTime
QuickTime is available for MS Windows/NT as well
QuickTime movies have the file extensions .qt and .mov
Description: http://www.apple.com/quicktime/specifications.html
ftp://ftp.intel.com/pub/IAL/multimedia/indeo/utilities/smartv.exe converts QuickTime to AVI and back

Slide62: 

Video Players for your PC
To play a movie on your computer, you need a multimedia player, e.g. an MPEG player, Windows Media Player, RealPlayer, or a QuickTime player. These players are also called decoders because they decode the MPEG, QuickTime, RealNetworks, etc. compressed codes. Some software allows you to both encode and decode multimedia files, i.e. to make and play the files; you’ll use both for your digital video homework assignment. Some software only allows you to play back multimedia files. When digitizing from a VCR, the quality of the videotape recording and playback process limits the quality the digital video capturing system can achieve. Consumer-grade recorders should be at least S-VHS or Hi-8 to give adequate quality in the computer representation.

References: 

References http://www.cato.org/pubs/regulation/reg16n4b.html http://web-star.com/hdtv/faq.html http://web-star.com/hdtv/perspective.html http://bock.bushwick.com/hdtv_ppt/ http://web-star.com/hdtv/history.html http://www.cnn.com/TECH/computing/9910/26/pc.hdtv.idg/ http://money.cnn.com/services/tickerheadlines/bw/222470357.htm

References: 

References MPEG-1 System Layer MPEG-1 Video Layer MPEG-1 Audio Layer Definition of Video Terms

Slide65: 

Digital Video That’s all for today
