Capture and Compression of Multi-Viewpoint Video

Allen Klinger

Background

Panoramic and multi-viewpoint images enable three-dimensional reconstruction of solid models of scenes. Digital computer programs process the constituent images to build realistic models. In many cases such programs are made efficient by special-purpose hardware, particularly for video capture and processing. The high value of the resulting three-dimensional information and the low cost of imaging devices make research in this area likely to yield application benefits.

Most panoramic techniques either use fisheye lenses or stitch together adjoined images. Capturing and processing the huge volume of video data poses significant technical challenges for existing systems that acquire multi-viewpoint data. As a result, current technology can only produce still images: it cannot handle time-varying or video data.
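To make the stitching idea concrete, the following is a minimal illustrative sketch, not any production panoramic method: it aligns two overlapping one-dimensional grayscale scanlines by searching for the overlap width with the smallest mean squared difference, then joins them. Real stitching systems work on two-dimensional images with feature matching and blending; all names here are ours, chosen for illustration.

```python
# Minimal sketch of stitching two overlapping 1-D grayscale scanlines.
# Illustrative only; real panoramic stitching operates on 2-D images
# with feature matching, warping, and blending.

def best_overlap(left, right, max_overlap):
    """Find the overlap width minimizing mean squared difference."""
    best_w, best_err = 1, float("inf")
    for w in range(1, max_overlap + 1):
        err = sum((a - b) ** 2 for a, b in zip(left[-w:], right[:w]))
        err /= w  # normalize so different widths are comparable
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def stitch(left, right, max_overlap=8):
    """Join two scanlines, dropping the duplicated overlap region."""
    w = best_overlap(left, right, max_overlap)
    return left + right[w:]

left = [10, 20, 30, 40, 50, 60]
right = [40, 50, 60, 70, 80]   # overlaps the last three samples of `left`
print(stitch(left, right))     # -> [10, 20, 30, 40, 50, 60, 70, 80]
```

The search is brute force over overlap widths; this is exactly the part that becomes expensive for full-resolution video and motivates the hardware acceleration discussed in this proposal.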

This proposal proceeds in cooperation with a commercial company that owns proprietary technology, including techniques to capture both panoramic and multi-viewpoint imagery. The company, Reality Commerce Corporation (RCC), will supply technical knowledge, information and practical experience based on its systems (see System Architecture below).

We will conduct joint research on new methods for handling video imagery, based on specific RCC technology, including proprietary real-time data compression.

System Architecture

The main activity will combine two pending RCC hardware patents with a similar software item resulting from the investigator's research. The central objective is improving overall performance in displaying a sequence of images derived from multiple viewpoints. The three basic technologies, each a patent-pending item, are:

1. Parallel Multi-Viewpoint Video Capturing and Compression (PMVCC): Method and Apparatus, U.S. Ser. No. 60/191,721. (RCC)

2. Subject Video Streaming: Methods and Systems, U.S. Ser. No. 60/191,754. (RCC)

3. Executing Remote Procedures in a Remote Processor From a Client Process Executed in a Local Processor, U.S. Ser. No. 09/786,286. (Klinger/Darrah)

Algorithmic Processing

The core issue is accelerating construction of either a sequence of images or a solid model obtained by combining multiple views. Procedures suitable for teaching and entertainment applications are needed for both still images and moving pictures.

We plan to investigate commercial media processors capable of capturing and compressing one or two channels of real-time video. An in-depth study of their technical and economic characteristics will inform our first objective: determining the design and implementation issues for a new multiple-instruction, multiple-data (MIMD) video parallel processing system.
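The MIMD organization can be sketched in software: each channel runs its own independent capture-and-compress pipeline, with no instruction stream shared between channels. The sketch below is our own illustration under stated assumptions, not RCC's design; threads stand in for parallel hardware units, simulated pixel lists stand in for frames, and all function names are hypothetical.

```python
# Minimal sketch of MIMD-style parallel handling of several video channels:
# each worker independently runs its own capture-and-compress pipeline.
# Threads stand in for parallel hardware units; frames are simulated.

from concurrent.futures import ThreadPoolExecutor

def capture(channel_id, n_frames=4, width=8):
    """Simulate grabbing uniform frames from one camera channel."""
    return [[(channel_id * 10 + f) % 256] * width for f in range(n_frames)]

def compress(frames):
    """Toy run-length compression of each frame: (value, count) pairs."""
    out = []
    for frame in frames:
        runs, prev, count = [], frame[0], 1
        for px in frame[1:]:
            if px == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = px, 1
        runs.append((prev, count))
        out.append(runs)
    return out

def channel_pipeline(channel_id):
    """One channel's full pipeline: capture then compress."""
    return channel_id, compress(capture(channel_id))

# Four channels processed concurrently, each with its own pipeline.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(channel_pipeline, range(4)))

print(len(results))  # 4 channels processed
```

The design point the sketch makes is that channels share no intermediate state, so throughput can scale by adding processing units, which is the property a hardware MIMD system would exploit.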

The proposal involves extending and increasing existing capabilities. The methods involve parallel, and hence massive, information handling, which is needed for practical three-dimensional imaging. We will investigate related compression algorithms for such systems. (The generic term for hardware and software that create reality models is immersion systems.)
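One family of compression algorithms relevant here exploits frame-to-frame redundancy in video. As a minimal sketch of the principle only (RCC's proprietary compression is not described here), the code below stores the first frame whole and then only per-pixel differences for later frames; real codecs additionally quantize and entropy-code these residuals.

```python
# Minimal sketch of inter-frame (temporal) compression: keep the first
# frame as a keyframe, then store only per-pixel residuals for later
# frames. Lossless here; real codecs quantize and entropy-code residuals.

def delta_encode(frames):
    """Encode a frame sequence as a keyframe plus residual frames."""
    encoded = [list(frames[0])]                     # keyframe stored whole
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for c, p in zip(cur, prev)])
    return encoded

def delta_decode(encoded):
    """Rebuild frames by accumulating residuals onto the keyframe."""
    frames = [list(encoded[0])]
    for residual in encoded[1:]:
        frames.append([p + r for p, r in zip(frames[-1], residual)])
    return frames

frames = [[100, 100, 100], [100, 101, 100], [100, 102, 101]]
enc = delta_encode(frames)
assert delta_decode(enc) == frames                  # lossless round trip
```

Because scene content changes slowly between adjacent frames, the residual frames are mostly zeros and compress far better than the raw frames, which is the redundancy a parallel multi-viewpoint system must exploit to keep data rates practical.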

Applications

Online Entertainment — interactive multi-viewpoint 3D video images of live performances, sports, concerts and theatre presentations.

E-commerce Industry — e-commerce platforms for businesses and consumers through web host designers, service providers and large retailers.

Training and Distance Learning — interactive 3D techniques; education tailored to customers.

Image Support — facilities for the disabled; directorial tools; synthesis for ground modeling.

Research Objectives

Design and analyze the MIMD video parallel processing system architecture.

Design and analyze the parallel compression algorithm.

Compare the proposed solution with other competing technologies.

Research Team

Dr. Allen Klinger, Professor, Computer Science Department, UCLA

Mr. Ping Liu, M.Sc., Post Engineer at UCLA; Director of Hardware Development, Reality Commerce Corp.

(Résumés are available upon request.)

Research Schedule

Month 1: Further investigation of related research

Months 2–4: Media parallel processing system hardware architecture design and analysis

Months 5–7: Parallel video compression algorithm design and analysis

Months 8–9: Research project reporting

Research Project Cost

Desktop computer: $2,500.00

Travel: $8,000.00

Literature and industry standards purchases: $1,000.00

Video capture card: $1,000.00

Two video cameras: $1,000.00 ($500 each)

Video utility software: $3,000.00

Total: $16,500.00

Immersion Systems

1. National Tele-immersion Initiative Web site:

http://www.advanced.org/teleimmersion.html

2. Tele-immersion at Brown University:

http://www.cs.brown.edu/~lsh/telei.html

Andries van Dam, Loring Holden, Robert C. Zeleznik

3. Tele-immersion at the University of North Carolina at Chapel Hill:

http://www.cs.unc.edu/Research/stc/teleimmersion/

Team Members: Henry Fuchs, Herman Towles, Greg Welch, Wei-Chao Chen, Ruigang Yang, Sang-Uok Kum, Andrew Nashel, Srihari Sukumaran

4. Tele-immersion at the University of Pennsylvania:

http://www.cis.upenn.edu/~sequence/teleim1.html

Ruzena Bajcsy, Kostas Daniilidis, Jane Mulligan, Ibrahim Volkan Isler

http://www.cis.upenn.edu/~sequence/teleim2.html

5. Tele-immersion site at Internet2:

http://www.internet2.edu/html/tele-immersion.html

6. Advanced Networks and Services:

http://www.advanced.org/teleimmersion.html

Jaron Lanier, Amela Sadagic