DEMO SESSION

Chair: Christian Timmerer, Klagenfurt University

Email: christian.timmerer(at)itec(dot)uni-klu(dot)ac(dot)at

Live demonstrations provide an opportunity for companies and universities to show their latest work to the growing number of people interested in MPEG, and also an opportunity for those people to learn about emerging technologies through discussion of technical details with the developers themselves.

Demonstrations will be presented by the developers who will focus on technical details and be able to answer technical questions. Come along and get a flavour of the demonstrations at the demo session on Monday, July 17 between 3 p.m. and 8 p.m. in the main Aula of the university. A light dinner will be provided around 6 p.m.

Each demonstration will be briefly introduced by its presenter(s), so you'll know what to look forward to at the demonstration itself.

The demo session is sponsored by:

FTRD - France Telecom Research & Development
DANAE - Dynamic and distributed Adaptation of scalable multimedia coNtent in a context-Aware Environment (FP6-2002-IST-1 507113)
Multimedia Lab (MMLab) at Ghent University
Adactus - Personalized Mobile Experience
Siemens AG - Corporate Technology IC 2
ON DEMAND Microelectronics AG
M3-Systems Research Lab

Topics: mobile multimedia delivery to different end devices, adaptive video streaming, metadata service discovery

ON DEMAND Microelectronics AG

DEMO 1: MPEG-4 AVC/H.264

ODM shows a software solution for decoding MPEG-4 AVC/H.264 streams in CIF and D1 resolution. The coded stream is transmitted from the PC to the demo board via a TCP/IP connection. On the demo board, the stream is received by a Single Board Computer (SBC), which forwards it to the MPEG-4 AVC/H.264 decoder. The decoder consists of several Vector Signal Processors (VSPs) and an FPGA: the FPGA hosts the entropy decoder, while the VSPs run the computationally intensive reconstruction and post-processing parts. The entropy decoder parses the stream and always passes on whole frames to the reconstruction and post-processing stage, where IQ, IDCT, inter/intra prediction, and de-blocking are performed.
After a frame is decoded, it is temporarily stored in a frame buffer and used as a reference for the reconstruction of subsequent frames. The SBC picks up the decoded frames from the frame buffer and sends them back to the PC via the TCP/IP connection.
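The data flow described above can be sketched as a three-stage pipeline. This is an illustrative model only, not ODM's implementation; the stage functions and frame representation are assumptions for the example.

```python
# Illustrative sketch of the decode pipeline described above: entropy decoding
# (the FPGA stage), reconstruction (the VSP stage), and a frame buffer from
# which the SBC picks up decoded frames. All names here are hypothetical.
from collections import deque

def entropy_decode(bitstream):
    """Parse the coded stream and yield whole frames (stand-in for the FPGA stage)."""
    for coded_frame in bitstream:      # assume the stream is pre-split into frames
        yield coded_frame

def reconstruct(coded_frame, reference):
    """Stand-in for the VSP stage: IQ, IDCT, inter/intra prediction, de-blocking."""
    # A real decoder would compute pixel data here, using `reference`
    # for inter prediction; we only record the dependency.
    return {"data": coded_frame, "ref": reference}

def decode(bitstream):
    frame_buffer = deque(maxlen=2)     # decoded frames, reused as references
    output = []
    for coded_frame in entropy_decode(bitstream):
        reference = frame_buffer[-1] if frame_buffer else None
        frame = reconstruct(coded_frame, reference)
        frame_buffer.append(frame)     # keep as reference for the next frame
        output.append(frame)           # the SBC sends these back to the PC
    return output

frames = decode(["I0", "P1", "P2"])
```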

DEMO 2: SHDSL

ODM shows a software solution for DSL communication standards. The demonstration is configured to drive an SHDSL-compliant connection at up to 2312 kbit/s using Trellis-coded pulse-amplitude modulation. The setup consists of two notebooks connected via ODM software modems. Each modem consists of 5 VSP cores computing the SHDSL signal processing algorithms. Modem status information such as timing recovery, filter coefficients, and signal-to-noise ratio is displayed in real time while video clips are transported over the line.

SHDSL is a communication recommendation by the ITU (G.991.2) which allows payload data rates from 192 kbit/s to 2312 kbit/s on a single copper pair. The startup of an SHDSL connection begins with a handshake procedure (G.994.1). After the first handshake phase, the modems have agreed to run the same standard.
Note that upstream and downstream use different sets of tones. Line probing is then used to test the condition of the line. During the second handshake phase, the modems must agree on a common data rate for SHDSL activation; if no agreement is reached, the startup process is aborted.
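As a rough sketch, the rate negotiation in the second handshake phase amounts to finding a data rate both modems support. This is a simplification for illustration; the normative procedure is defined in ITU-T G.994.1.

```python
# Hypothetical sketch of the startup negotiation described above.
# Rate lists are illustrative; real modems exchange capability lists
# per ITU-T G.994.1 after line probing.

def shdsl_startup(local_rates, remote_rates):
    """Simplified second handshake phase: agree on a common payload rate."""
    common = sorted(set(local_rates) & set(remote_rates))
    if not common:
        return None        # agreement failed -> startup is aborted
    return common[-1]      # pick the highest rate both sides support

rate = shdsl_startup([192, 1544, 2312], [1544, 2312])   # kbit/s
```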

 

DANAE - Dynamic and distributed Adaptation of scalable multimedia coNtent in a context-Aware Environment (FP6-2002-IST-1 507113)

DEMO 1: DANAE Distributed Adaptation Framework (DAF)

In today's multimedia content delivery architectures, adaptation is becoming more and more important. Content providers aspire to serve a plethora of heterogeneous end devices and networks without neglecting economic principles, i.e., without maintaining multiple versions of the same content. Therefore, a single high-quality multimedia resource is stored on the server and adapted on demand according to the usage environment. The MPEG-21 Digital Item Adaptation standard specifies normative description tools (e.g., the generic Bitstream Syntax Description (gBSD) and the Adaptation Quality of Service description (AQoS)) which enable the construction of device- and coding-format-independent adaptation engines in an interoperable way. However, it is not realistic that a single adaptation node (or module) could cope with all kinds of usage environments. As a consequence, different adaptation nodes distributed over the whole network are employed, each specifically serving different access networks. Interoperability among these nodes is guaranteed through standardized media and metadata formats. This requires that the metadata associated with the multimedia content be transported to such adaptation nodes in order to steer the actual adaptation process there.
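The key idea behind gBSD-driven adaptation can be sketched as follows: the description marks each bitstream segment with metadata (e.g., a scalability layer), and the adaptation engine drops segments based on that metadata alone, without parsing the coding format itself. The data structures below are a loose simplification, not the normative XML format.

```python
# Minimal, codec-agnostic adaptation sketch in the spirit of gBSD:
# the engine sees only described byte ranges and their layer labels.

def adapt(segments, gbsd, max_layer):
    """Keep only the bitstream segments whose described layer is <= max_layer."""
    return [seg for seg, desc in zip(segments, gbsd) if desc["layer"] <= max_layer]

# Toy scalable bitstream: base layer + two enhancement layers per frame.
segments = [b"base0", b"enh0a", b"enh0b", b"base1", b"enh1a", b"enh1b"]
gbsd     = [{"layer": 0}, {"layer": 1}, {"layer": 2},
            {"layer": 0}, {"layer": 1}, {"layer": 2}]

# E.g., a low-end device in the usage environment permits only the base layer.
mobile_stream = adapt(segments, gbsd, max_layer=0)
```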

The DANAE Distributed Adaptation Framework demo showcases a first demonstrator for the distributed, codec-agnostic adaptation of multimedia content, as described above.

DEMO 2: Session Mobility

In the scope of the DANAE project, IMEC-Ghent University has worked on the development of tools supporting Session Mobility scenarios. The seamless handover of sessions between different devices is enabled through the use of MPEG-21 technologies. After a session transfer in a context-aware environment, content playback is resumed on the target device, and the developed tools provide the best possible Quality of Experience: after selection of the most appropriate resource, the resource is dynamically adapted according to the varying user environment characteristics. MPEG-21 peers can be extended with this Session Mobility technology.
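A session transfer of this kind can be sketched in two steps: the source device serializes the session state, and the target device resumes playback after selecting the resource variant best matching its own capabilities. All names and fields below are illustrative assumptions, not the DANAE tools' actual interfaces.

```python
# Hedged sketch of a session handover: capture state on the source device,
# resume on the target device with the most appropriate resource variant.

def capture_session(content_id, position_s, variants):
    """Serialize the minimal session state needed to resume elsewhere."""
    return {"content": content_id, "position": position_s, "variants": variants}

def resume_session(session, device_width):
    """Pick the largest variant that still fits the target device's display."""
    fitting = [v for v in session["variants"] if v["width"] <= device_width]
    best = max(fitting, key=lambda v: v["width"])
    return session["content"], session["position"], best

session = capture_session("news-42", 173.5,
                          [{"width": 320, "uri": "low.mp4"},
                           {"width": 1280, "uri": "high.mp4"}])
content, position, variant = resume_session(session, device_width=480)
```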

 

Adactus

DEMO: MPEG-21-based multimedia delivery chain

Adactus will present an MPEG-21-based multimedia delivery chain, developed for TV2 in Norway.

Broadcasters today face many challenges when extending their workflows to enable cross-platform publishing. For a content consumer with, e.g., a mobile phone, it is rather difficult to retrieve content on his device, and the content received is often adapted only to a generic mobile presentation. As today's mobile devices are highly multimedia-capable but also very heterogeneous, they deserve an optimized content delivery mechanism as well as individually adapted content.

An MPEG-21-based solution developed for TV2 Norway will be presented. Content is adapted to a large set of multimedia-enabled mobile phones in Norway using MPEG-21 DIA tools. Here, the atomic descriptors of each device are the basis for the adaptation of the delivered content and for the presentation of the content along with its metadata.
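Descriptor-driven adaptation of this kind can be sketched as deriving target encoding parameters from a device's capability descriptors. This is an illustrative example, not the TV2 deployment; the descriptor fields and values are assumptions.

```python
# Illustrative per-device adaptation, in the spirit of MPEG-21 DIA usage
# environment descriptions: device descriptors drive the target parameters.

def adapt_for_device(source, device):
    """Derive target encoding parameters from a device's descriptors."""
    # Keep the source codec if the device supports it, else fall back
    # to the device's preferred codec (first in its list).
    codec = source["codec"] if source["codec"] in device["codecs"] else device["codecs"][0]
    width = min(source["width"], device["display_width"])
    return {"codec": codec, "width": width}

device = {"codecs": ["h263", "mpeg4-sp"], "display_width": 176}
source = {"codec": "h264", "width": 720}
target = adapt_for_device(source, device)   # transcode down for this phone
```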

Demos of the service on mobile phones will be presented.


Department of Information Technology (ITEC)

DEMO 1: Language Supported Parallelization of Advanced Video Coding

In the last decade, many CPU manufacturers have changed the design of their CPUs from a single-core to a multi-core architecture. For many software developers this change has a big impact: in order to improve the performance of their applications, not only advanced instruction sets (e.g., MMX/SSE) have to be considered, but also the architecture of the CPU. On a multi-core architecture, application performance can be improved by using a multithreaded scheme. However, manually converting sequential programs into parallel ones is a difficult and error-prone task which often leads to unreadable code.

In our demo session we present a very simple H.264/AVC decoder, restricted to I-frames, that can be automatically parallelized by using slight language extensions to the C# programming language. Our results show that just one line of code has to be changed in order to achieve a 2.82x speed-up on a 4-CPU SMP system.
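The demo's mechanism is a C# language extension, but the underlying idea can be illustrated in Python: I-frames carry no inter-frame dependencies, so each one can be decoded independently, and the sequential loop can be parallelized by changing a single line. The decoder stand-in below is a hypothetical placeholder, not the demo's code.

```python
# Sketch of the "one changed line" idea: each I-frame is independent,
# so the map over frames can be handed to a worker pool. Note that in
# CPython, threads only speed this up if the per-frame work releases
# the GIL (as C extensions do); otherwise a process pool would be used.
from concurrent.futures import ThreadPoolExecutor

def decode_iframe(coded):
    """Stand-in for decoding one independent I-frame."""
    return coded.upper()

def decode_sequential(frames):
    return [decode_iframe(f) for f in frames]

def decode_parallel(frames):
    with ThreadPoolExecutor(max_workers=4) as pool:   # the single changed line
        return list(pool.map(decode_iframe, frames))

frames = ["frame%d" % i for i in range(8)]
```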

DEMO 2: Visualizing Similarity Measures for Semantic Metadata

The usage of ontologies and high-level metadata like the MPEG-7 Semantic Description Scheme to describe multimedia content allows the design and implementation of semantic search engines. Retrieval mechanisms exploit the semantic information attached to multimedia data to compute similarity between multimedia documents. Using multidimensional scaling, different mechanisms for pairwise similarity calculation can be visualized. Emir, the retrieval part of the Caliph & Emir project (see http://www.semanticmetadata.net for details), provides a two-dimensional visualization of semantically enriched MPEG-7 documents. The Emir demo demonstrates the visualization capabilities of Caliph & Emir by showing the landscape visualization of multimedia repositories.
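The underlying technique can be sketched as classical multidimensional scaling (MDS): a matrix of pairwise distances (e.g., 1 minus similarity) is turned into 2-D coordinates that can be plotted as a landscape. This is a generic textbook sketch, not Emir's actual implementation.

```python
# Classical MDS: embed documents in 2-D from their pairwise distances.
import numpy as np

def classical_mds(distances, dims=2):
    """Embed points in `dims` dimensions from a symmetric distance matrix."""
    n = distances.shape[0]
    sq = distances ** 2
    centering = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * centering @ sq @ centering        # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:dims]     # largest eigenvalues first
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Three documents: the first two are similar, the third is dissimilar to both.
d = np.array([[0.0, 0.1, 0.9],
              [0.1, 0.0, 0.9],
              [0.9, 0.9, 0.0]])
coords = classical_mds(d)    # 2-D points for the landscape visualization
```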

DEMO 3: Multiple Source Streaming Demo

Multiple Source Streaming (MSS) is a powerful mechanism to improve the quality of real-time MPEG media streams over low-bandwidth network connections. In order to reduce the negative effect of bursty packet losses, subsequent content parts are streamed from different locations. Additionally, we have implemented a forward error correction (FEC) mechanism in order to reconstruct damaged or lost I- and P-frames. The quality of the media stream improves with every received part; the highest quality is available when all streams are fully received. The positive effects of applying Multiple Source Streaming and Forward Error Correction can be seen in this demo, which simulates different scenarios such as streaming over Asymmetric Digital Subscriber Lines (ADSL).
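The two ideas combined can be sketched with a toy example: consecutive content parts are striped across sources, and an XOR parity packet lets the receiver rebuild one lost packet per group, a minimal form of forward error correction. The demo's actual FEC scheme is not described here; this is only the simplest possible illustration.

```python
# Toy XOR-parity FEC over a group of equal-sized packets striped
# across multiple sources: any single lost packet can be rebuilt.
from functools import reduce

def parity(packets):
    """XOR all packets together byte by byte."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def recover(packets, parity_pkt):
    """Rebuild the single missing packet (marked None) from the rest + parity."""
    present = [p for p in packets if p is not None]
    missing = parity(present + [parity_pkt])
    return [p if p is not None else missing for p in packets]

# Three parts fetched from different sources; part 1 is lost in transit.
group = [b"AAAA", b"BBBB", b"CCCC"]
fec = parity(group)
received = [b"AAAA", None, b"CCCC"]
repaired = recover(received, fec)
```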