Each of the technical talks from the FascinatE training day was recorded and can now be viewed in our video gallery.
The 5 talks are as follows:
- FascinatE introduction: Georg Thallinger (JRS) introduces us to the FascinatE project, its history, goals and achievements.
- FascinatE delivery: Jean-Francois Macq (ALU) and Omar Niamut (TNO) guide us through the principles of delivering large amounts of data to end users over networks with different bandwidth constraints.
- Virtual Director: Rene Kaiser (JRS) explains the principles and workings of the FascinatE virtual director and how it makes automatic production choices based on salient content.
The 30th May 2013 marked the final demonstration of the full FascinatE system. The project has developed a complete end-to-end future broadcast system which combines ultra-high definition panoramic video, 3D ambisonic and object-based audio, new methods for delivery of interactive AV content and new interfaces and methods to interact with the AV media at the user end. All of these aspects were on show for a live demonstration of the complete integrated system.
The demonstration event was hosted by The University of Salford at their MediaCityUK building, one of the few venues with the complete set of facilities to support what we were trying to do; the building's infrastructure was designed for exactly this kind of work, and it was tested to the limit by the FascinatE team for this event. FascinatE partners worked through the nights, and the fantastic tech support team at MediaCityUK pulled out all the stops to make it happen smoothly.
The event featured a live performance in the Digital Performance Lab (DPL) which was recorded and streamed live to a conference suite (The Egg) on the other side of the building. The live demonstration was based around the premiere performance of 'Deeper than all roses', the latest large-scale music composition from Stephen Davismoon, featuring 'rock' band Bears?Bears! and live performance artists Joseph Lau and Shona Roberts in celebration of the works of the celebrated American poet E.E. Cummings.
Visitors were invited to view the performance being relayed live to the Egg presentation room. The Egg was equipped with a large data projector and an Ambisonics spatial audio rendering system consisting of 17 loudspeakers. The live demonstration was divided into three phases, allowing specific aspects of the technology to be highlighted (full-bandwidth interactive experience, content streaming on a lower-bandwidth network and the virtual director). Visitors were also invited to look around the many offline demos on display in the foyer of the MediaCityUK building.
The performance was captured with the OmniCam panoramic camera, the latest version of which comprises 10 HD cameras capable of giving a 10k x 2k 360-degree panoramic image; 5 of these cameras were used for the demo to produce a 180-degree live video panorama. A manned HD broadcast camera placed close to the OmniCam captured close-ups of particular areas of interest. The video from this camera was ingested into an IP-based production framework, running a plug-in that generated camera metadata describing pan, tilt and zoom, derived by tracking background features in the image.
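The plug-in that derives pan/tilt/zoom from tracked background features is not described in detail here, but the core idea can be sketched: fit a similarity transform to point correspondences between consecutive frames, then convert the image-plane shift and scale into angular pan/tilt and a zoom factor. The function below is a hypothetical illustration of that idea (the function name, the least-squares fit and the assumed focal length are ours, not the project's actual implementation):

```python
import numpy as np

def estimate_ptz_delta(prev_pts, curr_pts, focal_px):
    """Fit a similarity transform curr ~ s * prev + t to tracked
    background features and convert it to pan/tilt/zoom deltas.

    prev_pts, curr_pts: (N, 2) arrays of matched feature positions.
    focal_px: assumed camera focal length in pixels.
    """
    prev_c = prev_pts - prev_pts.mean(axis=0)
    curr_c = curr_pts - curr_pts.mean(axis=0)
    # Least-squares scale: the zoom factor between the two frames.
    s = (prev_c * curr_c).sum() / (prev_c ** 2).sum()
    # Translation of the image, in pixels.
    t = curr_pts.mean(axis=0) - s * prev_pts.mean(axis=0)
    # Small-angle conversion of pixel shift to camera rotation.
    pan = np.degrees(np.arctan2(t[0], focal_px))   # horizontal shift
    tilt = np.degrees(np.arctan2(t[1], focal_px))  # vertical shift
    return pan, tilt, s
```

In a real tracker the correspondences would come from a feature detector with outlier rejection; this sketch only shows how the transform maps to the pan/tilt/zoom metadata mentioned above.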
Audio was captured using an Eigenmike to generate a 4th order Ambisonic representation of the ambient sound in the performance room. Six additional audio objects were also captured using mics on the guitar amplifiers, drum kit and the PA speakers used by the singers. The DPL also housed a WiFi access point, providing network access to a set of tablet computers that visitors could use to explore the live panoramic image.
All of this AV data was streamed in real time to a range of displays from TV sets, large projectors, a Christie tile wall in our building foyer and to iOS and Android mobile devices. People at the demo could freely pan and zoom around the panorama, controlling their own virtual camera by swipes on tablets or by intuitive hand gestures using Kinect for larger displays.
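The "virtual camera" that users steer with swipes or gestures amounts to choosing a crop rectangle inside the panorama. A minimal sketch of that logic, assuming a 10k x 2k source panorama and a 1280x720 output view (all names and default sizes here are illustrative, not the project's actual renderer):

```python
def virtual_camera(cx, cy, zoom, pano_w=10000, pano_h=2000,
                   view_w=1280, view_h=720):
    """Crop rectangle (x, y, w, h) for a user-controlled virtual camera,
    clamped so the view never leaves the panorama.

    (cx, cy): requested view centre in panorama pixels.
    zoom: magnification; zooming in shrinks the source crop.
    """
    w = view_w / zoom
    h = view_h / zoom
    x = min(max(cx - w / 2, 0), pano_w - w)
    y = min(max(cy - h / 2, 0), pano_h - h)
    return x, y, w, h
```

A swipe would move (cx, cy) and a pinch would change zoom; the clamping keeps every user's viewport inside the shared panorama. A 360-degree panorama would instead wrap x modulo the panorama width rather than clamping.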
On the 30th May we will be holding our final demonstration event at the University of Salford building in MediaCityUK. During this public demonstration we will be showcasing the work that has taken place during the FascinatE project. Visitors will get the opportunity to witness the production, transmission, rendering of and interaction with 180-degree ultra-high-definition video content, with adaptive spatial audio output, of a music and dance performance. Also on display will be a series of off-line demonstrations showing different aspects of the FascinatE project, including the Virtual Director, gesture interaction, adaptive wave field synthesis and higher-order Ambisonics rendering, and content streaming and interaction on portable devices. Additionally, the demonstration event will be a fantastic opportunity to talk with the project partners and share ideas.
For more information and to register for this event, please click here.
The FascinatE training day took place at MediaCityUK, Salford on 5th March and featured several project demonstrations as well as talks covering many of the major aspects of the project. As the event was hosted by the University of Salford, it was covered on the student TV station, Quays TV. This video shows the feature they recorded on the project:
More FascinatE videos can be viewed here.
09:00 – 09:15 Welcome session. Chairs introduce the day’s agenda, format and aims.
09:15 – 09:30 Participants introduce themselves in Barcamp style, i.e. only by their name, affiliation, role, and three #hashtags (three terms describing their interest in the workshop).
09:30 – 10:15 Invited keynote by Wei Tsang Ooi, Department of Computer Science, National University of Singapore.
After the 30-minute talk, questions and discussion follow.
Keynote title: The Best Interactive System is A Non-Interactive System
10:15 – 10:30 Elevator pitches (2 min each) to kickstart the poster/demo session.
10:30 – 11:00 — coffee break —
11:00 – 11:45 Poster + demo session (7 in total). In parallel, posters and demos are exhibited and discussed.
Mike Matton, Bob De Wit, Peter Versieren and Koen Willaert, An Interactive Second Screen Platform for Broadcasting
Chen Wang, Pablo Cesar, Erik Geelhoed, Ian Biscoe and Phil Stenton, Sensing Audience Response – Beyond One Way Streaming of Live Performances
Cynthia C. S. Liem, Ron van der Sterren, Marcel van Tilburg, Álvaro Sarasúa, Juan J. Bosch, Jordi Janer, Mark Melenhorst, Emilia Gómez and Alan Hanjalic, Innovating the Classical Music Experience in the PHENICX Project: Use Cases and Initial User Feedback
Carlos A. Navarrete P. and José Tiberio Hernández P., Controlling interactive contents using mobile devices
Axel Kochale, J. Ruiz Hidalgo and Malte Borsum, Gesture Controlled Interactive Rendering in a Panoramic Scene
Rene Kaiser and Wolfgang Weiss, Virtual Director for Automating Camera Selection Behavior in the Domains of Sports and Performance
Martin J. Prins, Arjen Veenhuizen and Emmanuel Thomas, HAS-based tiled streaming system for immersive media interaction
11:45 – 12:30 Interactive discussion in fishbowl discussion format, part 1.
How does the fishbowl work? The room’s chairs are re-arranged so that there is a limited number of seats in an inner circle, with outer circular layers of chairs around them. The people currently occupying a chair in the inner circle discuss and share their views on the current topic. If they feel they have nothing to contribute, they ought to move back to a seat in the outer layers, enabling other people to step into the middle and become active. The WSICC chairs will moderate this session to make sure WSICC’s underlying topics are properly covered, and will document results at the same time.
12:30 – 14:00 — lunch break —
14:00 – 15:30 Four talks based on full papers (20 min each, including questions). Questions will be allowed during the talks.
Britta Meixner and Harald Kosch, Creating and Presenting Interactive Non-linear Video Stories with the SIVA Suite
Sara Kepplinger, Judith Liebetrau, Jeremy Foss and Alexandra I. Cristea, Roadmap for a comprehensive Evaluation Approach on QoE of interactive and personalized TV
Patrice Rondao Alface, Jean-Francois Macq, Felix Lee and Werner Bailer, Adaptive Coding of High-resolution Panoramic Video Using Visual Saliency
Gang Ren and Eamonn O’Neill, Enhancing 3D Content Consumption on Interactive TV with Freehand Gesture
15:30 – 16:00 — coffee break —
16:00 – 16:45 Interactive discussion in fishbowl discussion format, part 2. Covering aspects not discussed in the earlier fishbowl session.
16:45 – 17:00 Concluding session. The group will revisit what has been collected throughout the day. Conclusions will be summarized.
17:30 — EuroITV main conference welcome drink —
Call For Papers
This workshop focuses on novel forms of interactive content consumption. It will explore the shifting balance between lean-back passive TV consumption and lean-forward interactivity; this shift is especially relevant considering the explosion of companion screen interaction, multi-modal gesture/voice control and advanced audiovisual content interaction such as free viewpoint video and user-controlled shot selection. New media formats and consumption paradigms have emerged that allow for new types of interactivity.
The workshop’s objective is to provide a highly interactive discussion forum that captures a comprehensive view of this research area. During the workshop, an overview of new content interaction concepts, research activities and future challenges in this area will be compiled and documented. An interdisciplinary view on the topic shall be assembled from contributions spanning technical research, user-centric studies, and industry developments. Part of the discussion will be fueled by technical demonstrations of interactive content consumption forms. The workshop aims to examine and evaluate new forms of content interaction by discussing the field along three axes:
- Recent technological advances that provide new forms of audiovisual content interaction;
- User studies that evaluate new types of audiovisual content interaction;
- Technologies that support the user in finding the balance between passive consumption and lean-forward interaction.
The following sketches the areas and aspects considered within the scope of the workshop. We seek technological research on more active interaction with audiovisual content, e.g. immersive pan-tilt-zoom (PTZ) navigation, game-like interfaces to media, or intelligent storytelling and narrative engines. The workshop will deal with both recorded and live media access. Both mobile and domestic consumption may be investigated. Beyond entertainment, contributions from other domains of interactive TV are also sought. Topics are not limited to these examples, but should remain within the overall scope of the workshop. Some of the questions that the workshop aims to answer are listed below:
- How can content personalization be enhanced through interactivity, and at which abstraction level do users want to interact?
- Which new requirements for content capture and production (user-generated, professional and prosumer) do new forms of interactive media imply?
- Do trends in content consumption behavior influence technical research by revealing new challenges?
- What do studies on interaction with content in the realm of social media sharing reveal?
- How can forms of (inter-)active media access be designed to be interwoven with passive consumption modes?
- How can interactivity enhance the experience of people watching together, even when they are in disjoint locations?
- Can 3D free viewpoint video lead to an interactivity revolution in the media industry?
- Which technical advances are needed to allow the industry to offer more interactive media services?
- How does the balance between active and passive consumption affect the Quality of Experience?
The full day workshop will be an active forum to discuss research challenges, methodologies and results. More than half the time will be reserved for discussion. The chairs aim to establish an informal atmosphere, inspired by Barcamps. There will be a keynote and a Fishbowl discussion. In an active moderating role, the chairs will make sure the workshop’s questions will be answered and documented, yet will allow some flexibility where appropriate to meet the interest of the audience. Results will be collected on flipcharts along multiple questions which will emerge throughout the day, e.g. what are the latest innovations in that field? Which research activities exist to tackle unsolved challenges? How could we combine different interaction technologies to the benefit of the user? Throughout the day, the audience will be encouraged to contribute, and especially to comment on existing inputs (I’d love to collaborate on this! … This has already been solved in my project!). Live comments by both the local and remote audience through Twitter will be promoted and collected. A detailed program will be published later.
- April 7th (extended from April 1st): submission deadline
- April 29th: notification
- May 10th: final version due
- June 24th: workshop @ EuroITV2013 in Como, Italy
The workshop is seeking 3 types of submissions:
- Workshop papers: 4-6 pages, to be presented as a talk
- Posters and demos: max. 2 page description and background
We are proud to welcome Wei Tsang Ooi, Department of Computer Science, National University of Singapore, as keynote speaker of the WSICC workshop.
Organizers
Rene Kaiser, JOANNEUM RESEARCH, Graz, Austria.
Omar Aziz Niamut, TNO, Delft, The Netherlands.
Goranka Zoric, The Interactive Institute, Stockholm, Sweden.
Graham Thomas, BBC, London, UK.
Pablo Cesar, CWI, Netherlands
Teresa Chambel, University of Lisbon, Portugal
Arvid Engström, The Interactive Institute, Sweden
Rene Kaiser, JOANNEUM RESEARCH, Austria
Britta Meixner, University of Passau, Germany
Omar Niamut, TNO, Netherlands
Johan Oomen, Netherlands Institute for Sound and Vision, Netherlands
Erika Reponen, Nokia, Finland
Patrice Rondao Alface, Alcatel-Lucent, Belgium
Graham Thomas, BBC, UK
Marian Ursu, Goldsmiths, University of London, UK
Elizabeth Valentine-House, BBC, UK
Goranka Zoric, The Interactive Institute, Sweden
This workshop is supported by the FascinatE project.
This document is the third iteration of the first deliverable of WP4, Task 4.1: Business models for network-based services. According to the FascinatE proposal, this task is concerned with identifying viable business models and associated services. In the previous iterations of this deliverable, we focused mainly on the non-financial aspects of the business model canvas. In this document, all aspects of the business model are discussed. The main objective of this deliverable is to take the perspective of the companies that want to adopt the technology and to understand their position, interests and ways of working. The central questions in this deliverable are:
• Do third parties recognize the selected PMC as viable for their own business?
• How do these parties analyse
This document reports on the second public demonstration of FascinatE technology, at the concert hall Arena in Berlin in May 2012, during the production of a dance project by the Compagnie Sasha Waltz & Guests and the Education Programme by the Berlin Philharmonic Orchestra conducted by Sir Simon Rattle performing a choreography of the Carmen-Suite by Rodion Schtschedrin. Invited journalists and other guests were able to see live stitching of a panorama from the new OMNICAM equipped with ALEXA-M cameras at the Arena, as well as audio capture using two Eigenmike® microphones. Some of the latest results from other parts of the project were shown in a demonstration room at HHI’s premises nearby. These demonstrations resulted in several press publications, which are also listed in this deliverable.