5:30pm – $30 at the door – package deals available, go to www.interfacesmontreal.org
[Audio and video IP] A linear explosion
Leveraging the power of interactivity, digital audio and video applications
have had an explosive effect on linear reality. The improved capacities of processors,
data storage systems and IP networks have expanded the reach of traditional
broadcast methods to include all screens, from PCs to telephones, and from movie
screens to television monitors.
Speakers
Éric Bourbeau – TELUS
Jean-François Gagnon – LVL*Studio
Jean-Charles Grégoire – INRS-EMT, Telecommunications
Jeremy Cooperstock – McGill University, Centre for Intelligent Machines
Simon Piette – [SAT]
[Éric Bourbeau] TELUS
IP television
The mass adoption of the Internet and the TCP/IP protocol in the ‘90s
created a profound paradigm shift in global communications. Since then, we have
witnessed a rapid evolution of IP applications. Originally limited to transporting text
messages, they evolved to carry voice, and now video. They have also broadened
their reach. Originally designed for the scientific and business communities,
these IP applications now touch every segment of society, offering not only
business applications but pure entertainment.
The advent of these
new technologies, combined with a global deregulation of the telecom industry,
is having a profound impact on the consumer communications market. Traditionally
owned by telecommunications companies (telcos), the residential telephony market
is now being sought after by the cable industry (cablecos) as they implement
voice over IP (VoIP) telephony solutions. Conversely, the home entertainment
market, traditionally owned by cablecos, is now being sought after by the telcos
offering IP television.
In this context,
TELUS, along with other Canadian telcos, is preparing the launch of its IP television
offering, named TELUS TV. The presentation will therefore address IP television, its
applications and its future. Éric Bourbeau is heading up the TELUS TV project
in Quebec.
[Jean-Charles Grégoire] INRS-EMT, Telecommunications
Audio/video over IP: what quality, and at what price?
The convergence of telecommunication services, that is, their joint deployment over
a single infrastructure, is a long-standing goal of the industry that finally seems
to be coming of age. IP networks offer a common foundation that can support
combined offerings of services of various kinds. IP technology, however, provides
no intrinsic performance guarantees, and hence no guarantees of application quality.
Residential (Internet) access operators currently differentiate their offerings only
by the size of the “access pipe” to the home, that is, by raw throughput,
regardless of which applications are active. In the commercial world, various service
levels are used to guarantee compliance with the performance requirements of
applications such as voice over IP or videoconferencing. Network operators offer
inter-networking services that bridge corporate networks and preserve such quality
constraints, for a price. Increasingly, however, we see home A/V applications whose
quality users can apparently live with, without special costs or network requirements
beyond high-speed access. Between these extremes lies a broad spectrum of
possibilities, which we explore in a project from the perspective of management,
or cost, as well as user satisfaction (quality). This presentation will lay out the
issues involved and our investigation of the subject.
[Jeremy Cooperstock] McGill University, Centre for Intelligent Machines (CIM)
From videoconferencing to shared reality
Anyone who has used videoconferencing tools, ranging from simple desktop applications
such as CU-SeeMe or Netmeeting, to high fidelity professional systems, quickly
realizes that videoconferencing is not the same as physical presence, nor even
the equivalent of a telephone call. While the conversants can see video images of each other,
these are often of limited quality. Worse, the latency in the audio signal results
in an unnatural “turn-taking” style of conversation that diminishes
the quality of interaction and exaggerates the sense of distance. This is all
the more serious for tasks that demand synchrony, for example, distributed music
performance applications.
In order to overcome
these problems, we are developing a new research facility known as the Shared
Reality Environment. The primary goal of this environment is to support the
exchange of very low-delay and high-fidelity audio and video streams between
multiple users in different locations, in order to give users a rich
sense of co-presence. Part of this development effort has led to the release
of our Ultra-Videoconferencing system, which provides a flexible, low-latency
IP transport for audio, video, and most recently, vibrosensory data. We have
used this system for a range of demanding applications including live concert
streaming (1999), remote mixing (2000), collaborative performance (2001), distance
master classes over SDI (2002), and remote video interpreting of sign language
using three simultaneous DV streams (2003). The presentation will demonstrate
several aspects of this research, its applications to distributed collaboration,
and the associated research challenges.
[Simon Piette] Society for Arts and Technology
Developments on the webcasting axis of SAT’s Open Territories (TOT)
research project
Simon Piette will briefly present the research areas of the TOT project and
describe the developments achieved to date, in particular the capability for
multi-channel streaming. Finally, the planned future developments will be set
out and discussed.
The mandate of the TOT project is to develop software tools for digital artists,
tools that respond to existing artistic needs while in turn creating new ones.
The project addresses several areas of focus; this presentation concerns
webcasting. The goal of the webcasting axis is to produce software that enables
distribution over the Internet beyond what is currently available for multichannel
audio, in order to offer artists tools for both performance and distribution.
Ultimately, this software will be released under a free, open license.
In the first two stages of this project, the nSlam software was developed using
Pure Data, a visual multimedia composition environment. Using nSlam, artists can
easily integrate the products of this development work into their artistic
practice. The first path to be followed is streaming over HTTP. This method,
already used by numerous Internet radio stations, is proven and is appropriate
for widespread public distribution wherever broadband access is available.
Using Ogg/Vorbis, it is possible to stream content with up to 256 channels.
Such a stream uses approximately 2 Mbps, with a latency of about 15 seconds,
largely determined by buffering on the user’s side.
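The quoted figures imply a very small bit-rate budget per channel, which is worth making explicit. A minimal sketch of that arithmetic, using only the numbers quoted above (the per-channel budget is derived here, not stated in the text):

```python
# Back-of-envelope check: a 256-channel Ogg/Vorbis stream fitting
# in roughly 2 Mbps leaves only a few kbps per audio channel.
total_bps = 2_000_000   # ~2 Mbps for the whole stream, as quoted
channels = 256          # maximum channel count quoted above
per_channel_bps = total_bps / channels
print(f"{per_channel_bps / 1000:.1f} kbps per channel")  # 7.8 kbps per channel
```

This makes clear that aggressive lossy compression is what keeps a stream with so many channels within reach of ordinary broadband connections.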
In the next stage, the project focused on point-to-point communications, with
the main objective of reducing or controlling latency, building on VideoLAN. To
achieve this, the compressor and the media container were eliminated, bringing
latency down to only milliseconds. The bandwidth requirements are
correspondingly much larger: sending 8 channels of 16-bit, 44.1 kHz audio
produces a bit rate of approximately 5.5 Mbps. To make this kind of streaming
possible for artists without substantial high-speed access, the redistribution
of already-received packets to other clients will be integrated.
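The ~5.5 Mbps figure follows directly from the stream parameters given above. A minimal check of that arithmetic:

```python
# Uncompressed PCM bit rate for the point-to-point case described above:
# channels x bits per sample x samples per second.
channels = 8
bits_per_sample = 16
sample_rate_hz = 44_100
bitrate_bps = channels * bits_per_sample * sample_rate_hz
print(f"{bitrate_bps / 1e6:.2f} Mbps")  # 5.64 Mbps, i.e. the ~5.5 Mbps quoted
```

Because no compressor is used, this full rate must be sustained end to end, which is why redistributing received packets between clients matters for artists with limited upstream bandwidth.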