When a conference involves more than two participants, they are usually at different locations, so all the participating units must be kept synchronized. There are two ways to do this. In the first, each unit establishes a direct link to every other unit and maintains those connections throughout the session. This places a heavy load on the terminal equipment and consumes significant network bandwidth, at a corresponding cost. The advantage of this method is the selectivity it gives each user for provisioning ad-hoc connections. There is also no single point of failure: if the link between two participants, say A and B, breaks, the mesh-like topology ensures that no other connection is disturbed. The video relayed between endpoints can also be of better quality, because no central manager throttles the bandwidth. This decentralized multipoint architecture uses the H.323 standard.
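For intuition about why a full mesh overburdens the network, the sketch below (plain Python, with illustrative participant counts) compares the number of links each topology requires: a mesh of n units needs n(n-1)/2 connections, while a star needs only n.

```python
# Illustrative only: link counts for a full-mesh versus a star (MCU) conference.

def mesh_links(n: int) -> int:
    """Every unit connects directly to every other unit: n*(n-1)/2 links."""
    return n * (n - 1) // 2

def star_links(n: int) -> int:
    """Every unit connects only to a central bridge: n links."""
    return n

for n in (3, 5, 10):
    print(f"{n} participants: mesh={mesh_links(n)} links, star={star_links(n)} links")
# 3 participants: mesh=3 links, star=3 links
# 5 participants: mesh=10 links, star=5 links
# 10 participants: mesh=45 links, star=10 links
```

Note also that in the mesh each endpoint must encode and send its media n-1 times, which is what makes the approach so demanding on terminal equipment.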
The other architecture takes the load off the terminal equipment by using a Multipoint Control Unit (MCU). An MCU acts as a bridge, interconnecting the calls from the different sources; the terminal equipment may call the MCU, or the MCU may initiate connections to all the parties. The topology thus changes to that of a star. An MCU can be pure software or a combination of hardware and software, and is logically divided into two main modules: a Multipoint Controller (MC) and a Multipoint Processor (MP). The controller operates in the signalling plane: it manages conference creation and teardown, negotiates with every unit in the conference, and controls resources. The Multipoint Processor resides in the media plane and handles the mixing of media from each terminal: it creates a data stream for each terminal and redirects it to the destination endpoint. The presence of a central manager also makes it possible to shape the bandwidth used on each link.
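The following is a minimal sketch of that MC/MP split. All class and method names are hypothetical, and a real MCU would negotiate capabilities over a signalling protocol such as H.245 rather than by intersecting Python sets; this only shows the division of responsibilities.

```python
# Hypothetical sketch of an MCU's two logical modules.

class MultipointController:
    """Signalling plane: admits terminals, negotiates a common mode, manages the conference."""
    def __init__(self):
        self.terminals = []

    def join(self, terminal_id: str, capabilities: set) -> None:
        self.terminals.append((terminal_id, capabilities))

    def common_mode(self) -> set:
        # Stand-in for capability negotiation: keep only what every terminal supports.
        caps = [c for _, c in self.terminals]
        return set.intersection(*caps) if caps else set()

class MultipointProcessor:
    """Media plane: builds an outbound stream for each terminal from everyone else's media."""
    def route(self, streams: dict) -> dict:
        return {dst: [s for src, s in streams.items() if src != dst]
                for dst in streams}

mc = MultipointController()
mc.join("A", {"H.264", "G.711"})
mc.join("B", {"H.264", "G.722"})
print(mc.common_mode())                # {'H.264'}

mp = MultipointProcessor()
print(mp.route({"A": "vidA", "B": "vidB", "C": "vidC"}))
# {'A': ['vidB', 'vidC'], 'B': ['vidA', 'vidC'], 'C': ['vidA', 'vidB']}
```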
When bandwidth is at a premium, a technique called Voice-Activated Switching (VAS) can be used: when one party at a location is speaking, only that party's video is made visible to the other participants. Problems may arise if more than one person starts talking simultaneously; in that case the contention is resolved by giving preference to the loudest speaker. The other mode is Continuous Presence, in which the MCU combines the video streams from all the endpoints into a common stream and transmits it to every endpoint, so that each participant sees everyone else simultaneously. These combined images are often called 'layouts'. In both cases, however, the audio is transmitted to every endpoint in full-duplex mode.
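A minimal sketch of the VAS decision, assuming per-participant audio levels have already been measured (the values below are made up): the loudest participant's video is forwarded to everyone else, which is also how the simultaneous-speaker contention resolves itself.

```python
# Illustrative voice-activated switching: forward video only from the loudest speaker.

def active_speaker(audio_levels: dict) -> str:
    """Pick the participant with the highest audio energy; ties resolve arbitrarily."""
    return max(audio_levels, key=audio_levels.get)

levels = {"A": 0.12, "B": 0.55, "C": 0.08}          # hypothetical levels; B is talking
speaker = active_speaker(levels)
video_out = {p: speaker for p in levels if p != speaker}  # everyone else sees B
print(video_out)                                     # {'A': 'B', 'C': 'B'}
```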
Even with the best available networks and bandwidth, it would be impractical to send video in its uncompressed form, so some form of video compression must be in place to reduce the size of the transmitted bit stream. This is achieved with a codec (coder-decoder). Video compression can follow two approaches. The first is to find repetitive information and replace it with a shortcut before transmission; at the receiving end the shortcut is expanded back into the original information, restoring the video to its original form (much like a macro in programming). The other approach eliminates unimportant data from the frames so that only the information perceptible to the human eye is retained. This can drastically reduce the amount of digital data to be transmitted, but if applied too aggressively it results in very poor video quality. Two major methods have been employed to minimize both losses and size at the same time:
- Block-Based Compression: Each frame of the video, which is a single image, is divided into small blocks of pixels, and the algorithm tracks how the values in each block vary from frame to frame (a toy sketch follows this list).
- Object-Based Compression: More advanced codec algorithms classify the objects in the frames and keep track of moving and stationary objects. Less data can then be used to store the information about stationary objects, while more detail is provided for the moving ones. Such techniques are more efficient than the simpler block-based compression methods.
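The toy sketch below illustrates the block-based idea with NumPy, using an assumed block size and change threshold: only blocks that differ from the previous frame need to be encoded and transmitted, which is exactly the "shortcut for repetitive information" described above.

```python
# Toy block-based temporal compression: send only the blocks that changed.
import numpy as np

BLOCK = 8  # 8x8 pixel blocks, a size many real codecs also use

def changed_blocks(prev: np.ndarray, curr: np.ndarray, threshold: float = 4.0):
    """Yield (row, col, block) for each block whose mean absolute change exceeds threshold."""
    h, w = curr.shape
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            p = prev[r:r+BLOCK, c:c+BLOCK].astype(int)
            q = curr[r:r+BLOCK, c:c+BLOCK].astype(int)
            if np.abs(q - p).mean() > threshold:
                yield r, c, curr[r:r+BLOCK, c:c+BLOCK]  # only these need transmitting

prev = np.zeros((32, 32), dtype=np.uint8)   # static grayscale frame
curr = prev.copy()
curr[8:16, 8:16] = 200                      # one small moving object
updates = list(changed_blocks(prev, curr))
print(f"{len(updates)} of {(32 // BLOCK) ** 2} blocks transmitted")  # 1 of 16
```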
In order to standardize these compression methods, the Moving Picture Experts Group (MPEG) has produced several standards, such as MPEG-4.