A Smart Microphone that Detects and Moves in the Direction of Sound Source
Detect (Speaker) and Direct a Microphone.
Design a module which enables better communication by directing the microphone towards the current speaker.
Initial Proposition for the module:
The module has two localization microphones fixed opposite each other, 15 cm apart, and a third microphone, the one actually directed toward the speaker, mounted on a rotating column.
Figure 1: Flowchart
The problem solution consists of three stages:
1. Sound source detection:
We use a simple sensor to detect the sound signal, in our case a simple diaphragm-based microphone. A grid of sensors (multiple microphones) could be used, but to begin with we focus on solving the problem efficiently, so we choose two microphones. In fact, this is how our own ears solve the problem. Later, it will be seen that the best solution uses only a single microphone.
2. Acoustic Localization:
The first task is to determine the direction of the sound source, in our case the speaker in a conference. Since the microphone rotates about a fixed axis, it is sufficient to determine the angle of the incoming sound wave from a fixed reference axis in the plane perpendicular to the axis of rotation. This reduces the problem from localization in 3-D space to finding a single angle in a plane.
Figure 2: Microphone sound cones
3. Moving the microphone:
Once the direction of the sound source is determined, we start moving the main microphone toward it. This is not a one-step process: the mic is moved in stages of a fixed angle (say 10º). After each step, the whole process is repeated, from finding the source to moving the mic by another 10º. Moving in stages reduces the probability of error, so we can be confident of the speaker's position.
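The staged rotation described above can be sketched as a simple feedback loop. The function name, step size handling, and simulated error below are illustrative assumptions, not part of the actual design: `track_source` stands in for the real detect-localize-rotate cycle by converging on a known source angle.

```python
STEP_DEG = 10  # rotate in fixed 10-degree stages, as described above

def track_source(source_deg, mic_deg, max_steps=50):
    """Simulate the staged loop: re-estimate the pointing error and step
    the mic by 10 degrees until it is within one step of the source."""
    for _ in range(max_steps):
        error = source_deg - mic_deg   # stands in for a fresh localization pass
        if abs(error) < STEP_DEG:      # close enough: stop rotating
            break
        mic_deg += STEP_DEG if error > 0 else -STEP_DEG
    return mic_deg

# Source at 75 degrees, mic starting at 0: converges to 70 (within one step)
print(track_source(75, 0))  # -> 70
```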
Design possibilities for Acoustic Localization
1. Intensity variation at the detection points
The basic property that sound intensity decreases with distance is exploited here. The intensity will therefore differ at the two detection mics, and the one detecting the higher intensity is closer to the source.
However, this method fails in practice: the intensity difference between the two mics is very small, since their separation is small (< 15 cm).
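A quick calculation shows why the intensity difference is too small to use. Assuming, purely for illustration, a speaker 2 m from the nearer mic and the most favorable geometry where the source lies on the line joining the two mics, the inverse-square law gives:

```python
import math

# Illustrative geometry: source 2.00 m from the nearer mic, mics 15 cm apart,
# source on the line joining the mics (the best case for this method)
d_near, d_far = 2.00, 2.15
ratio = (d_far / d_near) ** 2        # inverse-square law: I_near / I_far
diff_db = 10 * math.log10(ratio)     # intensity difference in decibels
print(round(diff_db, 2))  # -> 0.63 (dB): easily swamped by noise and reflections
```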
2. Phase difference at the detection points
Since the wave travels different distances to reach the two mics, the sound signal arrives with a phase difference between the two points. This phase difference forms the basis for localization.
The problem here is the complexity of the sensing circuitry needed to measure the phase difference.
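For a single frequency, the phase difference maps to a bearing angle via Δφ = 2πf·d·sin(θ)/c. The sketch below inverts that relation; the speed of sound and the function name are assumptions, and the mapping is only unambiguous below about c/d ≈ 2.3 kHz, above which the phase can wrap:

```python
import math

C = 343.0   # speed of sound in air, m/s (assumed)
D = 0.15    # mic spacing, m (from the proposed design)

def angle_from_phase(delta_phi_rad, freq_hz):
    """Bearing angle (radians) from the measured phase difference,
    inverting delta_phi = 2*pi*f*D*sin(theta)/C."""
    s = C * delta_phi_rad / (2 * math.pi * freq_hz * D)
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against measurement noise

# A 1 kHz tone arriving with a 90-degree phase difference:
theta = math.degrees(angle_from_phase(math.pi / 2, 1000.0))
print(round(theta, 1))  # roughly 35 degrees off-axis
```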
3. Arrival time at the detection points
With our two-mic system, the sound signal takes different times to reach the two mics, so there is an arrival-time difference between them. The larger this time gap, the more the source is "tilted" toward the earlier mic. One approach is to repeat this measurement after moving the microphone by a small angle, continuing until the microphone is aligned with the source. The other is to calculate the angle directly from the time difference and rotate the microphone accordingly.
The problem is that a clear arrival-time difference exists only at the first instant (just after the sound begins). Afterwards, because the speaker talks continuously, the signal is present at both detection points at once, so no time difference can be detected after that first onset.
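The second approach, computing the angle from the time difference, can be sketched with a cross-correlation. This works for impulsive sounds but, as noted, struggles with continuous speech. The sample rate, function name, and synthetic signals below are illustrative assumptions:

```python
import numpy as np

C = 343.0    # speed of sound, m/s (assumed)
D = 0.15     # mic spacing, m (from the proposed design)
FS = 48000   # sample rate, Hz (assumed)

def tdoa_angle(sig_left, sig_right):
    """Estimate the bearing from the time difference of arrival,
    found as the peak of the cross-correlation of the two signals."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)  # samples; negative if left leads
    dt = -lag / FS                                # arrival delay at the right mic
    s = max(-1.0, min(1.0, C * dt / D))           # path difference / spacing
    return np.degrees(np.arcsin(s))               # positive = source on the left

# Synthetic test: a noise burst arriving 10 samples earlier at the left mic
rng = np.random.default_rng(0)
pulse = rng.standard_normal(64)
left = np.concatenate([pulse, np.zeros(10)])
right = np.concatenate([np.zeros(10), pulse])
print(round(tdoa_angle(left, right), 1))  # roughly 28 degrees toward the left
```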
Thus, after careful consideration, the final solution adopted is as follows:
Using a single high sensitivity directional microphone
· This solution exploits the basic working principle of radar.
· We use a high sensitivity directional microphone mounted on a rotating platform.
· This platform rotates continuously taking periodic samples and sending them to the MPU (Main Processing Unit).
· The sound samples are compared by their energy content, and the heading of maximum energy is computed.
· Finally, the microphone is directed towards the speaker.
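The radar-style sweep above can be sketched as follows. Here `sample_at` stands in for the MPU reading a short sound sample at each heading, and the cardioid-like pickup pattern in `fake_sample` is only a toy model, assumed for testing the sweep logic:

```python
import numpy as np

def scan_for_speaker(sample_at, step_deg=10):
    """Rotate through a full circle, record the energy of the sample taken
    at each heading, and return the heading with maximum energy."""
    headings = np.arange(0, 360, step_deg)
    energies = [np.sum(np.square(sample_at(h))) for h in headings]
    return int(headings[int(np.argmax(energies))])

# Toy directional response: pickup falls off as the mic points away from 130 deg
def fake_sample(heading, source_deg=130):
    err = np.radians(heading - source_deg)
    gain = max(0.0, np.cos(err))   # cardioid-ish pattern, zero behind the mic
    return gain * np.ones(100)     # constant "signal" scaled by the gain

print(scan_for_speaker(fake_sample))  # -> 130
```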
The implementation is divided into two major segments:
1. A digital implementation.
2. Replacing the various blocks with their analog equivalents, giving an analog-based solution.