

CDMA Technology

Written By: Preeti Jain

 

Primary requirements for any wireless mode of communication include high quality of service and data secrecy. CDMA, a spread-spectrum technology, came into existence to fulfill these two requirements in an optimized way without letting costs climb too high. Initially restricted to the armed forces, the technology was commercially launched in 1995 by Qualcomm and, as of Q4 2011, CDMA serves hundreds of millions of voice and data customers in the 122 countries in which it operates.

 

What Does CDMA Mean?
Code: This refers to the binary sequence that the transmitter and the receiver share. The code spreads the information into a wideband signal before it is transmitted over the channel, and the receiver uses the same code to recover the information; the receiver obtains the code with the help of the nearest base station. (A minimal sketch of this spreading and despreading follows these definitions.)
 
Division: In CDMA, a single channel is divided among numerous users. This is possible because each user is assigned a unique code.
 
Multiple Access: Because communication is code-based, multiple users can communicate over the same channel simultaneously, without undesirable interference or losses.
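
As a rough illustration of how the shared code works, here is a minimal Python sketch that spreads each data bit across an 8-chip code and recovers it by correlation. The code values, code length and function names are illustrative assumptions, not the codes of any real standard.

import numpy as np

CODE = np.array([+1, -1, +1, +1, -1, +1, -1, -1])   # shared 8-chip spreading code (illustrative)

def spread(bits):
    # Map each data bit to a +/-1 symbol and repeat it across the whole code.
    symbols = np.where(np.array(bits) == 1, 1, -1)
    return (symbols[:, None] * CODE[None, :]).ravel()   # one full code per bit

def despread(chips):
    # Correlate each code-length block with the code; the sign gives the bit.
    correlation = chips.reshape(-1, len(CODE)) @ CODE   # +8 or -8 per bit
    return (correlation > 0).astype(int).tolist()

data = [1, 0, 1, 1]
assert despread(spread(data)) == data   # same code at both ends recovers the data

A receiver correlating with a different, poorly matched code would see values near zero instead of +/-8, which is what lets the channel be divided by code rather than by time.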
 
Why Do We Need CDMA?
CDMA is regarded as an improvement over GSM technology, and the need for it can be understood through a simple example. Consider 5 couples whose partners are in different rooms. The partners are permitted to communicate only with each other, and each is provided with a communication instrument for the purpose. The instruments are connected by a medium that facilitates the communication; the medium can be wired or wireless and is termed a "channel" in communication terminology. Since there is only one channel, each user is allotted some amount of time for which they can utilize the channel. In this case, let it be 5 seconds: every user communicates for 5 seconds, and then the channel is used by the other users.
 
The channel can allot only a limited number of time slots and cannot accommodate more users after that. Let's assume that in this case the maximum number of users the channel can accommodate is 6, i.e. 3 couples can use the channel. Hence, if all the couples want to communicate, 2 couples may have to wait until the channel has an empty slot. There is also a chance that one couple interferes with the communication of another because they share the same channel; this is termed "cross correlation" and is a serious problem in GSM operations. This is how a typical GSM system works. In normal GSM, the channel utilization time, or "time burst", is determined by dynamic scheduling, where the number of users determines how long the channel is used by each user.
 
When we go the CDMA way, every user's voice is converted using a unique code that only the intended recipient's instrument can understand. The code here is a "spreading sequence" of digital bits and is shared between the transmitter and the receiver. Since the code is digital, the information to be sent must also be converted to a digital format. With this method there are no time constraints: even if all users are using the channel at the same time, there is no interference and secrecy is maintained. In CDMA, since the code is unique to every transmitter, the receiver determines it in two steps: acquisition and tracking.
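
To see why simultaneous users do not collide, the sketch below gives each of 4 users a mutually orthogonal Walsh-Hadamard code, sums all their transmissions on one channel, and still recovers every stream separately. The user count, code length and data bits are illustrative assumptions, not a real system's parameters.

import numpy as np

def walsh(n):
    # Build an n x n Walsh-Hadamard matrix (n a power of 2); its rows are mutually orthogonal.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(4)                                     # one code (row) per user
bits = {0: [1, 0], 1: [0, 0], 2: [1, 1], 3: [0, 1]}  # each user's data bits

def spread(user_bits, code):
    symbols = np.where(np.array(user_bits) == 1, 1, -1)
    return (symbols[:, None] * code[None, :]).ravel()

# Everyone transmits at once: the channel simply carries the sum of all four signals.
channel = sum(spread(b, codes[u]) for u, b in bits.items())

def despread(chips, code):
    correlation = chips.reshape(-1, len(code)) @ code   # other users' codes cancel to 0
    return (correlation > 0).astype(int).tolist()

for user, sent in bits.items():
    assert despread(channel, codes[user]) == sent       # each user hears only itself

Because the dot product of two different Walsh rows is zero, each receiver's correlation wipes out every other user's contribution, so no time slots are needed.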
 
Under acquisition, the receiver acquires the sent signal and generates the decoding sequence, which it receives from the base station. Under tracking, it keeps the received signal and the decoding sequence synchronized so that the output is exactly the same as the input.
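
Here is a minimal sketch of the acquisition step, assuming the receiver already knows the code and only has to find its timing; the 64-chip random code, the 23-chip delay and the noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1, 1], size=64)            # spreading code known to the receiver
true_delay = 23                                # unknown propagation delay, in chips
received = np.roll(code, true_delay) + rng.normal(0, 0.5, size=64)   # delayed + noisy signal

# Acquisition: slide the local copy of the code over every possible offset;
# the correlation peaks at the true delay.
correlations = [np.dot(received, np.roll(code, k)) for k in range(len(code))]
estimated_delay = int(np.argmax(correlations))
assert estimated_delay == true_delay
# Tracking would then keep this alignment locked as the clocks drift.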
History of CDMA
The idea of using CDMA as a commercial, licensed technology arose in 1988. In 1993, the first CDMA protocol, IS-95, was introduced, and in 1995 it was commercialized. Since then, CDMA technology has changed substantially in terms of resource allocation, data usage, bit rate and several other factors.
 
Interim Standard-95, or TIA/EIA-95, is the first 2G CDMA-based cellular standard by Qualcomm and has been branded cdmaOne. IS-95 defines the forward (base station to mobile) and reverse (mobile to base station) link specifications. CDMA2000 is a 3G standard, backward compatible with IS-95. IS-2000, or CDMA2000 1X, is its core wireless air interface standard. CDMA2000 differs from IS-95 in that it includes beamforming, which increases the gain at the mobile and allows a better SNR and a larger number of users. CDMA2000 also has double the capacity of IS-95.
 
WCDMA was developed to support high-bandwidth applications like gaming, multimedia services, etc. It is a later evolution of CDMA that combines the CDMA air interface with GSM-based networks. In contrast to cdmaOne and CDMA2000 (which use a 1.25 MHz wide radio signal), WCDMA uses a 5 MHz wide radio signal and a chip rate of 3.84 Mcps, about three times the chip rate of CDMA2000 (1.2288 Mcps). WCDMA thus offers higher capacity and QoS.
  

Comments

excellent

 

outstanding explanation 

 

"This code encodes the information into a low frequency signal before it is transmitted over a channel"

Please help me understand why the encoded information is encoded into a low frequency.

If I understand you correctly, the original data is actually a relatively low-frequency signal. A much higher frequency is the code, which samples the original data. Thus, in effect, I now have much more information being transmitted. I could then transmit this increased (relative to the original) data over a signal of a lower frequency, thus taking much longer to convey the original data.

Have I understood what you were explaining? What does that buy me?

"Under acquisition, the receiver acquires the sent signal and generates the decoding sequence which it receives from the base station"

In trying to understand the above, I am thinking of the GPS system.

It is my understanding that the receiver is not given the decoding sequence directly but indirectly through the name of the decoding code (PRN). The receiver must independently somehow know the coding associated with the PRN. 

CDMA inherently would not be "secret," if my understanding is correct.

 

"When we go the CDMA way, every user’s voice is converted to a unique code which only the intended recipient instruments can understand."

If I understand you, then the unique code (A) applied to the original data generates a new, higher-frequency (in terms of how often the data is changing) sequence of bits, such that if OTHER unique codes (B) are used to decode the encoded original data, the outcome will somehow be understood NOT to be the original data, and if the unique code (A) is applied to decode the encoded original data, the outcome will somehow be understood TO BE the original data.

If I am correct, can you amplify on how I know the decoding for B is understood not to be the original data and, similarly, the decoding for A is understood to be the original data?

"The code here is a “spreading sequence” of digital bits"

I do not believe I am understanding you and I sincerely do want to understand you.

What I understand so far is that the encoded stream of data is a great deal more data than the original, as one is actually sending every sample taken of the original data (at least twice as many (Nyquist), and probably a great deal more). I do not see how that is "spreading" the data, other than that one could say every bit of the original data is now composed of many bits of the encoded data.

Another way of looking at "spreading" might be a result of transmitting each of the coded samples of the original data on a different frequency; then, in effect, I would be "spreading" a single piece of the original data across many frequencies. Is this what you mean?

If this is so, then the "spreading" method (distributing the original data across many frequencies) has not been explained (as far as I understand) up to this point.

If my understanding is valid, then what I can appreciate is that this CDMA process ends up giving me redundant samples for each original data point. In certain situations this might be a great idea. But we might also appreciate that I could give away the redundant samples; in effect, I am thus giving away sample time slots. If those time slots were assigned a frequency, then in effect I am giving away the chance for another original data stream to use that time slot and that frequency.

Thus, to ensure there is orthogonality (each data stream is not impacted by another data stream), I am guessing the code applied to the original data is a declaration of when a sample is taken, and that this user has exclusive access to that sample time slot and its corresponding frequency at that time slot.

Is my thinking above correct, or am I way off base? I have no clue. You are the only guy I have read who has come close to conveying CDMA principles in a way understandable by a non-mathematician.
