In electronics, modulation is the process of varying one or more properties of a high-frequency periodic waveform, called the carrier signal, with respect to a modulating signal, much as a musician may modulate a tone from a musical instrument by varying its volume, timing and pitch. The three key parameters of a periodic waveform are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of which can be modified in accordance with a low-frequency signal to obtain the modulated signal. Typically a high-frequency sinusoidal waveform is used as the carrier signal, but a square-wave pulse train may also be used.
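For illustration, the three operations can be written directly from their definitions. The following is a minimal NumPy sketch in which the sample rate, carrier frequency, message tone and modulation depths are all arbitrary assumptions, not values from the text:

import numpy as np

fs = 48_000                                  # sample rate in Hz (assumption)
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of time
fc = 5_000                                   # carrier frequency in Hz (assumption)
m = np.sin(2 * np.pi * 200 * t)              # low-frequency modulating signal: a 200 Hz tone

am = (1 + 0.5 * m) * np.cos(2 * np.pi * fc * t)    # amplitude modulation ("volume")
pm = np.cos(2 * np.pi * fc * t + 0.8 * m)          # phase modulation ("timing")
# frequency modulation ("pitch"): the phase is the integral of the instantaneous frequency
fm = np.cos(2 * np.pi * (fc * t + 500 * np.cumsum(m) / fs))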
In telecommunications, modulation is the process of conveying a message signal, for example a digital bit stream or an analog audio signal, inside another signal that can be physically transmitted. Modulation of a sine waveform transforms a baseband message signal into a passband signal, for example a radio-frequency (RF) signal. In radio communications, cable TV systems or the public switched telephone network, for instance, electrical signals can only be transferred over a limited passband frequency spectrum, with specific (non-zero) lower and upper cutoff frequencies. Modulating a sine wave carrier makes it possible to keep the frequency content of the transferred signal as close as possible to the centre frequency (typically the carrier frequency) of the passband.
In music synthesizers, modulation may be used to synthesise waveforms with a desired overtone spectrum. In this case the carrier frequency is typically of the same order as, or much lower than, the modulating waveform; see, for example, frequency modulation synthesis or ring modulation.
A device that performs modulation is known as a modulator and a device that performs the inverse operation of modulation is known as a demodulator (sometimes detector or demod). A device that can do both operations is a modem (short for "Modulator-Demodulator").
List of common digital modulation techniques
The most common digital modulation techniques are listed below; a QPSK mapping sketch follows the list.
Phase-shift keying (PSK):
Binary PSK (BPSK), using M=2 symbols
Quadrature PSK (QPSK), using M=4 symbols
8PSK, using M=8 symbols
16PSK, using M=16 symbols
Differential PSK (DPSK)
Differential QPSK (DQPSK)
Offset QPSK (OQPSK)
π/4-QPSK
Frequency-shift keying (FSK):
Audio frequency-shift keying (AFSK)
Multi-frequency shift keying (M-ary FSK or MFSK)
Dual-tone multi-frequency (DTMF)
Continuous-phase frequency-shift keying (CPFSK)
Amplitude-shift keying (ASK)
On-off keying (OOK), the most common ASK form
M-ary vestigial sideband modulation, for example 8VSB
Quadrature amplitude modulation (QAM), a combination of PSK and ASK
Polar modulation, like QAM a combination of PSK and ASK
Continuous phase modulation (CPM) methods:
Minimum-shift keying (MSK)
Gaussian minimum-shift keying (GMSK)
Orthogonal frequency-division multiplexing (OFDM) modulation:
Discrete multitone (DMT), including adaptive modulation and bit-loading
Wavelet modulation
Trellis coded modulation (TCM), also known as trellis modulation
Spread-spectrum techniques:
Direct-sequence spread spectrum (DSSS)
Chirp spread spectrum (CSS); according to IEEE 802.15.4a, CSS uses pseudo-stochastic coding
Frequency-hopping spread spectrum (FHSS), which applies a special scheme for channel release
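To make the keying idea above concrete, here is a minimal sketch of a Gray-coded QPSK mapper (M = 4 symbols, so two bits select one of four phases); the unit-energy constellation scaling is an assumption, not mandated by any particular standard:

import numpy as np

# Gray-coded QPSK: each pair of bits selects one of four carrier phases.
QPSK = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to complex QPSK symbols."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([QPSK[p] for p in pairs])

symbols = qpsk_modulate([0, 0, 1, 1, 0, 1, 1, 0])   # -> 4 unit-energy symbols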
MSK and GMSK are particular cases of continuous phase modulation. Indeed, MSK is a particular case of the sub-family of CPM known as continuous-phase frequency-shift keying (CPFSK) which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase pulse) of one symbol-time duration (total response signaling).
OFDM is based on the idea of frequency-division multiplexing (FDM), but is utilized as a digital modulation scheme. The bit stream is split into several parallel data streams, each transferred over its own sub-carrier using some conventional digital modulation scheme. The modulated sub-carriers are summed to form an OFDM signal. OFDM is considered a modulation technique rather than a multiplexing technique, since it transfers one bit stream over one communication channel using one sequence of so-called OFDM symbols. OFDM can be extended to a multi-user channel access method in the orthogonal frequency-division multiple access (OFDMA) and multi-carrier code division multiple access (MC-CDMA) schemes, allowing several users to share the same physical medium by giving different sub-carriers or spreading codes to different users.
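A minimal sketch of that construction, using an IFFT to sum the modulated sub-carriers; the sub-carrier count, cyclic-prefix length and QPSK mapping are illustrative assumptions rather than parameters of any particular standard:

import numpy as np

N_SUB = 64       # number of sub-carriers (assumption)
CP_LEN = 16      # cyclic prefix length in samples (assumption)

def ofdm_symbol(symbols):
    """Turn N_SUB complex sub-carrier symbols into one time-domain OFDM symbol."""
    time_domain = np.fft.ifft(symbols, n=N_SUB)                   # sum of modulated sub-carriers
    return np.concatenate([time_domain[-CP_LEN:], time_domain])   # prepend cyclic prefix

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUB)
qpsk = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
tx = ofdm_symbol(qpsk)   # 80 complex baseband samples ready for up-conversion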
Of the two kinds of RF power amplifier, switching amplifiers (Class C amplifiers) cost less and use less battery power than linear amplifiers of the same output power. However, they only work with relatively constant-amplitude signals such as angle-modulated signals (FSK or PSK) and CDMA, not with QAM and OFDM. Nevertheless, even though switching amplifiers are completely unsuitable for normal QAM constellations, the QAM modulation principle is often used to drive switching amplifiers with these FM and other constant-amplitude waveforms, and sometimes QAM demodulators are used to receive the signals put out by these switching amplifiers.
Broadband Internet access, often shortened to just broadband, is high-data-rate Internet access, typically contrasted with dial-up access over a 56k modem.
Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.
Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 2.0 Mbit/s,[1] the 2006 OECD report[2] is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States (US) Federal Communications Commission (FCC) as of 2009 defines "Basic Broadband" as data transmission speeds exceeding 768 kilobits per second (kbit/s), or 768,000 bits per second, in at least one direction: downstream (from the Internet to the user's computer) or upstream (from the user's computer to the Internet).[3] The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services.[4]
Data rates are defined in terms of maximum download speed because several common consumer broadband technologies such as ADSL are "asymmetric", supporting a much slower maximum upload data rate than download rate.
"Broadband penetration" is now treated as a key economic indicator.[2][5]
Overview
Broadband transmission rates

Connection       Transmission data rate
DS-1 (Tier 1)    1.544 Mbit/s
E-1              2.048 Mbit/s
DS-3 (Tier 3)    44.736 Mbit/s
OC-3             155.52 Mbit/s
OC-12            622.08 Mbit/s
OC-48            2.488 Gbit/s
OC-192           9.953 Gbit/s
OC-768           39.813 Gbit/s
OC-1536          79.6 Gbit/s
OC-3072          159.2 Gbit/s
Broadband is often called "high-speed" access to the Internet, because it usually has a high rate of data transmission. In general, any connection to the customer of 256 kbit/s (0.256 Mbit/s) or greater is considered broadband Internet access. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity faster than primary rate ISDN, at 1.5 to 2 Mbit/s. The FCC definition of broadband is 768 kbit/s (0.768 Mbit/s). The Organization for Economic Co-operation and Development (OECD) has defined broadband as 256 kbit/s in at least one direction, and this bit rate is the most common baseline marketed as "broadband" around the world. There is no specific bitrate defined by the industry, however, and "broadband" can encompass lower-bitrate transmission methods; some Internet service providers (ISPs) use this to their advantage by marketing lower-bitrate connections as broadband.
In practice, the advertised bandwidth is not always reliably available to the customer; ISPs often allow a greater number of subscribers than their backbone connection or neighborhood access network can handle, under the assumption that most users will not be using their full connection capacity very frequently. This aggregation strategy works more often than not, so users can typically burst to their full bandwidth most of the time; however, peer-to-peer (P2P) file sharing systems, which often require extended periods of high bandwidth usage, stress these assumptions and can cause major problems for ISPs who have excessively overbooked their capacity. For more on this topic, see traffic shaping. As take-up of these introductory products increases, telcos are starting to offer higher bit rate services. For existing connections, this usually involves simply reconfiguring the existing equipment at each end of the connection.
As the bandwidth delivered to end users increases, the market expects that video on demand services streamed over the Internet will become more popular, though at the present time such services generally require specialized networks. The data rates on most broadband services still do not suffice to provide good quality video, as MPEG-2 video requires about 6 Mbit/s for good results. Adequate video for some purposes becomes possible at lower data rates, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC. The MPEG-4 format delivers high-quality video at 2 Mbit/s, at the low end of cable modem and ADSL performance.
Increased bandwidth has already made an impact on newsgroups: postings to groups such as alt.binaries.* have grown from JPEG files to entire CD and DVD images. According to NTL, traffic on their network grew from a daily news feed of 150 gigabytes inbound and 1 terabyte outbound in 2001 to 500 gigabytes inbound and over 4 terabytes outbound each day in 2002.
Technology
The standard broadband technologies in most areas are DSL and cable modems. Newer technologies in use include VDSL and pushing optical fiber connections closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber to the premises and fiber to the curb schemes, has played a crucial role in enabling Broadband Internet access by making transmission of information over larger distances much more cost-effective than copper wire technology. In a few areas not served by cable or ADSL, community organizations have begun to install Wi-Fi networks, and in some cities and towns local governments are installing municipal Wi-Fi networks. As of 2006, broadband mobile Internet access has become available at the consumer level in some countries, using the HSDPA and EV-DO technologies. The newest technology being deployed for mobile and stationary broadband access is WiMAX.
DSL (ADSL/SDSL)
Main article: Asymmetric digital subscriber line
Multilinking Modems
Roughly double the dial-up rate can be achieved with multilinking technology. It requires two modems, two phone lines, and two dial-up accounts, plus either ISP support for multilinking or special software at the user end. This inverse multiplexing option was popular with some high-end users before ISDN, DSL and other technologies became available.
Diamond and other vendors created dual-phone-line modems with bonding capability. The data rate of dual-line modems can exceed 90 kbit/s, but the Internet and phone charges are double those of an ordinary dial-up connection.
Load balancing takes two Internet connections and feeds them into a network as a single, more resilient connection with double the data rate. With two independent Internet providers, the load-balancing hardware automatically uses the line with the least load, so should one line fail, the other automatically takes up the slack.
ISDN
Integrated Services Digital Network (ISDN) is one of the oldest broadband digital access methods for consumers and businesses to connect to the Internet. It is a telephone data service standard. Its use in the United States peaked in the late 1990s, prior to the availability of DSL and cable modem technologies. Broadband service is usually compared to ISDN-BRI because this was the standard broadband access technology that formed a baseline for the challenges faced by the early broadband providers, who sought to compete against ISDN by offering faster and cheaper services to consumers.
A basic rate ISDN line (known as ISDN-BRI) is an ISDN line with 2 data "bearer" channels (DS0 - 64 kbit/s each). Using ISDN terminal adapters (erroneously called modems), it is possible to bond together 2 or more separate ISDN-BRI lines to reach bandwidths of 256 kbit/s or more. The ISDN channel bonding technology has been used for video conference applications and broadband data transmission.
Primary rate ISDN, known as ISDN-PRI, is an ISDN line with 23 DS0 channels and a total bandwidth of 1,544 kbit/s (US standard). An ISDN E1 line (European standard) has 30 DS0 channels and a total bandwidth of 2,048 kbit/s. Because ISDN is a telephone-based product, much of the terminology and many physical aspects of the line are shared with the ISDN-PRI used for voice services. An ISDN line can therefore be "provisioned" for voice or data with many different options, depending on the equipment being used at any particular installation and on the offerings of the telephone company's central office switch. Most ISDN-PRIs are used for telephone voice communication using large PBX systems, rather than for data. One obvious exception is that ISPs usually have ISDN-PRIs for handling ISDN data and modem calls.
It is mainly of historical interest that many of the earlier ISDN data lines used 56 kbit/s rather than 64 kbit/s "B" channels of data. This caused ISDN-BRI to be offered at both 128 kbit/s and 112 kbit/s rates, depending on the central office's switching equipment.
Advantages:
Constant data rate at 64 kbit/s for each DS0 channel.
Two way broadband symmetric data transmission, unlike ADSL.
One of the data channels can be used for phone conversation without disturbing the data transmission through the other data channel. When a phone call is ended, the bearer channel can immediately dial and re-connect itself to the data call.
Call setup is very quick.
Low latency
ISDN Voice clarity is unmatched by other phone services.
Caller ID is almost always available for no additional fee.
Maximum distance from the central office is much greater than it is for DSL.
When using ISDN-BRI, there is the possibility of using the low-bandwidth 16 kbit/s "D" channel for packet data and for always on capabilities.
Disadvantages:
ISDN offerings are dwindling in the marketplace due to the widespread use of faster and cheaper alternatives.
ISDN routers, terminal adapters ("modems"), and telephones are more expensive than ordinary POTS equipment, like dial-up modems.
ISDN provisioning can be complicated due to the great number of options available.
ISDN users must dial in to a provider that offers ISDN Internet service, and as with any dialed connection, the call can be dropped.
ISDN is billed as a phone line, on top of which the charge for ISDN Internet access is added.
"Always on" data connections are not available in all locations.
Some telephone companies charge unusual fees for ISDN, including call setup fees, per minute fees, and higher rates than normal for other services.
T-1/DS-1
These are highly regulated services, traditionally intended for businesses, that are managed through Public Service Commissions (PSCs) in each state, must be fully defined in PSC tariff documents, and have management rules dating back to the early 1980s which still refer to teletypes as potential connection devices. As such, T-1 services have very strict and rigid service requirements, which drive up the provider's maintenance costs and may require a technician on standby 24 hours a day to repair the line if it malfunctions. (In comparison, ISDN and DSL are not regulated by the PSCs at all.) Due to the expensive and regulated nature of T-1 lines, they are normally installed under the provisions of a written agreement, with a contract term of typically one to three years. However, there are usually few restrictions on an end-user's use of a T-1; uptime and bandwidth data rates may be guaranteed, quality of service may be supported, and blocks of static IP addresses are commonly included.
Since the T-1 was originally conceived for voice transmission, and voice T-1s are still widely used in businesses, the terminology can confuse the uninitiated subscriber. It is often best to refer to the type of T-1 being considered using the appropriate "data" or "voice" prefix. A voice T-1 terminates at a phone company's central office (CO) for connection to the PSTN; a data T-1 terminates at a point of presence (POP) or data center. The T-1 line between a customer's premises and the POP or CO is called the local loop. The owner of the local loop need not be the owner of the network at the POP where the T-1 connects to the Internet, so a T-1 subscriber may have separate contracts with these two organizations.
The nomenclature for a T-1 varies widely: it is cited in some circles as a DS-1, a T1.5, a T1, or a DS1. Some of these terms try to distinguish among the different aspects of the line, considering the data standard a DS-1 and the physical structure of the trunk line a T-1 or T-1.5. They are also called leased lines, although that terminology is usually reserved for data rates under 1.5 Mbit/s, so a T-1 may be included in the term "leased line" or excluded from it. Whatever it is called, it is inherently related to other broadband access methods, including T-3, SONET OC-3, and other T-carrier and Optical Carrier levels. Additionally, more than one T-1 might be aggregated to produce an nxT-1, such as a 4xT-1, which has exactly 4 times the bandwidth of a T-1.
When a T-1 is installed, there are a number of choices to be made: the carrier, the location of the demarcation point, the type of channel service unit (CSU) or data service unit (DSU) used, the WAN IP router used, the bandwidths chosen, etc. Specialized WAN routers are used with T-1 lines to route Internet or VPN data onto the T-1 line from the subscriber's packet-based (TCP/IP) network using customer premises equipment (CPE). The CPE typically consists of a CSU/DSU that converts the DS-1 data stream of the T-1 to a TCP/IP packet data stream for use in the customer's Ethernet LAN. Notably, many T-1 providers optionally maintain and/or sell the CPE as part of the service contract, which can affect the demarcation point and the ownership of the router, CSU, or DSU.
Although a T-1 has a maximum of 1.544 Mbit/s, a fractional T-1 might be offered which only uses an integer multiple of 128 kbit/s for bandwidth. In this manner, a customer might only purchase 1/12th or 1/3 of a T-1, which would be 128 kbit/s and 512 kbit/s, respectively.
T-1 and fractional T-1 data lines are symmetric, meaning that their upload and download data rates are the same.
Wired Ethernet
Where available, this method of broadband connection would suggest very fast Internet access. However, just because Ethernet is offered does not mean that the full 10, 100, or 1000 Mbit/s connection can be used for direct Internet access. In a college dormitory, for example, the 100 Mbit/s Ethernet access might be fully available to on-campus networks, while Internet access bandwidth might be closer to a 4xT-1 data rate (6 Mbit/s). When a broadband connection is shared among users in a building, the access bandwidth of the leased line into the building governs each end-user's data rate.
In certain locations, however, true Ethernet broadband access might be available. This would most commonly be the case at a POP or a data center, and not at a typical residence or business. When Ethernet Internet access is offered, it could be fiber-optic or copper twisted pair, and the bandwidth will conform to standard Ethernet data rates of up to 10 Gbit/s. The primary advantage is that no special hardware is needed for Ethernet. Ethernet also has a very low latency.
Rural broadband
One of the great challenges of broadband is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easy for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected.
Several rural broadband solutions exist, though each has its own pitfalls and limitations. Some choices are better than others, but much depends on how proactive the local phone company is about upgrading its rural technology.
Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.[6] In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); speeds are significantly slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices, weather and line-of-sight problems.[7]
Satellite Internet
Main article: Satellite Internet
Satellites in geostationary orbits are able to relay broadband data from the satellite company to each customer. Satellite Internet is usually among the most expensive ways of gaining broadband Internet access, but in rural areas it may be the only choice other than cellular broadband. However, costs have been coming down in recent years to the point that it is becoming more competitive with other broadband options.
Broadband satellite Internet also has a high latency problem, due to the signal having to travel to an altitude of 35,786 km (22,236 mi) above sea level (from the equator) out into space to a satellite in geostationary orbit and back to Earth again. The signal delay can be as much as 500 milliseconds to 900 milliseconds, which makes this service unsuitable for applications requiring real-time user input, such as certain multiplayer Internet games and first-person shooters. Despite this, many games remain playable, but the scope is limited to real-time strategy or turn-based games. Live interactive access to a distant computer can likewise suffer from high latency. These problems are more than tolerable for basic email access and web browsing, and in most cases are barely noticeable.
For geostationary satellites there is no way to eliminate this problem. The delay is primarily due to the great distances travelled, which, even at the speed of light (about 300,000 km/s or 186,000 miles per second), are significant. Even if all other signalling delays could be eliminated, it still takes electromagnetic waves about 250 milliseconds, or a quarter of a second, to travel from ground level to the satellite and back to the ground, a distance of over 71,400 km (44,366 mi) from source to destination, and over 143,000 km (88,856 mi) for a round trip (user to ISP and back to user, with zero network delays). Factoring in other normal delays from network sources gives a typical one-way connection latency of 500–700 ms from the user to the ISP, or about 1,000–1,400 milliseconds latency for the total round-trip time (RTT) back to the user. This is far worse than most dial-up modem users' experience, at typically only 150–200 ms total latency.
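The propagation figures above are just distance divided by the speed of light; a small sketch of the arithmetic (network and processing delays excluded):

C = 299_792.458          # speed of light in km/s
ALT = 35_786             # altitude of a geostationary satellite in km

one_bounce = 2 * ALT / C     # ground -> satellite -> ground: ~0.239 s
user_to_isp = one_bounce     # a one-way trip traverses the link once
round_trip = 2 * one_bounce  # user -> ISP -> user: ~0.477 s

print(f"one bounce: {one_bounce * 1000:.0f} ms, round trip: {round_trip * 1000:.0f} ms")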
Medium Earth orbit (MEO) and low Earth orbit (LEO) satellites, however, do not have such great delays. The current LEO constellations of Globalstar and Iridium satellites have delays of less than 40 ms round trip, but their throughput is less than broadband, at 64 kbit/s per channel. The Globalstar constellation orbits 1,420 km above the Earth and Iridium at about 780 km altitude. The proposed O3b Networks MEO constellation, scheduled for deployment in 2010, would orbit at 8,062 km, with an RTT latency of approximately 125 ms. The proposed new network is also designed for much higher throughput, with links well in excess of 1 Gbit/s.
Most satellite Internet providers also have a Fair Access Policy (FAP). Perhaps one of the largest disadvantages of satellite Internet, these FAPs usually throttle a user's throughput to dial-up data rates after a certain "invisible wall" is hit (usually around 200 MB a day). The throttling usually lasts for 24 hours after the wall is hit, after which the user's throughput is restored to whatever tier they paid for. This makes bandwidth-intensive activities, such as P2P and newsgroup binary downloads, nearly impossible to complete in a reasonable amount of time.
The European ASTRA2Connect system has a FAP based on a monthly limit of 2 GB of downloaded data, with download data rates reduced for the remainder of the month if the limit is exceeded. Other satellite Internet offerings have more advanced FAP mechanisms based on sliding time windows; the Tooway service, for instance, verifies download quotas over the last hours, days and weeks. The purpose is to allow temporary excessive downloads when needed while saving volume for the end of the month.[8]
Advantages
True global broadband Internet access availability
Mobile connection to the Internet (with some providers)
Disadvantages
High latency compared to other broadband services, especially 2-way satellite service
Unreliable: drop-outs are common during travel, inclement weather, and during sunspot activity
The narrow-beam highly directional antenna must be accurately pointed to the satellite orbiting overhead
The Fair Access Policy limits heavy usage, if applied by the service provider
VPN use is discouraged, problematic, and/or restricted with satellite broadband, although available at a price
One-way satellite service requires the use of a modem or other data uplink connection
Satellite dishes are very large. Although most of them employ plastic to reduce weight, they are typically between 80 and 120 cm (30 to 48 inches) in diameter.
Cellular broadband
Main article: Cellular broadband
Cellular phone towers are very widespread, and as cellular networks move to third generation (3G) technology they can support fast data, using technologies such as EV-DO, HSDPA and UMTS.
These can provide broadband Internet access via a cell phone; via CardBus, ExpressCard, or USB cellular modems; or via cellular broadband routers, which allow more than one computer to be connected to the Internet over one cellular connection.
Power-line Internet
Main article: Power line communication
This is a new service still in its infancy that may eventually permit broadband Internet data to travel down standard high-voltage power lines. However, the system has a number of complex issues, the primary one being that power lines are inherently a very noisy environment. Every time a device turns on or off, it introduces a pop or click into the line. Energy-saving devices often introduce noisy harmonics into the line. The system must be designed to deal with these natural signaling disruptions and work around them.
Broadband over power lines (BPL), also known as power line communication, has developed faster in Europe than in the US due to a historical difference in power system design philosophies. Nearly all large power grids transmit power at high voltages to reduce transmission losses, then use step-down transformers near the customer to reduce the voltage. Since BPL signals cannot readily pass through transformers, repeaters must be attached to them. In the US, it is common for a small transformer hung from a utility pole to serve a single house; in Europe, it is more common for a somewhat larger transformer to serve 10 to 100 houses. For delivering power to customers this distinction matters little, but it means that delivering BPL over the power grid of a typical US city will require an order of magnitude more repeaters than in a comparable European city.
The second major issue is signal strength and operating frequency. The system is expected to use frequencies in the 10 to 30 MHz range, which has been used for decades by licensed amateur radio operators, as well as international shortwave broadcasters and a variety of communications systems (military, aeronautical, etc.). Power lines are unshielded and will act as transmitters for the signals they carry, and have the potential to completely wipe out the usefulness of the 10 to 30 MHz range for shortwave communications purposes, as well as compromising the security of its users.
In telecommunications and signal processing, baseband is an adjective that describes signals and systems whose range of frequencies is measured from zero to a maximum bandwidth or highest signal frequency; it is sometimes used as a noun for a band of frequencies starting at zero. It can often be considered a synonym for lowpass, and an antonym for passband, bandpass, or radio frequency (RF) signal.
Various uses
A baseband bandwidth is equal to the highest frequency of a signal or system, or an upper bound on such frequencies.[1] By contrast, a non-baseband (passband) bandwidth is the difference between a highest frequency and a nonzero lowest frequency.
A baseband signal or lowpass signal is a signal that can include frequencies that are very near zero, by comparison with its highest frequency (for example, a sound waveform can be considered as a baseband signal, whereas a radio signal or any other modulated signal is not).[2]
A baseband channel or lowpass channel (or system, or network) is a channel (e.g. a telecommunications system) that can transfer frequencies that are very near zero.[3] Examples are serial cables and local area networks (LANs).
Baseband modulation, also known as line coding,[4] aims at transferring a digital bit stream over a baseband channel, as an alternative to carrier-modulated approaches.[5]
An equivalent baseband signal or equivalent lowpass signal is – in analog and digital modulation methods with constant carrier frequency (for example ASK, PSK and QAM, but not FSK) – a complex-valued representation of the modulated physical signal (the so-called passband signal or RF signal). The equivalent baseband signal is z(t) = I(t) + j·Q(t), where I(t) is the in-phase signal, Q(t) the quadrature phase signal, and j the imaginary unit. In a digital modulation method, the I(t) and Q(t) signals of each modulation symbol are evident from the constellation diagram. The frequency spectrum of this signal includes negative as well as positive frequencies. The physical passband signal corresponds to
v(t) = Re{z(t)·e^(jωt)} = I(t)·cos(ωt) − Q(t)·sin(ωt),
where ω is the carrier angular frequency in rad/s.
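A short numeric sketch of this relationship; the sample rate, carrier frequency and test tone are arbitrary assumptions:

import numpy as np

fs = 100_000                     # sample rate in Hz (assumption)
fc = 10_000                      # carrier frequency in Hz (assumption)
t = np.arange(0, 0.001, 1 / fs)

I = np.cos(2 * np.pi * 500 * t)  # in-phase component (assumed 500 Hz test tone)
Q = np.sin(2 * np.pi * 500 * t)  # quadrature component
z = I + 1j * Q                   # equivalent baseband (lowpass) signal

omega = 2 * np.pi * fc
v = np.real(z * np.exp(1j * omega * t))   # physical passband signal,
# identical to I * np.cos(omega * t) - Q * np.sin(omega * t)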
Pulse-code modulation (PCM) is a digital representation of an analog signal where the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code. PCM has been used in digital telephone systems and 1980s-era electronic musical keyboards. It is also the standard form for digital audio in computers and the compact disc "red book" format. It is also standard in digital video, for example, using ITU-R BT.601. Uncompressed PCM is not typically used for video in standard definition consumer applications such as DVD or DVR because the bit rate required is far too high.
Modulation
[Diagram: sampling and quantization of a signal (red) for 4-bit PCM]
In the diagram, a sine wave (red curve) is sampled and quantized for pulse-code modulation. The sine wave is sampled at regular intervals, shown as ticks on the x-axis. For each sample, one of the available values (ticks on the y-axis) is chosen by some algorithm (in this case, the floor function is used). This produces a fully discrete representation of the input signal (shaded area) that can be easily encoded as digital data for storage or manipulation. For this sine wave example, we can verify that the quantized values at the sampling moments are 7, 9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc. Encoding these values as binary numbers would result in the following set of nibbles: 0111, 1001, 1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values could then be further processed or analyzed by a purpose-specific digital signal processor or a general-purpose CPU.
Several PCM streams could also be multiplexed into a larger aggregate data stream, generally for transmission of multiple streams over a single physical link. One technique is called time-division multiplexing (TDM) and is widely used, notably in the modern public telephone system. Another is frequency-division multiplexing (FDM), where each signal is assigned a frequency band within a spectrum and transmitted along with other signals inside that spectrum. Currently, TDM is much more widely used than FDM because of its natural compatibility with digital communication, and its generally lower bandwidth requirements.
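A minimal sketch of the sampling-and-quantization step for 4-bit PCM; the amplitude scaling and sampling phase are assumptions chosen for illustration, so the exact code values differ from those read off the diagram:

import numpy as np

fs = 16                                      # samples per period (assumption)
t = np.arange(fs)
x = np.sin(2 * np.pi * t / fs)               # "analog" signal to digitize

levels = 16                                  # 4-bit PCM -> 16 quantization levels
# Map [-1, 1) onto [0, 16) and take the floor, as in the diagram.
codes = np.floor((x + 1) / 2 * levels).astype(int)
codes = np.clip(codes, 0, levels - 1)
nibbles = [format(c, "04b") for c in codes]  # e.g. '1000', '1011', ...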
There are many ways to implement a real device that performs this task. In real systems, such a device is commonly implemented on a single integrated circuit that lacks only the clock necessary for sampling, and is generally referred to as an ADC (Analog-to-Digital converter). These devices will produce on their output a binary representation of the input whenever they are triggered by a clock signal, which would then be read by a processor of some sort.
Demodulation
To produce output from the sampled data, the procedure of modulation is applied in reverse. After each sampling period has passed, the next value is read and a signal is shifted to the new value. As a result of these transitions, the signal will have a significant amount of high-frequency energy. To smooth out the signal and remove these undesirable aliasing frequencies, the signal would be passed through analog filters that suppress energy outside the expected frequency range (that is, greater than the Nyquist frequency fs / 2). Some systems use digital filtering to remove some of the aliasing, converting the signal from digital to analog at a higher sample rate such that the analog filter required for anti-aliasing is much simpler. In some systems, no explicit filtering is done at all; as it's impossible for any system to reproduce a signal with infinite bandwidth, inherent losses in the system compensate for the artifacts — or the system simply does not require much precision. The sampling theorem suggests that practical PCM devices, provided a sampling frequency that is sufficiently greater than that of the input signal, can operate without introducing significant distortions within their designed frequency bands.
The electronics involved in producing an accurate analog signal from the discrete data are similar to those used for generating the digital signal. These devices are DACs (digital-to-analog converters), and operate similarly to ADCs. They produce on their output a voltage or current (depending on type) that represents the value presented on their inputs. This output would then generally be filtered and amplified for use.
Limitations
There are two sources of impairment implicit in any PCM system:
Choosing a discrete value near the analog signal for each sample (quantization error)
The quantization error swings between −q/2 and +q/2, where q is the quantization step size. In the ideal case (with a fully linear ADC) it is uniformly distributed over this interval, so its mean equals zero while its RMS value equals q/√12.
Between samples no measurement of the signal is made; due to the sampling theorem, this results in any frequency at or above fs/2 (fs being the sampling frequency) being distorted or lost completely (aliasing error). (One half the sampling frequency is known as the Nyquist frequency.)
As samples are dependent on time, an accurate clock is required for accurate reproduction. If either the encoding or decoding clock is not stable, its frequency drift will directly affect the output quality of the device. A slight difference between the encoding and decoding clock frequencies is not generally a major concern; a small constant error is not noticeable. Clock error does become a major issue if the clock is not stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals, for example.
Extra information: PCM data from a master with a clock frequency that cannot be influenced requires an exact clock at the decoding side to ensure that all the data is used in a continuous stream without buffer underrun or buffer overflow. Any frequency difference will be audible at the output, since the number of samples per time interval cannot be correct. The data speed of a compact disc can be steered by means of a servo that controls the rotation speed of the disc; here the output clock is the master clock. For "external master" systems like DAB, the output stream must be decoded with a regenerated, exactly synchronous clock. When the desired output sample rate differs from that of the incoming data stream, a sample-rate converter must be inserted in the chain to convert the samples to the new clock domain.
Digitization as part of the PCM process
In conventional PCM, the analog signal may be processed (e.g. by amplitude compression) before being digitized. Once the signal is digitized, the PCM signal is usually subjected to further processing (e.g. digital data compression).
PCM with linear quantization is known as Linear PCM (LPCM).[1]
Some forms of PCM combine signal processing with coding. Older versions of these systems applied the processing in the analog domain as part of the A/D process; newer implementations do so in the digital domain. These simple techniques have been largely rendered obsolete by modern transform-based audio compression techniques.
DPCM encodes the PCM values as differences between the current and the predicted value. An algorithm predicts the next sample based on the previous samples, and the encoder stores only the difference between this prediction and the actual value. If the prediction is reasonable, fewer bits can be used to represent the same information. For audio, this type of encoding reduces the number of bits required per sample by about 25% compared to PCM.
Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the quantization step, to allow further reduction of the required bandwidth for a given signal-to-noise ratio.
Delta modulation is a form of DPCM which uses one bit per sample.
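A minimal sketch of the DPCM idea described above, using the simplest possible predictor (the previous sample); practical codecs use better predictors and also quantize the differences before storing them:

def dpcm_encode(samples):
    """Store each sample as its difference from the previous one."""
    prev = 0
    diffs = []
    for s in samples:
        diffs.append(s - prev)   # prediction residual
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Rebuild the samples by accumulating the differences."""
    prev = 0
    out = []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

assert dpcm_decode(dpcm_encode([3, 4, 6, 5, 5])) == [3, 4, 6, 5, 5]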
In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either µ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic compression systems where a 12 or 13-bit linear PCM sample number is mapped into an 8-bit value. This system is described by international standard G.711. An alternative proposal for a floating point representation, with 5-bit mantissa and 3-bit radix, was abandoned.
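The µ-law mapping is logarithmic. The following sketch implements the continuous compression curve with µ = 255; the actual G.711 encoder is a segmented, table-driven approximation of this curve, which is omitted here:

import math

MU = 255  # mu-law parameter used in North America and Japan

def mu_law_compress(x):
    """Map a linear sample in [-1, 1] to a compressed value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small amplitudes get most of the resolution:
# mu_law_compress(0.01) ~ 0.23, while mu_law_compress(0.5) ~ 0.88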
Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-bit µ-law or A-law PCM samples into a series of 4-bit ADPCM samples. In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard.
Later it was found that even further compression was possible and additional standards were published. Some of these international standards describe systems and ideas which are covered by privately owned patents and thus use of these standards requires payments to the patent holders.
Some ADPCM techniques are used in Voice over IP communications.
Encoding for transmission
Main article: Line code
Pulse-code modulation can be either return-to-zero (RZ) or non-return-to-zero (NRZ). For a NRZ system to be synchronized using in-band information, there must not be long sequences of identical symbols, such as ones or zeroes. For binary PCM systems, the density of 1-symbols is called ones-density.[2]
Ones-density is often controlled using precoding techniques such as Run Length Limited encoding, where the PCM code is expanded into a slightly longer code with a guaranteed bound on ones-density before modulation into the channel. In other cases, extra framing bits are added into the stream which guarantee at least occasional symbol transitions.
Another technique used to control ones-density is the use of a scrambler polynomial on the raw data which will tend to turn the raw data stream into a stream that looks pseudo-random, but where the raw stream can be recovered exactly by reversing the effect of the polynomial. In this case, long runs of zeroes or ones are still possible on the output, but are considered unlikely enough to be within normal engineering tolerance.
In other cases, the long term DC value of the modulated signal is important, as building up a DC offset will tend to bias detector circuits out of their operating range. In this case special measures are taken to keep a count of the cumulative DC offset, and to modify the codes if necessary to make the DC offset always tend back to zero.
Many of these codes are bipolar codes, where the pulses can be positive, negative or absent. In the typical alternate mark inversion code, non-zero pulses alternate between being positive and negative. These rules may be violated to generate special symbols used for framing or other special purposes.
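A minimal sketch of alternate mark inversion: zeros map to no pulse, and successive ones alternate in polarity, which keeps the cumulative DC offset near zero:

def ami_encode(bits):
    """Alternate mark inversion: 0 -> no pulse, 1 -> alternately +1 and -1."""
    polarity = 1
    out = []
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity   # next mark has the opposite sign
        else:
            out.append(0)
    return out

assert ami_encode([1, 0, 1, 1, 0, 1]) == [1, 0, -1, 1, 0, -1]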
In information theory and coding theory with applications in computer science and telecommunication, error detection and correction or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data.
The general definitions of the terms are as follows:
Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver.[1]
Error correction is the detection of errors and reconstruction of the original, error-free data.
Error correction may generally be realized in two different ways:
Automatic repeat request (ARQ) (sometimes also referred to as backward error correction): This is an error control technique whereby an error detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error detection code used, and if the check fails, retransmission of the data is requested – this may be done repeatedly, until the data can be verified.
Forward error correction (FEC): The sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data. In general, the reconstructed data is what is deemed the "most likely" original data.
ARQ and FEC may be combined, such that minor errors are corrected without retransmission, and major errors are corrected via a request for retransmission: this is called hybrid automatic repeat-request (HARQ).
Introduction
The general idea for achieving error detection and correction is to add some redundancy (i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message, and to recover data determined erroneous. Error-detection and correction schemes can be either systematic or non-systematic: In a systematic scheme, the transmitter sends the original data, and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some deterministic algorithm. If only error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a non-systematic code the original message is transformed into an encoded message that has at least as many bits as the original message.
Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memory-less models where errors occur randomly and with a certain probability, and dynamic models where errors occur primarily in bursts. Consequently, error-detecting and correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.
If the channel capacity cannot be determined, or is highly varying, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.
Error detection schemes
Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.
There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors).
Random-error-correcting codes based on minimum distance coding can provide a suitable alternative to hash functions when a strict guarantee on the minimum number of errors to be detected is desired. Repetition codes, described below, are special cases of error-correcting codes: although rather inefficient, they find applications for both error correction and detection due to their simplicity.
Repetition codes
Main article: Repetition code
A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data is divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern "1011", the four-bit block can be repeated three times, thus producing "1011 1011 1011". However, if this twelve-bit pattern was received as "1010 1011 1011" – where the first block is unlike the other two – it can be determined that an error has occurred.
Repetition codes are not very efficient and can be susceptible to problems if the error occurs in exactly the same place in each group (e.g., "1010 1010 1010" in the previous example would be accepted as correct). Their advantage is that they are extremely simple, and they are in fact used in some transmissions of numbers stations.
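A sketch of the three-fold repetition scheme from the example above, with majority-vote decoding added for error correction:

def rep3_encode(block):
    """Repeat a block of bits three times."""
    return block * 3

def rep3_decode(received, n):
    """Majority-vote each bit position across the three copies of an n-bit block."""
    copies = [received[i * n:(i + 1) * n] for i in range(3)]
    return [1 if sum(c[j] for c in copies) >= 2 else 0 for j in range(n)]

# "1011" sent three times, with the first copy corrupted to "1010":
received = [1, 0, 1, 0] + [1, 0, 1, 1] + [1, 0, 1, 1]
assert rep3_decode(received, 4) == [1, 0, 1, 1]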
Parity bits
Main article: Parity bit
A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous.
Parity bits can be implemented as either even parity or odd parity. When using even parity, the parity bit is set to 1 if the number of ones in a given set of bits (not including the parity bit) is odd, making the entire set of bits (including the parity bit) even. When using odd parity, the parity bit is set to 1 if the number of ones in a given set of bits (not including the parity bit) is even, making the entire set of bits (including the parity bit) odd. Thus, odd parity can be obtained from even parity by flipping the parity bit, and vice versa.
Extensions and variations on the parity bit mechanism are horizontal redundancy checks, vertical redundancy checks, and "double," "dual," or "diagonal" parity (used in RAID-DP).
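A minimal sketch of even-parity generation and checking:

def even_parity_bit(bits):
    """Return the bit that makes the total number of ones even."""
    return sum(bits) % 2

def check_even_parity(bits_with_parity):
    """True if the received word (data plus parity bit) has an even number of ones."""
    return sum(bits_with_parity) % 2 == 0

data = [1, 0, 1, 1]
word = data + [even_parity_bit(data)]   # -> [1, 0, 1, 1, 1]
assert check_even_parity(word)
word[2] ^= 1                            # a single flipped bit...
assert not check_even_parity(word)      # ...is detected; two flips would not be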
Checksums
Main article: Checksum
A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a one's-complement prior to transmission to detect errors resulting in all-zero messages.
Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Luhn algorithm and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.
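A sketch of the Luhn check mentioned above, as commonly described: double every second digit from the right, sum the digits, and require the total to be divisible by ten:

def luhn_valid(number: str) -> bool:
    """Check a numeric identifier (e.g. a card number) with the Luhn algorithm."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:           # summing the digits of d is the same as d - 9
                d -= 9
        total += d
    return total % 10 == 0

assert luhn_valid("79927398713")   # standard test value for the Luhn algorithm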
Cyclic redundancy checks (CRCs)
Main article: Cyclic redundancy check
A cyclic redundancy check (CRC) is a single-burst-error-detecting cyclic code and non-secure hash function designed to detect accidental changes to digital data in computer networks. It is characterized by specification of a so-called generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, and where the remainder becomes the result.
Cyclic codes have favorable properties in that they are well suited for detecting burst errors. CRCs are particularly easy to implement in hardware, and are therefore commonly used in digital networks and storage devices such as hard disk drives.
Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x+1.
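A bit-at-a-time sketch of the polynomial long division that defines a CRC; the 8-bit generator polynomial x^8 + x^2 + x + 1 (0x07) is an illustrative choice, and practical implementations are usually table-driven:

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Compute an 8-bit CRC by long division over GF(2)."""
    crc = 0
    for byte in data:
        crc ^= byte                          # bring in the next 8 message bits
        for _ in range(8):
            if crc & 0x80:                   # top bit set: subtract (XOR) the divisor
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc                               # the remainder is the check value

assert crc8(b"123456789") == 0xF4            # published check value for CRC-8 (poly 0x07)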
Cryptographic hash functions
Main article: Cryptographic hash function
A cryptographic hash function can provide strong assurances about data integrity, provided that changes of the data are only accidental (i.e., due to transmission errors). Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is infeasible to find some input data (other than the one given) that will yield the same hash value. Message authentication codes, also called keyed cryptographic hash functions, provide additional protection against intentional modification by an attacker.
Error-correcting codes
Main article: Forward error correction
Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d-1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired.
Codes with minimum Hamming distance d=2 are degenerate cases of error-correcting codes, and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.
The Berger code is an early example of a unidirectional error(-correcting) code that can detect any number of errors on an asymmetric channel, provided that only transitions of cleared bits to set bits or set bits to cleared bits can occur.
Error correction
Automatic repeat request
Main article: Automatic repeat request
Automatic Repeat reQuest (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame.
Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions.
Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.
ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.[2]
Error-correcting code
Main article: Forward error correction
An error-correcting code (ECC) or forward error correction (FEC) code is a system of adding redundant data, or parity data, to a message, such that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) were introduced, either during the process of transmission, or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a back-channel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM.
Error-correcting codes are usually distinguished between convolutional codes and block codes:
Convolutional codes are processed on a bit-by-bit basis. They are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding.
Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, of which Reed-Solomon codes are the most notable ones due to their widespread use these days. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency.
Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols.
The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes which are both optimal and have efficient encoding and decoding algorithms.
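As a standard textbook illustration (not taken from the text above), the capacity of a binary symmetric channel with crossover probability p is

C = 1 - H_2(p),  where  H_2(p) = -p \log_2 p - (1-p) \log_2 (1-p),

so reliable communication requires a code rate k/n < C; at p = 0.1, for example, C ≈ 0.53 bits per channel use.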
Hybrid schemes
Main article: Hybrid ARQ
Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches[2]:
Messages are always transmitted with FEC parity data (and error-detection redundancy). A receiver decodes a message using the parity information, and requests retransmission using ARQ only if the parity data was not sufficient for successful decoding (identified through a failed integrity check).
Messages are transmitted without parity data (only with error-detection information). If a receiver detects an error, it requests FEC information from the transmitter using ARQ, and uses it to reconstruct the original message.
The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.
Applications
Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and retransmits the data, the re-sent data will arrive too late to be of any use.
Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data stores.)
Applications that use ARQ must have a return channel; applications with no return channel cannot use ARQ.
Applications that require extremely low error rates (such as digital money transfers) must use ARQ.
The Internet
In a typical TCP/IP stack, error control is performed at multiple levels:
Each Ethernet frame carries a CRC-32 checksum. Frames received with incorrect checksums are discarded by the receiver hardware (a minimal CRC-32 check is sketched after this list).
The IPv4 header contains a checksum protecting the contents of the header. Packets with mismatching checksums are dropped within the network or at the receiver.
The checksum was omitted from the IPv6 header in order to minimize processing costs in network routing and because current link layer technology is assumed to provide sufficient error detection (see also RFC 3819).
UDP has an optional checksum covering the payload and addressing information from the UDP and IP headers. Packets with incorrect checksums are discarded by the operating system network stack. The checksum is optional only under IPv4, because the IP-layer checksum may already provide the desired level of error protection.
TCP provides a checksum for protecting the payload and addressing information from the TCP and IP headers. Packets with incorrect checksums are discarded within the network stack, and eventually get retransmitted using ARQ, either explicitly (as with the fast retransmit triggered by three duplicate ACKs) or implicitly due to a timeout.
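As a small illustration of the checksum step, the sketch below computes and verifies a CRC-32 over a payload using Python's standard-library zlib module, which implements the same IEEE 802.3 polynomial that Ethernet's frame check sequence uses; the payload and variable names are illustrative.

```python
import zlib

frame_payload = b"example Ethernet payload"
fcs = zlib.crc32(frame_payload)      # 32-bit frame check sequence

# The receiver recomputes the CRC over what arrived and discards the
# frame if it does not match the transmitted FCS.
received = frame_payload             # pretend this crossed the wire intact
assert zlib.crc32(received) == fcs   # checksum matches: frame accepted
```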
Deep-space telecommunications
Development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting from 1968 digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed-Muller codes.[3] The Reed-Muller code was well suited to the noise the spacecraft was subject to (approximately following a bell curve, i.e. Gaussian), and was implemented aboard the Mariner spacecraft for missions between 1969 and 1977.
The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging, among other scientific information, of Jupiter and Saturn.[4] This resulted in increased coding requirements, and thus the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 probe additionally supported an implementation of a Reed-Solomon code: the concatenated Reed-Solomon-Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune.
The CCSDS currently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are being replaced by more powerful codes such as Turbo codes or LDPC codes.
The different kinds of deep space and orbital missions that are conducted suggest that trying to find a "one size fits all" error correction system will be an ongoing problem for some time to come. For missions close to Earth, the nature of the channel noise differs from that experienced by a spacecraft on an interplanetary mission. Additionally, as a spacecraft's distance from Earth increases, the problem of correcting for noise becomes harder.
Satellite broadcasting (DVB)
The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and High Definition TV) and IP data. Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and Forward error correction (FEC) rate.
Overview
QPSK coupled with traditional Reed-Solomon and Viterbi codes has been used for nearly 20 years for the delivery of digital satellite TV.
Higher-order modulation schemes such as 8PSK, 16QAM and 32QAM have enabled the satellite industry to significantly increase transponder efficiency.
This increase in the information rate in a transponder comes at the expense of an increase in the carrier power to meet the threshold requirement for existing antennas.
Tests conducted using the latest chipsets demonstrate that the performance achieved by using Turbo Codes may be even lower than the 0.8 dB figure assumed in early designs.
Data storage
Error detection and correction codes are often used to improve the reliability of data storage media.
A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group code recording tapes not only detects but also corrects single-bit errors.
Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation and can employ redundancy and/or parity files to recover portions of corrupted data.
Reed-Solomon codes are used in compact discs to correct errors caused by scratches.
Modern hard drives use CRC codes to detect and Reed-Solomon codes to correct minor errors in sector reads, and to recover data from sectors that have "gone bad" and store that data in the spare sectors.[5]
RAID systems use a variety of error correction techniques to correct errors when a hard drive completely fails.
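One of the simplest such techniques is the single-parity scheme used by RAID levels 4 and 5: the parity block is the bytewise XOR of the data blocks, so any one lost block can be rebuilt as the XOR of the survivors. A minimal sketch with illustrative data:

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # parity block on a fourth drive

# If one drive fails, XOR of the surviving blocks and the parity
# reconstructs the lost block.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```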
Error-correcting memory
Main article: Dynamic random access memory#Errors and error correction
DRAM memory may provide increased protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for highly fault-tolerant applications, such as servers, as well as for deep-space applications, due to increased radiation.
Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy.
Interleaving distributes the effect of a single cosmic ray, which may upset multiple physically neighboring bits, across multiple words by associating neighboring bits with different words. As long as a single event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.[6]
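A minimal sketch of the idea with illustrative 8-bit words: the bits are laid out column-major so physically adjacent bits belong to different logical words, and a 4-bit burst upset becomes one correctable single-bit error in each word.

```python
def interleave(words):
    """Column-major layout: physical neighbors come from different words."""
    width = len(words[0])
    return [w[i] for i in range(width) for w in words]

def deinterleave(physical, nwords):
    width = len(physical) // nwords
    return [[physical[i * nwords + j] for i in range(width)]
            for j in range(nwords)]

words = [[1, 0, 1, 0, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]]
phys = interleave(words)

for i in range(4, 8):                  # a cosmic ray flips 4 adjacent bits
    phys[i] ^= 1

damaged = deinterleave(phys, len(words))
errors = [sum(a != b for a, b in zip(w, d)) for w, d in zip(words, damaged)]
assert errors == [1, 1, 1, 1]          # one error per word: SEC-correctable
```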
The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking. It corresponds to or is part of the link layer of the TCP/IP reference model.
The Data Link Layer is the protocol layer which transfers data between adjacent network nodes in a wide area network or between nodes on the same local area network segment.[1] The Data Link Layer provides the functional and procedural means to transfer data between network entities and might provide the means to detect and possibly correct errors that may occur in the Physical Layer. Examples of data link protocols are Ethernet for local area networks (multi-node), the Point-to-Point Protocol (PPP), HDLC and ADCCP for point-to-point (dual-node) connections.
The Data Link Layer is concerned with local delivery of frames between devices on the same LAN. Data Link frames, as these protocol data units are called, do not cross the boundaries of a local network. Inter-network routing and global addressing are higher layer functions, allowing Data Link protocols to focus on local delivery, addressing, and media arbitration. In this way, the Data Link layer is analogous to a neighborhood traffic cop; it endeavors to arbitrate between parties contending for access to a medium.
When devices attempt to use a medium simultaneously, frame collisions occur. Data Link protocols specify how devices detect and recover from such collisions, but they do not prevent collisions from happening.
Delivery of frames by layer 2 devices is effected through the use of unambiguous hardware addresses. A frame's header contains source and destination addresses that indicate which device originated the frame and which device is expected to receive and process it. In contrast to the hierarchical and routable addresses of the network layer, layer 2 addresses are flat, meaning that no part of the address can be used to identify the logical or physical group to which the address belongs.
The data link thus provides data transfer across the physical link. That transfer can be reliable or unreliable; many data link protocols do not have acknowledgments of successful frame reception and acceptance, and some data link protocols might not even have any form of checksum to check for transmission errors. In those cases, higher-level protocols must provide flow control, error checking, and acknowledgments and retransmission.
In some networks, such as IEEE 802 local area networks, the Data Link Layer is described in more detail with Media Access Control (MAC) and Logical Link Control (LLC) sublayers; this means that the IEEE 802.2 LLC protocol can be used with all of the IEEE 802 MAC layers, such as Ethernet, token ring, IEEE 802.11, etc., as well as with some non-802 MAC layers such as FDDI. Other Data Link Layer protocols, such as HDLC, are specified to include both sublayers, although some other protocols, such as Cisco HDLC, use HDLC's low-level framing as a MAC layer in combination with a different LLC layer. In the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) Local area network using existing home wiring (power lines, phone lines and coaxial cables), the Data Link Layer is divided into three sub-layers (Application Protocol Convergence, Logical Link Control and Medium Access Control).
Within the semantics of the OSI network architecture, the Data Link Layer protocols respond to service requests from the Network Layer and they perform their function by issuing service requests to the Physical Layer.
List of Data Link Layer services
Encapsulation of network layer data packets into frames
Frame synchronization
Logical link control (LLC) sublayer:
Error control (automatic repeat request, ARQ), in addition to the ARQ provided by some Transport layer protocols, to the forward error correction (FEC) techniques provided on the Physical Layer, and to the error detection and packet canceling provided at all layers, including the network layer. Data link layer error control (i.e. retransmission of erroneous packets) is provided in wireless networks and in V.42 telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are so uncommon in short wires; in that case, only error detection and canceling of erroneous packets are provided.
Flow control, in addition to that provided on the Transport layer. Data link layer flow control is not used in LAN protocols such as Ethernet, but is used in modems and wireless networks.
Media access control (MAC) sublayer:
Multiple access protocols for channel-access control, for example CSMA/CD protocols for collision detection and retransmission in Ethernet bus networks and hub networks, or the CSMA/CA protocol for collision avoidance in wireless networks (a backoff sketch follows this list).
Physical addressing (MAC addressing)
LAN switching (packet switching) including MAC filtering and spanning tree protocol
Data packet queueing or scheduling
Store-and-forward switching or cut-through switching
Quality of Service (QoS) control
Virtual LANs (VLAN)
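As an illustration of the channel-access item above, here is a minimal sketch of the binary exponential backoff used by classic CSMA/CD Ethernet; the 51.2 µs value is the 512-bit-time slot of 10 Mbit/s Ethernet, and the function name is illustrative.

```python
import random

def csma_cd_backoff(collisions, slot_time=51.2e-6):
    """Return the wait time (seconds) after the n-th successive collision.

    Classic CSMA/CD draws a random integer from [0, 2**min(n, 10) - 1]
    and waits that many slot times before retrying.
    """
    k = min(collisions, 10)
    return random.randrange(2 ** k) * slot_time

# After three successive collisions, a station waits 0..7 slot times.
print(csma_cd_backoff(3))
```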
Protocol examples
ARCnet
ATM
Cisco Discovery Protocol (CDP)
Controller Area Network (CAN)
Econet
Ethernet
Ethernet Automatic Protection Switching (EAPS)
Fiber Distributed Data Interface (FDDI)
Frame Relay
High-Level Data Link Control (HDLC)
IEEE 802.2 (provides LLC functions to IEEE 802 MAC layers)
IEEE 802.11 wireless LAN
Link Access Procedures, D channel (LAPD)
LocalTalk
Multiprotocol Label Switching (MPLS)
Point-to-Point Protocol (PPP)
Serial Line Internet Protocol (SLIP) (obsolete)
Spanning tree protocol
StarLan
Token ring
Unidirectional Link Detection (UDLD)
and most forms of serial communication.
In an equivalent baseband model of a communication system, the modulated signal is replaced by a complex valued equivalent baseband signal with carrier frequency of 0 hertz, and the RF channel is replaced by an equivalent baseband channel model where the frequency response is transferred to baseband frequencies.
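In the usual notation (a standard convention, not given in the text above), a real passband signal s(t) with carrier frequency f_c relates to its complex equivalent baseband signal as:

```latex
s(t) = \Re\left\{ \tilde{s}(t)\, e^{j 2\pi f_c t} \right\}
% \tilde{s}(t) is the complex envelope; its spectrum is the passband
% spectrum shifted down so that f_c maps to 0 Hz.
```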
A signal "at baseband" is usually considered to include frequencies from near 0 Hz up to the highest frequency in the signal with significant power.
In general, signals can be described as including a whole range of different frequencies added together. In telecommunications in particular, it is often the case that those parts of the signal which are at low frequencies are 'copied' up to higher frequencies for transmission purposes, since there are few communications media that will pass low frequencies without distortion. The original low-frequency components are then referred to as the baseband signal, and the new high-frequency copy is typically referred to as the 'RF' (radio-frequency) signal; in other words, the baseband signal is the low-frequency signal which, once modulated, is transmitted over the various channels.
The concept of baseband signals is most often applied to real-valued signals, and systems that handle real-valued signals. Fourier analysis of such signals includes a negative-frequency band, but the negative-frequency information is just a mirror of the positive-frequency information, not new information. For complex-valued signals, on the other hand, the negative frequencies carry new information. In that case, the full two-sided bandwidth is generally quoted, rather than just the half measured from zero; the concept of baseband can be applied by treating the real and imaginary parts of the complex-valued signal as two different real signals.
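This mirror symmetry is easy to check numerically. A minimal NumPy sketch, using arbitrary illustrative data:

```python
import numpy as np

x = np.random.randn(64)          # any real-valued signal
X = np.fft.fft(x)

# For real x, the negative-frequency bins mirror the positive ones:
# X[N - k] equals the complex conjugate of X[k].
assert np.allclose(X[1:][::-1], np.conj(X[1:]))
```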
Baseband vs passband transmission in Ethernet and other network access technology
The word "BASE" in Ethernet physical layer standards, for example 10BASE5, 100BASE-T and 1000BASE-SX, implies baseband digital transmission, i.e that a line code is used, and that an unfiltered wire (i.e. a low-pass transmission channel) is used. This is as opposed to 10PASS-TS Ethernet, where "PASS" implies passband transmission. Passband transmission makes communication possible over a passband filtered channel such as the telephone network local-loop or a wireless channel. Passband digital transmission requires a digital modulation scheme, often provided by modem equipment. In the 10PASS-TS case the VDSL standard is utilized, which is based on the Discrete multi-tone modulation (DMT) scheme. Other examples of passband transmission are wireless networks and cable modems....
Modulation
A signal at baseband is often used to modulate a higher frequency carrier wave in order that it may be transmitted via radio. Modulation results in shifting the signal up to much higher frequencies (radio frequencies, or RF) than it originally spanned. A key consequence of the usual double-sideband amplitude modulation (AM) is that, usually, the range of frequencies the signal spans (its spectral bandwidth) is doubled. Thus, the RF bandwidth of a signal (measured from the lowest frequency as opposed to 0 Hz) is usually twice its baseband bandwidth. Steps may be taken to reduce this effect, such as single-sideband modulation; the highest frequency of such signals greatly exceeds the baseband bandwidth.
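The bandwidth doubling follows from the product-to-sum identity: a baseband tone at f_m mixed with a carrier at f_c lands at both f_c - f_m and f_c + f_m, so a baseband signal occupying 0 to B occupies f_c - B to f_c + B after modulation:

```latex
\cos(2\pi f_m t)\cos(2\pi f_c t)
  = \tfrac{1}{2}\cos\bigl(2\pi (f_c - f_m) t\bigr)
  + \tfrac{1}{2}\cos\bigl(2\pi (f_c + f_m) t\bigr)
```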
Some signals can be treated as baseband or not, depending on the situation. For example, a switched analog connection in the telephone network has energy below 300 Hz and above 3400 Hz removed by bandpass filtering; since the signal has no energy very close to zero frequency, it may not be considered a baseband signal, but in the telephone system's frequency-division multiplexing hierarchy it is usually treated as a baseband signal, by comparison with the modulated signals used for long-distance transmission. The 300 Hz lower band edge in this case is treated as "near zero", being a small fraction of the upper band edge.