
Peripherals

A graphical user interface (GUI) is a type of user interface that allows people to interact with computers and computer-controlled devices. It employs graphical icons, visual indicators, or special graphical elements called "widgets", along with text labels or text navigation, to represent the information and actions available to the user. The actions are usually performed through direct manipulation of the graphical elements. Use of the acronym led to the neologism "guituitive" (an interface which is intuitive).


Graphical user interface design is an important adjunct to application programming. Its goal is to enhance the usability of the underlying logical design of a stored program. The visible graphical interface features of an application are sometimes referred to as "chrome". They include graphical elements (widgets) that may be used to interact with the program. Common widgets are: windows, buttons, menus, and scroll bars. Larger widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones usually act as a user-input tool.
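As a minimal sketch of these ideas, the Python/Tkinter fragment below builds a window that acts as a container and places a few smaller widgets (a label, a button, a listbox with a scroll bar) inside it. The widget texts and the callback are invented for the example and are not tied to any particular application.

import tkinter as tk

# The window is the large "container" widget; the others are user-input tools.
root = tk.Tk()
root.title("Widget demo")

label = tk.Label(root, text="No button pressed yet")   # a text label
label.pack(padx=10, pady=5)

def on_click():
    # Direct manipulation: clicking the button changes the label's text.
    label.config(text="Button was pressed")

button = tk.Button(root, text="Press me", command=on_click)  # a button widget
button.pack(padx=10, pady=5)

# A scroll bar attached to a listbox, acting as another input tool.
frame = tk.Frame(root)
frame.pack(padx=10, pady=5)
scrollbar = tk.Scrollbar(frame)
listbox = tk.Listbox(frame, yscrollcommand=scrollbar.set)
for i in range(50):
    listbox.insert(tk.END, f"Item {i}")
scrollbar.config(command=listbox.yview)
listbox.pack(side=tk.LEFT)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)

root.mainloop()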


The widgets of a well-designed system are functionally independent from, and indirectly linked to, program functionality, so the graphical user interface can be easily customized, allowing the user to select or design a different skin at will.


Some graphical user interfaces are designed for the rigorous requirements of vertical markets. These are known as "application-specific graphical user interfaces." Examples include:


Touchscreen point of sale software used by waitstaff in busy restaurants 

Self-service checkouts used in some retail stores. 

ATMs 

Airline self-ticketing and check-in 

Information kiosks in public spaces like train stations and museums 

Monitor/control screens in embedded industrial applications which employ a real time operating system (RTOS). 

The latest cell phones and handheld game systems also employ application specific touchscreen graphical user interfaces.



Graphical user interfaces compared to command line interfaces

Graphical user interfaces were introduced in reaction to the steep learning curve of command line interfaces (CLI), which require commands to be typed on the keyboard. Because command line interfaces offer a large number of commands, complicated operations can be completed with a short sequence of words and symbols. This allows for greater efficiency and productivity once many commands are learned, but reaching that level takes time because the command words are not easily discoverable. WIMPs ("window, icon, menu, pointing device"), on the other hand, present the user with numerous widgets that represent and can trigger some of the system's available commands.


WIMPs make extensive use of modes, since the meaning of keys and of clicks on specific positions on the screen is redefined all the time. CLIs use modes only in the form of a current directory.


Most modern operating systems provide both a graphical user interface and some level of a CLI, although the graphical user interfaces usually receive more attention. The graphical user interface is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm or File System Visualizer (FSV).


Applications may also provide both interfaces, and when they do the graphical user interface is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program non-interactively, such as in a shell script.
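A small sketch of this wrapper pattern is shown below: a hypothetical command-line tool (here called convert-doc, an invented name and flags, not a real program) is wrapped in a minimal Tkinter front end. The command-line version remains usable on its own, for example from a shell script that processes every file in a directory.

import subprocess
import tkinter as tk
from tkinter import filedialog, messagebox

# Hypothetical CLI tool; replace with the real command and flags for your program.
CLI_COMMAND = ["convert-doc", "--to", "pdf"]

def run_conversion():
    # The GUI merely collects arguments and shells out to the command-line version.
    path = filedialog.askopenfilename(title="Choose a file to convert")
    if not path:
        return
    result = subprocess.run(CLI_COMMAND + [path], capture_output=True, text=True)
    if result.returncode == 0:
        messagebox.showinfo("Done", result.stdout or "Conversion finished")
    else:
        messagebox.showerror("Error", result.stderr or "Conversion failed")

root = tk.Tk()
root.title("convert-doc (GUI wrapper)")
tk.Button(root, text="Convert a file...", command=run_conversion).pack(padx=20, pady=20)
root.mainloop()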








Public key cryptography, also known as asymmetric cryptography, is a form of cryptography in which a user has a pair of cryptographic keys - a public key and a private key. The private key is kept secret, while the public key may be widely distributed. The keys are related mathematically, but the private key cannot be practically derived from the public key. A message encrypted with the public key can be decrypted only with the corresponding private key.


Conversely, secret-key cryptography, also known as symmetric cryptography, uses a single secret key for both encryption and decryption.


The two main branches of public key cryptography are:


public key encryption — a message encrypted with a recipient's public key cannot be decrypted by anyone except the recipient possessing the corresponding private key. This is used to ensure confidentiality. 

digital signatures — a message signed with a sender's private key can be verified by anyone who has access to the sender's public key, thereby proving that the sender signed it and that the message has not been tampered with. This is used to ensure authenticity. 

An analogy for public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is in essence the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message.


An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The message can be opened by anyone, but the presence of the seal authenticates the sender.


A central problem for public-key cryptography is proving that a public key is authentic, and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs. Another approach, used by PGP, is the "web of trust" method to ensure authenticity of key pairs.


Public key techniques are much more computationally intensive than purely symmetric algorithms. The judicious use of these techniques enables a wide variety of applications. In practice, public key cryptography is used in combination with secret-key methods for efficiency reasons. For encryption, the message may be encrypted with a secret-key algorithm using a randomly generated key, and that key encrypted with the recipient's public key. For digital signatures, a message is hashed (using a cryptographic hash function) and the resulting "hash value" is signed; before verifying the signature, the recipient computes the hash of the message himself and compares this hash value with the signed hash value to check that the message has not been tampered with.
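A minimal sketch of this hybrid approach, assuming the third-party Python cryptography package; the specific choices (RSA-2048, OAEP, a Fernet symmetric key) are illustrative assumptions, not prescribed by the text. The bulk data goes through the fast symmetric cipher, and only the small symmetric key is encrypted with the recipient's public key.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair: the public key is distributed, the private key kept secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Meet me at noon."

# 1. Encrypt the message with a random symmetric key (fast, suitable for large data).
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(message)

# 2. Encrypt only the small symmetric key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recipient: recover the symmetric key with the private key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == message

This is essentially the pattern used by protocols such as TLS and by PGP, although their exact mechanisms differ.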


Security

There is nothing especially more secure about asymmetric key algorithms than symmetric key algorithms. There are popular ones and unpopular ones. There are broken ones and ones that are, for now, not broken. Unfortunately, popularity is not a reliable indicator of security. Some algorithms have security proofs with various properties and of varying quality. Many proofs claim that breaking an algorithm, with respect to some well-defined security goals, is equivalent to solving one of the more popular mathematical problems that are presumed to be intractable, like factoring the product of two large primes or finding discrete logarithms. Some proofs have also been shown to be flawed. None of these algorithms can be proved secure in as absolute a sense as the one-time pad has. As with all cryptographic algorithms, these algorithms must be chosen and used with care.


The most obvious application of a public key encryption system is confidentiality; a message which a sender encrypts using the recipient's public key can only be decrypted by the recipient's paired private key.


Public-key digital signature algorithms can be used for sender authentication and non-repudiation. For instance, a user can encrypt a message with his own private key and send it. If another user can successfully decrypt it using the corresponding public key, this provides assurance that the first user (and no other) sent it. In practice, a cryptographic hash value of the message is calculated, encrypted with the private key and sent along with the message (resulting in a cryptographic signature of the message). The receiver can then verify message integrity and origin by calculating the hash value of the received message and comparing it against the decoded signature (the original hash). If the hash from the sender and the hash on the receiver side do not match, then the received message is not identical to the message which the sender "signed", or the sender's identity is wrong.
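A corresponding sketch of hash-then-sign, again assuming the Python cryptography package (RSA-PSS with SHA-256 is an illustrative choice): sign() hashes the message and signs the digest, and verify() raises an exception if either the message or the signature has been altered.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Sender's key pair; the public key is what recipients use for verification.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Pay Bob 10 dollars."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Sender: hash the message (SHA-256) and sign the digest with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

# Receiver: recompute the hash and check it against the signature.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: message or signature was tampered with")

# Any change to the message makes verification fail.
try:
    public_key.verify(signature, b"Pay Bob 1000 dollars.", pss, hashes.SHA256())
except InvalidSignature:
    print("Tampered message rejected")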


To achieve authentication, non-repudiation, and confidentiality, the sender would first encrypt (sign) the message using his private key, and then perform a second encryption using the recipient's public key.


These characteristics are useful for many other, sometimes surprising, applications, like digital cash, password-authenticated key agreement, multi-party key agreement, etc.




Actual algorithms — two linked keys

Not all asymmetric key algorithms operate in precisely this fashion. The most common ones have the property that Alice and Bob each own two keys, one for encryption and one for decryption. In a secure asymmetric key encryption scheme, the decryption key should not be deducible from the encryption key. This is known as public-key encryption, since the encryption key can be published without compromising the security of encrypted messages. By analogy, Bob might publish instructions on how to make a lock ("public key"), but the lock is such that it is impossible (so far as is known) to deduce from these instructions how to make a key which will open that lock ("private key"). Those wishing to send messages to Bob use the public key to encrypt the message; Bob uses his private key to decrypt it.



Weaknesses

Of course, there is a possibility that someone could "pick" Bob's or Alice's lock. Among symmetric key encryption algorithms, only the one-time pad can be proven to be secure against any adversary, no matter how much computing power is available. Unfortunately, there is no public-key scheme with this property, since all public-key schemes are susceptible to a brute-force key search attack. Such attacks are impractical if the amount of computation needed to succeed (termed the 'work factor' by Shannon) is out of reach of potential attackers. The work factor can be increased simply by choosing a longer key. Other attacks may be more efficient, and some are known for certain public key encryption algorithms; both RSA and ElGamal have known attacks that are much faster than the brute-force approach. Estimates of the necessary work factor have changed both with the decreasing cost of computing power and with new mathematical discoveries.


In practice, these insecurities can be avoided by choosing key sizes large enough that the best known attack would take so long that it is not worth any adversary's time and money to break the code. For example, if breaking an encryption scheme is estimated to take one thousand years, and it is used to encrypt your credit card details, those details are safe enough, since the time needed to recover them is far longer than their useful life; the card expires after a few years. Typically, the key size needed is much longer for public key algorithms than for symmetric key algorithms.
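A rough back-of-the-envelope calculation of the brute-force work factor is sketched below; the rate of one trillion trial keys per second is an arbitrary assumption for illustration only, and real attacks on public-key schemes are usually far better than brute force.

# Brute-force key search: on average an attacker must try half of all keys.
trials_per_second = 1e12          # assumed attacker speed (arbitrary illustration)
seconds_per_year = 3600 * 24 * 365

for key_bits in (56, 80, 128):
    keyspace = 2 ** key_bits
    years = (keyspace / 2) / trials_per_second / seconds_per_year
    print(f"{key_bits}-bit key: about {years:.2e} years on average")

# Each extra bit doubles the keyspace, so the work factor grows exponentially with
# key length; that is why simply choosing a longer key defeats brute force,
# though not smarter, algorithm-specific attacks.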


Major weaknesses have been found in several formerly promising asymmetric key algorithms. The 'knapsack packing' algorithm, for example, was shown to be insecure after a new attack was developed. More recently, attacks based on careful measurement of the exact time it takes known hardware to encrypt plain text have been used to simplify the search for likely decryption keys (see side channel attack). Thus, mere use of asymmetric key algorithms does not ensure security; discovering and protecting against new attacks is an area of active research.


Another potential security vulnerability in using asymmetric keys is the possibility of a man-in-the-middle attack, in which the communication of public keys is intercepted by a third party and modified to provide different public keys instead. Encrypted messages and responses must also be intercepted, decrypted and re-encrypted by the attacker using the correct public keys for the different communication segments in all instances to avoid suspicion. This attack may seem difficult to implement in practice, but it is not impossible when using insecure media (e.g. public networks such as the Internet or wireless communications). A malicious staff member at Alice's or Bob's ISP might find it quite easy.


One approach to preventing such attacks is the use of a certificate authority, a trusted third party responsible for verifying the identity of a user of the system and issuing a digital certificate, a signed block of data stating that this public key belongs to that person, company or other entity. This approach also has weaknesses. For example, the certificate authority must be trusted to have properly checked the identity of the key-holder and the correctness of the public key when it issues a certificate, and the certificate must be correctly installed at the communicating parties before it can be used. An attacker who could subvert the certificate authority into issuing a certificate for a bogus public key could then mount a man-in-the-middle attack as easily as if the certificate scheme were not used at all. Despite its problems, this approach is widely used; examples include SSL and its successor, TLS, which are commonly used to provide security in web browsers, for example to securely send credit card details to an online store.







Success factors

In many cases, an e-commerce company will survive not only on the strength of its product, but also by having a competent management team, good post-sales service, a well-organized business structure, sound network infrastructure and a secure, well-designed website. A company that wants to succeed has to attend to two sets of factors: technical and organizational aspects on the one hand, and customer orientation on the other. The following factors help companies succeed in e-commerce:



Technical and organizational aspects

Sufficient work done in market research and analysis. E-commerce is not exempt from good business planning and the fundamental laws of supply and demand. Business failure is as much a reality in e-commerce as in any other form of business. 

A good management team armed with information technology strategy. A company's IT strategy should be a part of the business re-design process. 

Providing an easy and secure way for customers to effect transactions. Credit cards are the most popular means of sending payments on the Internet, accounting for 90% of online purchases. In the past, card numbers were transferred securely between the customer and merchant through independent payment gateways. Such independent payment gateways are still used by most small and home businesses. Most merchants today process credit card transactions on site through arrangements made with commercial banks or credit card companies. 

Providing reliability and security. Parallel servers, hardware redundancy, fail-safe technology, information encryption, and firewalls can enhance this requirement. 

Providing a 360-degree view of the customer relationship, defined as ensuring that all employees, suppliers, and partners have a complete view, and the same view, of the customer. However, customers may not appreciate the big brother experience. 

Constructing a commercially sound business model. 

Engineering an electronic value chain in which one focuses on a "limited" number of core competencies -- the opposite of a one-stop shop. (Electronic stores can appear either specialist or generalist if properly programmed.) 

Operating on or near the cutting edge of technology and staying there as technology changes (but remembering that the fundamentals of commerce remain indifferent to technology). 

Setting up an organization of sufficient alertness and agility to respond quickly to any changes in the economic, social and physical environment. 

Providing an attractive website. The tasteful use of colour, graphics, animation, photographs, fonts, and white-space percentage may aid success in this respect. 

Streamlining business processes, possibly through re-engineering and information technologies. 

Providing complete understanding of the products or services offered, which not only includes complete product information, but also sound advisors and selectors. 

Naturally, the e-commerce vendor must also perform such mundane tasks as being truthful about its product and its availability, shipping reliably, and handling complaints promptly and effectively. A unique property of the Internet environment is that individual customers have access to far more information about the seller than they would find in a brick-and-mortar situation. (Of course, customers can, and occasionally do, research a brick-and-mortar store online before visiting it, so this distinction does not hold water in every case.)




Problems

Even if a provider of E-commerce goods and services rigorously follows these "key factors" to devise an exemplary e-commerce strategy, problems can still arise. Sources of such problems include:


Failure to understand customers, why they buy and how they buy. Even a product with a sound value proposition can fail if producers and retailers do not understand customer habits, expectations, and motivations. E-commerce could potentially mitigate this potential problem with proactive and focused marketing research, just as traditional retailers may do. 

Failure to consider the competitive situation. One may have the will to construct a viable book e-tailing business model, but lack the capability to compete with Amazon.com. 

Inability to predict environmental reaction. What will competitors do? Will they introduce competitive brands or competitive web sites? Will they supplement their service offerings? Will they try to sabotage a competitor's site? Will price wars break out? What will the government do? Research into competitors, industries and markets may mitigate some consequences here, just as in non-electronic commerce. 

Over-estimation of resource competence. Can staff, hardware, software, and processes handle the proposed strategy? Have e-tailers failed to develop employee and management skills? These issues may call for thorough resource planning and employee training. 

Failure to coordinate. If existing reporting and control relationships do not suffice, one can move towards a flat, accountable, and flexible organizational structure, which may or may not aid coordination. 

Failure to obtain senior management commitment. This often results in a failure to gain sufficient corporate resources to accomplish a task. It may help to get top management involved right from the start. 

Failure to obtain employee commitment. If planners do not explain their strategy well to employees, or fail to give employees the whole picture, then training and setting up incentives for workers to embrace the strategy may assist. 

Under-estimation of time requirements. Setting up an e-commerce venture can take considerable time and money, and failure to understand the timing and sequencing of tasks can lead to significant cost overruns. Basic project planning, critical path, critical chain, or PERT analysis may mitigate such failings. Profitability may have to wait for the achievement of market share. 

Failure to follow a plan. Poor follow-through after the initial planning, and insufficient tracking of progress against a plan can result in problems. One may mitigate such problems with standard tools: benchmarking, milestones, variance tracking, and penalties and rewards for variances. 

Becoming the victim of organized crime. Many syndicates have caught on to the potential of the Internet as a new revenue stream. Two main methods are as follows: (1) Using identity theft techniques like phishing to order expensive goods and bill them to some innocent person, then liquidating the goods for quick cash; (2) Extortion by using a network of compromised "zombie" computers to engage in distributed denial of service attacks against the target Web site until it starts paying protection money. 

Failure to expect the unexpected. Too often new businesses do not take into account the amount of time, money or resources needed to complete a project and often find themselves without the necessary components to become successful. 






Security is the condition of being protected against danger or loss. In the general sense, security is a concept similar to safety. The nuance between the two is an added emphasis on being protected from dangers that originate from outside. Individuals or actions that encroach upon the condition of protection are responsible for the breach of security.


The word "security" in general usage is synonymous with "safety," but as a technical term "security" means that something not only is secure but that it has been secured. In telecommunications, the term security has the following meanings:


A condition that results from the establishment and maintenance of protective measures that ensure a state of inviolability from hostile acts or influences. 

With respect to classified matter, the condition that prevents unauthorized persons from having access to official information that is safeguarded in the interests of national security. 

Measures taken by a military unit, an activity or installation to protect itself against all acts designed to, or which may, impair its effectiveness. 

Sources: from Federal Standard 1037C and adapted from the Department of Defense Dictionary of Military and Associated Terms


Security has to be compared and contrasted with other related concepts: safety, continuity, reliability. The key difference between security and reliability is that security must take into account the actions of active malicious agents attempting to cause destruction.






An Internet service provider (abbr. ISP, also called Internet access provider or IAP) is a business or organization that provides consumers with access to the Internet and related services. In the past, most ISPs were run by the phone companies. Now, ISPs can be started by just about any individual or group with sufficient money and expertise. In addition to Internet access via various technologies such as dial-up and DSL, they may provide a combination of services including Internet transit, domain name registration and hosting, web hosting, and colocation.


ISP connection options

ISPs employ a range of technologies to enable consumers to connect to their network. For "home users", the most popular options include dial-up, DSL (typically ADSL), Broadband wireless access, Cable modem, and ISDN (typically BRI). For customers who have more demanding requirements, such as medium-to-large businesses, or other ISPs, DSL (often SHDSL or ADSL), Ethernet, Metro Ethernet, Gigabit Ethernet, Frame Relay, ISDN (BRI or PRI), ATM, satellite Internet access and SONET are more likely. With the increasing popularity of downloading music and online video and the general demand for faster page loads, higher bandwidth connections are becoming more popular.



How ISPs connect to the Internet

Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. In the simplest case, a single connection is established to an upstream ISP using one of the technologies described above, and the ISP uses this connection to send or receive any data to or from parts of the Internet beyond its own network; in turn, the upstream ISP uses its own upstream connection, or connections to its other customers (usually other ISPs) to allow the data to travel from source to destination.


In reality, the situation is often more complicated. For example, ISPs with more than one Point of Presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and have connections to each one at one or more of their PoPs. ISPs may engage in peering, where multiple ISPs interconnect with one another at a peering point or Internet exchange point (IX), allowing the routing of data between their networks, without charging one another for that data - data that would otherwise have passed through their upstream ISPs, incurring charges from the upstream ISP. ISPs that require no upstream, and have only customers and/or peers, are called Tier 1 ISPs, indicating their status as ISPs at the top of the Internet hierarchy. Routers, switches, Internet routing protocols, and the expertise of network administrators all have a role to play in ensuring that data follows the best available route and that ISPs can "see" one another on the Internet.


Virtual ISP

A Virtual ISP (vISP) purchases services from another ISP (sometimes called a wholesale ISP or similar within this context) that allow the vISP's customers to access the Internet via one or more Points of Presence (PoPs) that are owned and operated by the wholesale ISP. There are various models for the delivery of this type of service. For example, the wholesale ISP could provide network access to end users via its dial-up modem PoPs or DSLAMs installed in telephone exchanges, and route, switch, and/or tunnel the end user traffic to the vISP's network, whereupon the vISP may route the traffic toward its destination. In another model, the vISP does not route any end user traffic, and needs only provide AAA (Authentication, Authorization and Accounting) functions, as well as any "value-add" services like email or web hosting. Any given ISP may use its own PoPs to deliver one service and use a vISP model to deliver another, or use a combination to deliver a service in different areas. The service provided by a wholesale ISP in a vISP model is distinct from that of an upstream ISP, even though in some cases they may be one and the same company. The former provides connectivity from the end user's premises to the Internet or to the end user's ISP; the latter provides connectivity from the end user's ISP to all or parts of the rest of the Internet.


A vISP can also refer to a completely automated white label service offered to anyone at no cost or for a minimal set-up fee. The actual ISP providing the service generates revenue from the calls and may also share a percentage of that revenue with the owner of the vISP. All technical aspects are dealt with by the provider, leaving the owner of the vISP with the task of promoting the service. This sort of service is, however, declining due to the popularity of unmetered Internet access, also known as flat rate.






General Packet Radio Service (GPRS) is a Mobile Data Service available to users of Global System for Mobile Communications (GSM) and IS-136 mobile phones. GPRS data transfer is typically charged per megabyte of transferred data, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user has actually transferred data or has been in an idle state. GPRS can be used for services such as Wireless Application Protocol (WAP) access, Short Message Service (SMS), Multimedia Messaging Service (MMS), and for Internet communication services such as email and World Wide Web access.


2G cellular systems combined with GPRS are often described as "2.5G", that is, a technology between the second (2G) and third (3G) generations of mobile telephony. GPRS provides moderate-speed data transfer by using unused Time Division Multiple Access (TDMA) channels in, for example, the GSM system. Originally there was some thought to extend GPRS to cover other standards, but instead those networks are being converted to use the GSM standard, so that GSM is the only kind of network where GPRS is in use. GPRS is integrated into GSM Release 97 and newer releases. It was originally standardized by the European Telecommunications Standards Institute (ETSI), but is now maintained by the 3rd Generation Partnership Project (3GPP).


Basics

GPRS is packet-switched, which means that multiple users share the same transmission channel, each transmitting only when it has data to send. Thus the total available bandwidth can be immediately dedicated to those users who are actually sending at any given moment, providing higher utilization where users send or receive data only intermittently. Web browsing, receiving e-mails as they arrive and instant messaging are examples of uses that involve intermittent data transfer and therefore benefit from sharing the available bandwidth. By contrast, in the older Circuit Switched Data (CSD) standard included in GSM, a connection establishes a circuit and reserves the full bandwidth of that circuit for the lifetime of the connection.


Usually, GPRS data are billed per kilobyte of information transceived, while circuit-switched data connections are billed per second. The latter is because even when no data are being transferred, the bandwidth is unavailable to other potential users.


The multiple access methods used in GSM with GPRS are based on frequency division duplex (FDD) and FDMA. During a session, a user is assigned to one pair of up-link and down-link frequency channels. This is combined with time domain statistical multiplexing, i.e. packet mode communication, which makes it possible for several users to share the same frequency channel. The packets have constant length, corresponding to a GSM time slot. The down-link uses first-come first-served packet scheduling, while the up-link uses a scheme very similar to reservation ALOHA. This means that slotted Aloha (S-ALOHA) is used for reservation inquiries during a contention phase, and then the actual data is transferred using dynamic TDMA with first-come first-served scheduling.
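The reservation phase mentioned above uses slotted ALOHA, for which the standard textbook throughput model is S = G * exp(-G); this is a general contention-channel result rather than a GPRS-specific figure, but a quick sketch makes the trade-off clear.

import math

# Classic slotted-ALOHA model: with offered load G (average transmission attempts
# per slot), the expected throughput is S = G * exp(-G) successful packets per slot.
for G in (0.25, 0.5, 1.0, 2.0, 4.0):
    S = G * math.exp(-G)
    print(f"offered load G={G}: throughput S={S:.3f} packets/slot")

# The maximum, S = 1/e (about 0.368), occurs at G = 1, which is why contention-based
# reservation is used only for short requests, with the actual data sent afterwards
# on scheduled (dynamic TDMA) slots.
print("peak throughput:", 1 / math.e)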


GPRS originally supported (in theory) Internet Protocol (IP), Point-to-Point Protocol (PPP) and X.25 connections. The last was typically used for applications like wireless payment terminals, although it has since been removed from the standard. X.25 can still be supported over PPP, or even over IP, but doing so requires either a router to perform the encapsulation or intelligence built into the end terminal. In practice, mainly IPv4 is used: PPP is often not supported by the mobile phone operator, while IPv6 is not yet popular.






Code division multiple access (CDMA) is a form of multiplexing and a method of multiple access that divides up a radio channel not by time (as in time division multiple access), nor by frequency (as in frequency-division multiple access), but instead by using different pseudo-random code sequences for each user. CDMA is a form of "spread-spectrum" signaling, since the modulated coded signal has a much higher bandwidth than the data being communicated.


As a crude analogy to the CDMA scheme, imagine a large room (channel) containing many people wishing to communicate amongst each other. Possible mechanisms for avoiding confusion include having only one person speak at a time (time division), having people speak at different pitches (frequency division), or in different directions (spatial division). The CDMA approach is, by contrast, to have them speak different languages to each other. Groups of people speaking the same language can understand each other, but not any of the other groups. Similarly in CDMA, each group of users is given a shared code. There are many codes occupying the same channel, but only the users associated with a particular code can understand each other.
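A toy numerical sketch of the same idea (not any particular air-interface standard) uses two orthogonal spreading codes: each user's bits are spread by its own code, the two signals simply add on the shared channel, and each receiver recovers its user's bits by correlating against the matching code. The codes and bit patterns below are invented for the example.

# Toy CDMA example with two orthogonal (Walsh) codes of length 4.
code_a = [+1, +1, +1, +1]
code_b = [+1, -1, +1, -1]

def spread(bits, code):
    """Map each data bit (0/1 -> -1/+1) onto a sequence of chips."""
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(channel, code):
    """Correlate the received chips with a code to recover that user's bits."""
    n, bits = len(code), []
    for i in range(0, len(channel), n):
        corr = sum(x * c for x, c in zip(channel[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

bits_a = [1, 0, 1]
bits_b = [0, 0, 1]

# Both users transmit at the same time on the same channel: their chips simply add.
channel = [a + b for a, b in zip(spread(bits_a, code_a), spread(bits_b, code_b))]

print(despread(channel, code_a))  # [1, 0, 1], user A's data
print(despread(channel, code_b))  # [0, 0, 1], user B's data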


CDMA also refers to digital cellular telephony systems that make use of this multiple access scheme, such as those pioneered by QUALCOMM, and W-CDMA, adopted by the International Telecommunication Union (ITU).


CDMA has been used in many communications and navigation systems, including the Global Positioning System and in the OmniTRACS satellite system for transportation logistics.


Usage in mobile telephony

A number of different terms are used to refer to CDMA implementations. The original U.S. standard defined by QUALCOMM was known as IS-95, the IS referring to an Interim Standard of the Telecommunications Industry Association (TIA). IS-95 is often referred to as 2G or second generation cellular. The QUALCOMM brand name cdmaOne may also be used to refer to the 2G CDMA standard. CDMA has been submitted for approval as a mobile air interface standard to the International Telecommunication Union (ITU).


Whereas the Global System for Mobile Communications (GSM) standard is a specification of an entire network infrastructure, the CDMA interface relates only to the air interface—the radio part of the technology. For example, GSM specifies an infrastructure based on internationally approved standards, whereas CDMA allows each operator to provide network features as it sees fit. On the air interface, work has been progressing to harmonise the signalling suites (GSM's being based on ISDN SS7).


After a couple of revisions, IS-95 was superseded by the IS-2000 standard. This standard was introduced to meet some of the criteria laid out in the IMT-2000 specification for 3G, or third generation, cellular. It is also referred to as 1xRTT which simply means "1 times Radio Transmission Technology" and indicates that IS-2000 uses the same 1.25 MHz carrier shared channel as the original IS-95 standard. A related scheme called 3xRTT uses three 1.25 MHz carriers for a 3.75 MHz bandwidth that would allow higher data burst rates for an individual user, but the 3xRTT scheme has not been commercially deployed. More recently, QUALCOMM has led the creation of a new CDMA-based technology called 1xEV-DO, or IS-856, which provides the higher packet data transmission rates required by IMT-2000 and desired by wireless network operators.


This CDMA system is frequently confused with a similar but incompatible technology called Wideband Code Division Multiple Access (W-CDMA), which forms the basis of the UMTS air interface. The W-CDMA air interface is used in the global 3G standard UMTS and the Japanese 3G standard FOMA, by NTT DoCoMo and Vodafone; however, the CDMA family of US national standards (including cdmaOne and CDMA2000) is not compatible with the W-CDMA family of International Telecommunication Union (ITU) standards.


Another important application of CDMA — predating and entirely distinct from CDMA cellular — is the Global Positioning System or GPS.


The QUALCOMM CDMA system includes highly accurate time signals (usually referenced to a GPS receiver in the cell base station), so cell phone CDMA-based clocks are an increasingly popular type of radio clock for use in computer networks. The main advantage of using CDMA cell phone signals for reference clock purposes is that they work better inside buildings, thus often eliminating the need to mount a GPS antenna on the outside of a building.






The Global System for Mobile communications (GSM: originally from Groupe Spécial Mobile) is the most popular standard for mobile phones in the world. GSM service is used by over 2 billion people across more than 212 countries and territories.[1][2] Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world. GSM differs significantly from its predecessors in that both signaling and speech channels are digital, which is why it is considered a second generation (2G) mobile phone system. This has also meant that data communication was built into the system from an early stage; the standard is now maintained by the 3rd Generation Partnership Project (3GPP).


The key advantage of GSM systems to consumers has been higher digital voice quality and low-cost alternatives to making calls, such as the Short Message Service (SMS, also called "text messaging"). The advantage for network operators has been the ease of deploying equipment from any vendor that implements the standard.[3] Like other cellular standards, GSM allows network operators to offer roaming services so that subscribers can use their phones on GSM networks all over the world.


Newer versions of the standard were backward-compatible with the original GSM phones. For example, Release '97 of the standard added packet data capabilities, by means of General Packet Radio Service (GPRS). Release '99 introduced higher speed data transmission using Enhanced Data Rates for GSM Evolution (EDGE).


Radio interface

GSM is a cellular network, which means that mobile phones connect to it by searching for cells in the immediate vicinity. GSM networks operate in four different frequency ranges. Most GSM networks operate in the 900 MHz or 1800 MHz bands. Some countries in the Americas (including Canada and the United States) use the 850 MHz and 1900 MHz bands because the 900 and 1800 MHz frequency bands were already allocated.


The rarer 400 and 450 MHz frequency bands are assigned in some countries, notably Scandinavia, where these frequencies were previously used for first-generation systems.


In the 900 MHz band the uplink frequency band is 890-915 MHz, and the downlink frequency band is 935-960 MHz. This 25 MHz bandwidth is subdivided into 124 carrier frequency channels, each spaced 200 kHz apart. Time division multiplexing is used to allow eight full-rate or sixteen half-rate speech channels per radio frequency channel. There are eight radio timeslots (giving eight burst periods) grouped into what is called a TDMA frame. Half rate channels use alternate frames in the same timeslot. The channel data rate is 270.833 kbit/s, and the frame duration is 4.615 ms.
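These numbers are mutually consistent, as the small check below shows; the derived figure of roughly 156 bit periods per timeslot is computed here from the quoted rate and frame length rather than stated in the text.

# Consistency check of the figures quoted for the GSM 900 MHz band.
band_width_hz = 25e6            # 915 - 890 MHz uplink (and likewise downlink)
channel_spacing_hz = 200e3
print(band_width_hz / channel_spacing_hz)        # 125 raster positions, 124 usable carriers

bit_rate = 270.833e3            # channel data rate, bit/s
frame_s = 4.615e-3              # TDMA frame duration
slots_per_frame = 8

slot_s = frame_s / slots_per_frame
print(slot_s * 1e6, "microseconds per timeslot")     # ~577 us
print(slot_s * bit_rate, "bit periods per timeslot") # ~156 bit periods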


The transmission power in the handset is limited to a maximum of 2 watts in GSM850/900 and 1 watt in GSM1800/1900.


GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 6 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called "Full Rate" (13 kbit/s) and "Half Rate" (6 kbit/s). These used a system based upon linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal.


GSM was further enhanced in 1997[7] with the GSM-EFR codec, a 12.2 kbit/s codec that uses a full rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full rate channels, and less robust but still relatively high quality when used in good radio conditions on half-rate channels.


There are four different cell sizes in a GSM network - macro, micro, pico and umbrella cells. The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base station antenna is installed on a mast or a building above average roof top level. Micro cells are cells whose antenna height is under average roof top level; they are typically used in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Umbrella cells are used to cover shadowed regions of smaller cells and fill in gaps in coverage between those cells.


Cell horizontal radius varies depending on antenna height, antenna gain and propagation conditions from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 km or 22 miles. There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain and the timing advance.
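The 35 km figure follows from the timing-advance mechanism; a rough derivation is sketched below, assuming the standard 6-bit timing-advance field (values 0 to 63, an assumption not stated in the text) and the quoted channel bit rate.

# Why the practical GSM cell radius limit is about 35 km.
c = 3e8                          # speed of light, m/s
bit_rate = 270.833e3             # GSM channel bit rate, bit/s
bit_duration = 1 / bit_rate      # ~3.69 microseconds per bit

# Timing advance is expressed in whole bit periods, 0..63 (6-bit field).
max_ta_bits = 63

# Each bit period of advance compensates one bit period of round-trip delay,
# so divide by two for the one-way distance.
max_radius_m = max_ta_bits * bit_duration * c / 2
print(max_radius_m / 1000, "km")   # ~34.9 km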


Indoor coverage is also supported by GSM and may be achieved by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters, to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. These are typically deployed when a lot of call capacity is needed indoors, for example in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of the radio signals from nearby cells.


The modulation used in GSM is Gaussian minimum-shift keying (GMSK), a kind of continuous-phase frequency shift keying. In GMSK, the signal to be modulated onto the carrier is first smoothed with a Gaussian low-pass filter prior to being fed to a frequency modulator, which greatly reduces the interference to neighboring channels (adjacent channel interference).


A nearby GSM handset is usually the source of the "dit dit dit, dit dit dit, dit dit dit" signal that can be heard from time to time on home stereo systems, televisions, computers, and personal music devices. When these audio devices are in the near field of the GSM handset, the radio signal is strong enough that the solid-state amplifiers in the audio chain act as a detector. The clicking noise itself represents the power bursts that carry the TDMA signal. These signals have been known to interfere with other electronic devices, such as car stereos and portable audio players. This is a form of RFI, and could be mitigated or eliminated by the use of additional shielding and/or bypass capacitors in these audio devices; however, the increased cost of doing so is difficult for a designer to justify.







Frequency modulation (FM) is a form of modulation that represents information as variations in the instantaneous frequency of a carrier wave. (Contrast this with amplitude modulation, in which the amplitude of the carrier is varied while its frequency remains constant.) In analog applications, the carrier frequency is varied in direct proportion to changes in the amplitude of an input signal. Digital data can be represented by shifting the carrier frequency among a set of discrete values, a technique known as frequency-shift keying.
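A short numerical sketch of this definition follows; the sample rate, carrier frequency and deviation are arbitrary illustration values. The instantaneous frequency is the carrier frequency plus a deviation proportional to the message amplitude, and the transmitted waveform is the cosine of the accumulated phase.

import math

# Generate an FM waveform sample by sample:
#   f_inst(t) = f_c + k_f * m(t)        (instantaneous frequency)
#   s(t)      = cos(2*pi * integral of f_inst)
fs = 48_000            # sample rate, Hz (arbitrary)
f_c = 5_000            # carrier frequency, Hz (arbitrary)
f_m = 200              # message (modulating) frequency, Hz
k_f = 1_000            # frequency deviation per unit message amplitude, Hz

phase = 0.0
samples = []
for n in range(fs // 10):                      # 0.1 s of signal
    t = n / fs
    m = math.sin(2 * math.pi * f_m * t)        # message signal, amplitude in [-1, 1]
    f_inst = f_c + k_f * m                     # frequency follows the message amplitude
    phase += 2 * math.pi * f_inst / fs         # accumulate phase (discrete integral)
    samples.append(math.cos(phase))

print(len(samples), "samples generated; peak deviation =", k_f, "Hz")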


FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech (see FM broadcasting). Normal (analog) TV sound is also broadcast using FM. A narrowband form is used for voice communications in commercial and amateur radio settings. The type of FM used in broadcasting is generally called wide FM (W-FM). In two-way radio, narrowband FM (N-FM) is used to conserve bandwidth. In addition, FM is used to send signals into space.


FM is also used at intermediate frequencies by most analog VCR systems, including VHS, to record the luminance (black and white) portion of the video signal. FM is the only feasible method of recording video to and retrieving video from magnetic tape without extreme distortion, as video signals have a very large range of frequency components — from a few hertz to several megahertz, too wide for equalisers to work with due to electronic noise below -60 dB. FM also keeps the tape at saturation level, and therefore acts as a form of noise reduction, and a simple limiter can mask variations in the playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the signal — as was done on V2000 and many Hi-band formats — can keep mechanical jitter under control and assist timebase correction.


FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature for several generations of personal computer sound cards.


Applications in radio

 

[Figure: An example of frequency modulation. The top diagram shows the modulating signal superimposed on the carrier wave; the bottom diagram shows the resulting frequency-modulated signal.]

Edwin Armstrong presented his paper, "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", which first described FM radio, before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936. [1]


Wideband FM (W-FM) requires a wider bandwidth than amplitude modulation by an equivalent modulating signal, but this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against simple signal amplitude fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission: hence the term "FM radio" (although for many years the BBC insisted on calling it "VHF radio", because commercial FM broadcasting uses a well-known part of the VHF band; in certain countries, expressions referencing the more familiar wavelength notion are still used in place of the more abstract modulation technique name).


FM receivers employ a special detector for FM signals and exhibit a phenomenon called capture, where the tuner is able to clearly receive the stronger of two stations being broadcast on the same frequency. Problematically however, frequency drift or lack of selectivity may cause one station or signal to be suddenly overtaken by another on an adjacent channel. Frequency drift typically constituted a problem on very old or inexpensive receivers, while inadequate selectivity may plague any tuner.


An FM signal can also be used to carry a stereo signal: see FM stereo. However, this is done by using multiplexing and demultiplexing before and after the FM process, and is not part of FM proper. The rest of this article ignores the stereo multiplexing and demultiplexing process used in "stereo FM", and concentrates on the FM modulation and demodulation process, which is identical in stereo and mono processes.

















