The Feature Phone

The Evolution of the Feature Phone

Until the end of the 1990s, the development of mobile phones focused on making them small, light and inexpensive while allowing longer talk and standby times. From the turn of the century onwards, new incentives were needed to sell mobile phones. These incentives consisted of additional features, which became possible through new developments in technology, described in the sections below.

In addition, a new wireless technology was introduced that made it possible to attach other devices such as headsets. This technology was called Bluetooth.

Display

One of the reasons why cell phones didn’t get smaller from the end of the 1990s onwards, but rather larger again, was the display. In the beginning it was completely sufficient to offer a display of 2 or 3 lines, showing the network operator, time, phone numbers, names or call status. There were also icons, which became increasingly similar across manufacturers, showing the charge status of the battery and the reception strength of the network.

Displays with 2-3 lines of numbers became graphic displays, initially monochrome, then in color.

But over time, manufacturers moved from line displays to graphic displays, initially with small resolutions such as the 84 × 48 pixels of the Nokia 3310. A graphic display had many advantages for designing increasingly complex user menus: icons could be introduced to make navigation easier.

In 1997, Siemens presented the first mobile phone with a color display. However, this display served more to show text in color than to render images. It wasn’t until 2001 that the Sony-Ericsson T68 appeared, the first phone with a color display offering a resolution of 101 × 80 pixels and 256 colors. This made it possible to display color images and photos.

Display of the Sony-Ericsson T68 (Source: Teltarif)

Storage

As already discussed above, flash technology was used in the second generation of mobile phones to store and, if necessary, change programs and data. It was initially hoped that the more expensive flash memory could be replaced by ROM, i.e. read-only memory, but the programs were so complex that they constantly needed updates, and so flash memory remained.

In the end, flash memories are also integrated circuits and are subject to Moore’s Law, i.e. they became smaller and cheaper every year, or offered more capacity year after year. In 1999, 256 Mbit flash memory was a common product. This also meant more memory for applications and additional functions such as storing photographs. However, for a long time the internal memory of mobile phones remained quite limited, especially for photographs.

Development of Flash Memory (Sources: Toshiba, Mitsubishi, Intel, NEC, Samsung)

In 2001, the Siemens SL45 came onto the market, the first device into which an external memory card could be inserted. This made it possible not only to access a lot of memory but also to exchange it easily. At the time, it was much easier to transfer images using memory cards than to connect the device to a PC via cable.

Siemens SL45. Source: Siemens

Universal Serial Bus (USB)

New applications in the phone made it necessary to exchange data with external devices, mostly PCs. It became important to create backup copies of the phone’s internal data and to synchronize phone book or calendar entries with those on the PC. This required a standardized interface for data exchange.

In the world of PCs there were initially two interfaces: a parallel port and a serial port. The parallel interface was used for fast data transfer, which was achieved by transmitting entire data words in parallel through a whole bundle of wires. Most printers were connected via parallel interfaces, using thick cables and wide connectors. A serial interface was usually a “universal asynchronous receiver transmitter” (UART). In this proven interface, the data is transferred serially, bit by bit; a two-wire cable was usually sufficient.
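As a small illustration of the principle (deliberately simplified: no parity bit, and not tied to any particular hardware), the following Python sketch frames a byte the way a UART sends it, bit by bit over a single data line:

```python
def uart_frame(byte):
    """Return the bit sequence a UART puts on the wire for one byte:
    start bit (0), eight data bits LSB first, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start + data + stop

print(uart_frame(0x41))  # letter 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```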

In the mid-1990s, the PC industry (Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel) developed a new interface standard to replace the parallel and serial interfaces: the Universal Serial Bus (USB). Such an interface was also intended to be easier to use. Instead of special software having to be loaded and/or activated on the PC, the PC should automatically recognize that a device was connected to a USB interface and what kind of application it was. This concept was called “Plug and Play”. USB was designed to be bidirectional from the start and was intended to support high bit rates. In 1996 the first USB interfaces were offered by Intel. In 1998 the software (from Microsoft) was ready and USB was installed in PCs with the Windows 98 operating system. Initially the data rate was limited to 12 Mbit/s.
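The following Python sketch illustrates the Plug-and-Play idea; the descriptor field names follow the USB specification, but the values and the small driver table are made up for illustration:

```python
# Sketch of "Plug and Play": a USB device reports a descriptor when it is
# attached; the host reads it and selects a matching driver automatically.
descriptor = {
    "idVendor": 0x1234,    # hypothetical vendor ID
    "idProduct": 0x0001,   # hypothetical product ID
    "bDeviceClass": 0x02,  # USB class code: communications device
}

def select_driver(desc):
    """Pick a driver from the class code, as a host does on enumeration."""
    drivers = {0x02: "communications driver", 0x08: "mass-storage driver"}
    return drivers.get(desc["bDeviceClass"], "vendor-specific driver")

print(select_driver(descriptor))  # -> communications driver
```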

From 1998, USB interfaces were also used in mobile phones; one appeared for the first time in the Qualcomm MSM3100 chipset.

In 2000, USB 2.0 was released, which dramatically increased the data rate to 480 Mbit/s. New connectors were also added, especially the Mini-USB connectors. These were particularly suitable for connecting very compact devices such as compact digital cameras, MP3 players or cell phones to a PC. In 2004, the Motorola RAZR V3 was the first mobile phone with a Mini-USB port for power supply, data transfer and headset connection.

Mini-USB connector as used on some feature phones

Camera

Until the late 1960s, photography was based on chemical processes, especially on film. To create “digital images”, it was necessary to measure the intensity (and later the color) of a spot of light. A breakthrough for digital images was the invention and development of Charge-Coupled Devices.

Charge-Coupled Devices (CCD)

It was discovered early on that a semiconductor diode has special properties under the influence of light. If a diode is operated in the reverse direction, it does not conduct any current. However, if light falls on the diode, electrons are “released” and generate a small current: the more light, the more current.

A “light diode” (photodiode) was therefore a natural element for a digital camera. It was straightforward to arrange a matrix of small photodiodes, e.g. 100 × 100, on a surface that is then illuminated through optics. Each diode corresponds to a point of light in the image, which is called a pixel. “Pixel” is a coined word derived from picture (“pix”) and element.

However, when taking a picture it is impractical to measure, say, 10,000 individual currents. It is much better to measure voltages than currents. This can be done by charging small capacitors at the diodes during light exposure; the charge of each capacitor then corresponds to the amount of light that hit its pixel. It is quite easy to create a whole matrix of diodes with associated capacitors on a silicon plate, but the problem was how to “read out” the charge of the capacitors.

The solution came from two researchers at Bell Laboratories in the late 1960s, although they were not working on photodiodes at all, but on memory elements. Their names were Willard Boyle and George E. Smith, and they received the Nobel Prize in Physics in 2009 for their work. They, too, worked with capacitors on silicon, which they wanted to charge and read out in order to store bits. Boyle and Smith invented a method by which charge could be transferred from one capacitor to an adjacent (empty) capacitor by changing the voltages across the capacitors. The drawing below shows the process.

Principle of Charge-Coupled Devices

The charge is thus passed on like a “bucket chain” until it reaches the edge of the silicon surface, where it can be converted into a digital value using an analog-to-digital converter. In this way, charges can be applied to a silicon surface, stored, and read out again. This was Boyle and Smith’s original plan. However, the method was not used for storing digital data; instead, it proved to be very advantageous for reading out charges that were generated by exposure to light. So the main application of the CCD became sensors for digital cameras.
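A minimal Python sketch of this “bucket chain” readout (purely illustrative, not a model of a real device) might look like this:

```python
# Reading out one row of a CCD as a "bucket chain": each clock cycle
# shifts every charge packet one cell toward the output node, where a
# (simulated) ADC digitizes the arriving packet.

def read_out_row(charges, adc_bits=8, full_well=1.0):
    """Shift all charge packets to the output and digitize each one."""
    levels = 2 ** adc_bits - 1
    digital = []
    row = list(charges)
    while row:
        pixel = row.pop()                 # packet arriving at the output
        pixel = min(pixel, full_well)     # a real pixel saturates
        digital.append(round(pixel / full_well * levels))  # ADC step
    digital.reverse()                     # restore original pixel order
    return digital

# Four pixels after exposure: more light -> more charge
print(read_out_row([0.1, 0.8, 0.5, 0.0]))  # -> [26, 204, 128, 0]
```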

Some semiconductor companies quickly began to build corresponding CCD sensors. Fairchild succeeded in producing a 100 × 100 pixel sensor in 1975, which was built into the first digital camera, constructed at Kodak. However, it could only take simple black-and-white images with low resolution.

Digital Cameras

Higher-quality cameras were developed for satellites and space travel. A spy satellite launched at the end of 1976 already carried an 800 × 800 pixel sensor. The first successful camera product came from Sony in 1983: the CCD-G5 video camera.

For a long time there was no suitable storage medium for digital cameras. The first cameras at the end of the 1980s still used static RAM as the storage medium. It wasn’t until the end of the 1990s that replaceable flash cards for digital cameras became popular, such as the 40 MB flash cards from Toshiba.

Kyocera VP-210, the first mobile phone with an integrated camera. Source: Wikimedia

The first mobile phone with a built-in camera was produced by Kyocera in 1999 for the Japanese market. It is noteworthy that the camera was built into the front and thus took a “selfie”, i.e. a picture of the user. The photos were not transferred to PCs but sent by e-mail over the mobile internet.

In Europe it took another three years until the first cell phones with cameras appeared. One of the first was the Nokia 7650, released in 2002, when GPRS had already been introduced; it too was primarily a mobile internet phone. Its camera could capture images with VGA resolution (640 × 480 pixels) and 16 million (24-bit) colors. The phone was a “slider”, and the camera was revealed by sliding the phone open. It was also the first Nokia phone to run the Symbian operating system.

Nokia 7650, the first GPRS phone with a camera. Source: Nokia

Audio

Audio was one of the first applications to introduce digital storage. A complete album (formerly corresponding to an LP) was stored on a CD, but the audio signal was stored directly, without any compression. The need to compress audio signals was first driven by the idea of using 64 kbit/s channels to transfer music, which could have been a valuable service: ISDN lines could have been used to “stream” music. This application drove research, e.g. at Bell Laboratories, but music was never streamed over ISDN; it took another 10 years before music was finally streamed over the internet. Another driver for audio compression was the digitization of moving pictures.

Moving Picture Experts Group (MPEG)

In the 1980s there were efforts to digitize films and videos in a similar way to audio. Until then, video recording could only be carried out in analog form on video tape.

The Japanese engineer Hiroshi Yasuda of Nippon Telegraph and Telephone (NTT) and the Italian engineer Leonardo Chiariglione of Centro Studi e Laboratori Telecomunicazioni (CSELT) founded an organization called the Moving Picture Experts Group (MPEG). It met for the first time in May 1988. The aim was to define a standard for video and audio for storage on video CDs, a precursor to the DVD. This standard is called MPEG-1 and was first published in 1993.

When developing the image coding process, MPEG was able to build on a standard defined by the ITU-T in 1988 as H.261, which was already being used for video conferencing in the 1980s.

There were various methods for audio coding, which were referred to as MPEG Audio Layer 1, MPEG Audio Layer 2 and MPEG Audio Layer 3 (MP1, MP2 and MP3 for short).

Psychoacoustics

How is it possible to compress an audio signal? The methods for speech compression could not be applied, since they assume that the audio signal is always a speech signal. Other methods had to be used, and all of them depend on psychoacoustics.

The audible range for humans is from 20 Hz to 16 kHz. To quantize a wideband audio signal, it is necessary to sample at a rate of at least 40 ksamples/s with an accuracy of 16 bits to capture the entire dynamic range of hearing. For CD recordings, a sampling rate of 44,100 samples/s was selected. For stereo, this means 2 × 44,100 × 16 bit/s, i.e. 1.4112 Mbit/s. Such data rates are far too high to be stored alongside video data. The aim of good coding was always to achieve 128 kbit/s, for example to transport audio over digital telephone lines. This meant that the audio data had to be compressed by more than a factor of 10.
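The arithmetic can be checked in a few lines of Python:

```python
# CD bit rate and the compression factor needed to reach 128 kbit/s
channels, sample_rate, bits_per_sample = 2, 44_100, 16
cd_bitrate = channels * sample_rate * bits_per_sample  # bit/s
print(cd_bitrate)            # 1411200 -> 1.4112 Mbit/s
print(cd_bitrate / 128_000)  # ~11, the required compression factor
```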

There are two main elements that allow audio data to be compressed without the quality of the recording suffering (significantly). One element is the fact that different parts of the audible spectrum have different significance for hearing. The low to medium range, for example from 60 to 2000 Hz, is particularly sensitive and susceptible to interference. In higher frequency ranges, precise coding is less important. It is therefore worthwhile to divide the audio signal into subbands and encode them with different levels of precision, as the sketch below illustrates.
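The following Python sketch is a toy illustration of this idea; the band edges and bit allocations are made up for the example, and a real coder would use a proper filter bank rather than FFT masking:

```python
import numpy as np

def subband_codec(signal, rate, bands, bits):
    """Toy subband coder: split the spectrum into bands, quantize each
    band's samples with its own bit depth, then sum the bands again."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    rebuilt = np.zeros(len(signal))
    for (lo, hi), b in zip(bands, bits):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, len(signal))  # ideal bandpass
        peak = np.abs(band).max() or 1.0                   # scale factor
        levels = 2 ** (b - 1)
        band_q = np.round(band / peak * levels) / levels * peak  # b-bit quantizer
        rebuilt += band_q
    return rebuilt

# The sensitive low band gets 12 bits, the upper band only 4
rate = 44_100
t = np.arange(2048) / rate
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 9000 * t)
y = subband_codec(x, rate, bands=[(0, 4000), (4000, 16000)], bits=[12, 4])
print("max error:", np.abs(x - y).max())
```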

The second essential element comes from the field of psychoacoustics. Due to the anatomy of the hearing organ, it is not possible to hear all the nuances that are visible in a spectral analysis of the signal. For example, if a loud tone is played at 1 kHz, as in the following figure, less intense tones below and above this frequency cannot be heard if they fall below a so-called masking threshold: they are “masked”. We all know this effect from operating a vacuum cleaner or a hair dryer. When a loud machine creates a lot of noise, we can no longer hear music on the radio, even though the music signal is actually still present and would also be detectable in a digital signal.

A similar effect also occurs in the time domain. If a loud signal is played for a short time, quieter sounds just before and just after this event can no longer be heard. These effects are called pre-masking and post-masking.

Masking effects of audio signals (in frequency and time)

These masking effects can be exploited in addition to the subband effects. Taking them into account, it is possible to create (compressed) signals that differ significantly from the original in time and frequency but do not differ from it in perception.
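The frequency-masking idea can be sketched in a deliberately simplified model. The 10 dB-per-octave threshold slope below is an assumption for illustration only; real masking curves are asymmetric and are computed on the Bark scale:

```python
import numpy as np

def masked(freqs_hz, levels_db, masker_hz, masker_db):
    """Toy masking check: a component is inaudible (True) if it lies below
    a threshold falling off ~10 dB per octave away from the loud masker."""
    octaves = np.abs(np.log2(freqs_hz / masker_hz))
    threshold = masker_db - 10.0 * octaves  # assumed slope, illustration only
    return levels_db < threshold

freqs = np.array([500.0, 900.0, 1100.0, 4000.0])
levels = np.array([40.0, 55.0, 50.0, 55.0])
# Loud 70 dB tone at 1 kHz: the nearby quieter tones are masked, but the
# 55 dB tone two octaves away at 4 kHz is still audible.
print(masked(freqs, levels, masker_hz=1000.0, masker_db=70.0))
# -> [ True  True  True False]
```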

MP3

In the 1980s, research into audio coding involving psychoacoustic effects was carried out at many renowned research institutions around the world. Four groups worked on the standardization of MP3, which was to achieve high compression with a coding algorithm of low complexity:

  • ASPEC (Adaptive Spectral Perceptual Entropy Coding):
    Fraunhofer Gesellschaft (Germany), AT&T Bell Labs, France Telecom and Thomson-Brandt (France)
  • MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing):
    Matsushita (Japan), CCETT (France), ITT (Germany) and Philips (Netherlands)
  • ATRAC (Adaptive Transform Acoustic Coding):
    Fujitsu, JVC, NEC and Sony (all Japan)
  • SB-ADPCM (Subband Adaptive Differential Pulse Code Modulation):
    NTT (Japan)

In 1991 these procedures were tested and evaluated. MUSICAM proved to be the most robust and very efficient; it therefore became the basis for subband coding.

MP1, based on MUSICAM, was the first audio compression algorithm and was developed by Philips. Philips was developing the so-called Digital Compact Cassette (DCC), a successor to its famous Compact Cassette but with better audio quality. For this, Philips engineers divided the audio signal into 32 subbands, which allowed the audio signal to be reduced to 384 kbit/s with almost unchanged audio quality. Philips called the process PASC (Precision Adaptive Sub-band Coding). Unfortunately, the DCC was a flop: it had too many disadvantages compared to the MP3 players that soon followed, and magnetic tape as data storage had become obsolete.

MP2 was also based on MUSICAM. However, it was optimized to achieve bit rates of 256 kbit/s (128 kbit/s per channel). The main target for MP2 was the emerging digital radio standard Digital Audio Broadcasting (DAB). However, MP2 was also used for Digital Video Broadcasting (DVB).

A working group of six researchers from CSELT, AT&T Bell Labs and Fraunhofer adopted MUSICAM’s algorithms and incorporated essential elements of their ASPEC project into the work. Karlheinz Brandenburg from the Fraunhofer Institute was particularly active here. He had already worked on audio coding at the University of Erlangen-Nuremberg for over 10 years, and in 1989 he received a patent for audio coding that was later incorporated into the final MP3 standard. MP3 achieved the same quality at a bit rate of 128 kbit/s as MP2 at 192 kbit/s. In 1994 MP3 was released as a standard.

The source code for MP3 was published so that MP3 could be encoded and decoded on various computer systems. With PCs becoming more and more powerful, it became possible to decode, i.e. play, MP3 data in real time. This made MP3 interesting for storing and playing music on digital media. Only encoding had to be licensed; decoding was free. Soon, software manufacturers acquired licenses and offered MP3 encoder programs on the market.

MP3 Player

In 1998, the MPMan F10 from Korea was the first portable MP3 player with flash memory, although with only 32 MB. This was the beginning of a boom in MP3 players. From the turn of the century, MP3 changed the music business, as usage rights were circumvented through (illegal) MP3 distribution on the Internet.

The Siemens SL45, the first mobile phone with an MP3 player, appeared in 2001. As discussed above, this was also the first phone with a flash card. Unfortunately, this device was ahead of its time, as the MP3 boom had not yet fully taken off, and as a “business phone” it was also too expensive for the young target group. MP3 phones became more widespread around 2005, when Sony Ericsson released the W800 Walkman phone, which could already store a large number of tracks on its 512 MB of flash memory.

FM Receiver

Since the introduction of the transistor, FM receivers had been small enough to be called “pocket radios”. In the 1980s, engineers at Philips in Hamburg worked on a tiny IC that combined virtually all of the FM functionality, from antenna to speaker, in just a few mm² of silicon. Only an external tuning capacitor was needed for frequency control. Philips sold millions of these chips, primarily for cheap clock radios built in Japan. Later, chips were developed in which the frequency could be set by an external controller. This made FM receiver chips interesting for use in cell phones.

In 2001, the Nokia 8310 became the first mobile phone with FM radio reception.

Games

At the end of the 1990s, cell phone processors became increasingly powerful, so it was just a matter of time before people started programming games for cell phones. This was accelerated by the fact that, as described above, the displays now allowed simple graphics.

Nokia was again a pioneer here. In 1997 the Nokia 6110 was released, which included the game Snake. Snake was originally an early computer game from the 1970s that was played on consoles at the time; later it also ran on some home computers. On the mobile phone, Snake now provided leisure entertainment for users (not just young ones). However, such games were hard-coded, and no additional games could be purchased. This only changed with WAP-capable phones: games could now be both purchased and downloaded. This is how other games like PacMan, Space Invaders and Tetris came to mobile phones.

For downloading games, it also became important that mobile phones used appropriate runtime environments or operating systems. Early examples were Java ME and Symbian.

Snake on a Nokia Phone