Monday, 28 February 2011

System Development Life Cycle

A system life cycle refers to the stages followed to create a system that is exactly what a customer desires. This life cycle is used within computing (in software development, for example) as well as outside of computing for large-scale projects. Following the life cycle makes it less likely that a project will fail or fall short of requirements, which matters because of the consequences a system failure could have. The phases of the system life cycle are detailed below.

Phase One: Analysis

  • Finding out the purpose of the project and its final objectives, thereby forming a requirements specification.
  • Research (e.g. surveys, reports and interviews).
  • Gathering any other information required in order to know exactly what is needed.

Phase Two: Design

  • How will the system be created?
  • Making sure that the system will meet its objectives before it is created.
  • Final specification and design

Phase Three: Implementation

  • Create the system
  • Installing and setting up the system
  • Preparing the system for use
  • Training people to use the system
  • Creating instructions so people can use the system

Phase Four: Testing

  • Test parts of the system and the system overall to make sure all elements function correctly individually, and that they function correctly together.
  • Make sure that those who have been trained can use the system.

Phase Five: Evaluation

  • Is it the right system for the problem?
  • Is it effective?
  • What can be improved next time?

After these phases:

  • Maintenance in the form of updating the system to fix problems and make changes so it suits the user's needs.
  • Second iteration of the cycle.

Tuesday, 8 February 2011

Types of Sound, Conversion, Storage and Transmission

Sound is a type of energy that causes particles to vibrate as it travels through a medium (it cannot travel through a vacuum, which contains no particles).  Sound can be converted to an electrical counterpart, a signal, using a transducer such as a microphone, which converts one type of energy into another.  When transmitted as energy the data is analogue, but it requires conversion to digital data before it can be stored in a computer's memory.  Analogue data varies continuously between an infinite number of values, whilst digital data is discrete and can only take fixed values.
  In order to be converted to digital data, the analogue sound is sampled at periodic intervals (twice every second, for example), using a process called Pulse Amplitude Modulation.  Some quality is lost, however, because not all of the data from the analogue sound is retained, and the values (e.g. loudness) of the samples are rounded using a process called quantisation and stored in binary.  The resulting numbers represent Pulse Code Modulation, and each value is stored in sequence in a binary file.  When sampling, two factors are considered: the sampling rate, which is the number of times the audio is sampled per second, and the sampling resolution, which is the number of binary digits allocated to each sample to represent the range of possible values.
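The sampling and quantisation steps above can be pictured with a small sketch. The sample rate, resolution and sine-wave "analogue" signal here are all illustrative values, not taken from any real audio standard.

```python
# A minimal sketch of sampling and quantisation: an analogue signal
# is measured at periodic intervals, and each measurement is rounded
# to one of a fixed set of levels and stored in binary.
import math

sample_rate = 8     # samples per second (illustrative)
resolution = 3      # bits per sample, so 2**3 = 8 discrete levels
levels = 2 ** resolution

def quantise(value):
    # Map an analogue value in the range -1.0..1.0 to the nearest
    # of the 8 discrete levels (this rounding is where quality is lost).
    scaled = (value + 1) / 2 * (levels - 1)
    return round(scaled)

# Sample one second of a 2 Hz sine wave at periodic intervals.
samples = [quantise(math.sin(2 * math.pi * 2 * t / sample_rate))
           for t in range(sample_rate)]
print(samples)                              # Pulse Code Modulation values
print([format(s, "03b") for s in samples])  # each value as 3-bit binary
```

The sequence of binary values printed at the end is what would be written to the file, one sample after another.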
  Frequency refers to the number of complete waves per second, and is measured in Hz, kHz (thousands of Hz) and MHz (millions of Hz).  According to Harry Nyquist, a famous American electrical engineer, an audio recording must be sampled at a rate of at least twice the highest frequency in the recording in order for it to be reconstructed accurately.  To calculate how large the file will be after quantisation when the minimum number of samples is taken, multiply the frequency by 2, then by the resolution, and then by the duration in seconds.  This gives the size in bits, which can be converted to bytes, kilobytes and megabytes.
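The file-size calculation can be worked through in code. The frequency, resolution and duration below are illustrative values chosen for the example:

```python
# Worked example of the minimum file-size calculation: sample at the
# Nyquist minimum (twice the frequency), multiply by the resolution
# and the duration to get the size in bits, then convert to bytes.
frequency = 440     # Hz (illustrative)
resolution = 16     # bits per sample (illustrative)
duration = 60       # seconds of audio

sample_rate = frequency * 2                 # Nyquist minimum: 880 samples/s
bits = sample_rate * resolution * duration  # 880 * 16 * 60
bytes_ = bits // 8                          # 8 bits per byte

print(bits)    # 844800 bits for one minute
print(bytes_)  # 105600 bytes, i.e. about 105.6 kilobytes
```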
  In order to play the stored music through speakers, the digital data must be converted back to analogue.  Because only samples are available, the computer is required to guess the missing values by looking at the values they lie between.  Graphically this resembles a line of best fit drawn between the discrete steps.  This often results in the audio sounding slightly different from when it was originally recorded.
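This "guessing between samples" can be sketched as linear interpolation, one simple way of picturing the best-fit line described above (real digital-to-analogue converters use more sophisticated smoothing):

```python
# A rough sketch of estimating missing values between two stored
# samples, by drawing a straight line between them and reading off
# evenly spaced points.
def interpolate(a, b, steps):
    # Guess `steps` evenly spaced values between samples a and b.
    return [a + (b - a) * i / (steps + 1) for i in range(1, steps + 1)]

print(interpolate(0, 4, 3))  # [1.0, 2.0, 3.0]
```

The guessed values fill in the gaps, but they are only estimates, which is why the reconstructed audio can differ from the original.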
  On computers, sound may be stored in a variety of file formats which vary in quality.  The most common is WAV, which requires on average 2.5 MB of data per minute of audio.  MPEG formats, on the other hand, discard data for frequencies which the human ear cannot perceive.  As such they can be as little as 10% of the size of the original audio recording.  When stored digitally it is also much easier to edit and alter recordings, for example by adding effects.  This makes changing music much easier than it was before digital techniques, when each audio track was recorded on a separate tape and a mistake meant the recording had to be made again.
  Sound may also be synthesised using MIDI (Musical Instrument Digital Interface), which gives the computer instructions about exactly what sound to make, including factors such as pitch and loudness.  This is similar to vector graphics, in that the exact instructions used to create the file must be given.  As such this technology cannot be used to make a copy of an existing audio recording.  The file size is much smaller, however, because it is not the audio itself that is stored, but the instructions used to create it.
  One last way that audio can be used on computers is through streaming technology.  This involves buffering sound sent in packets over a network, usually the internet.  Parts of the recording are sent as small numbers of bits and are discarded after being played.  Although this makes the music harder to copy, which helps protect copyright, it cannot be stored on a hard disk or listened to when the computer cannot contact the server.  Streaming is also affected by bandwidth: if only a small number of bits can be sent per second, the audio will often need to stop, wait until more data has been received, and then resume.

Tuesday, 1 February 2011

Images, Graphics and Compression Techniques

Images and graphics on computers have traditionally been, and generally still are, stored as bitmaps: arrays of binary digits in which, depending on the colour depth, a certain colour is mapped to each pixel.  A pixel is the smallest addressable area of colour in an image, and pixels make up bitmaps.
  Colour depth refers to the number of bits used per pixel to store the colour.  When a greater range of colours is available, the bitmap allots a larger binary number to each pixel, giving a larger number of combinations of binary digits.  For example, with a colour depth of 3 bits, 8 colours are available, because there are 8 possible combinations (000 to 111).  The number of possible combinations can be found quickly by raising 2 to the power of the number of binary digits allotted to each pixel.  Therefore an 8-bit colour depth gives 256 colours, and a 24-bit colour depth gives 2 to the power of 24 (16,777,216) colours.  The most commonly used colour depth is 24-bit, which is known as true colour because a colour depth of 24 bits or higher cannot be distinguished from reality by the naked eye.  With 24-bit colour, 8 bits are allocated to red, 8 to green and 8 to blue, and the combination of the three selected shades forms the final colour.
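The "2 to the power of the bits per pixel" rule above can be checked directly:

```python
# Number of colours available at each colour depth, calculated as
# 2 raised to the power of the bits allotted to each pixel.
for depth in (1, 3, 8, 24):
    print(depth, "bits ->", 2 ** depth, "colours")
# 1 bit -> 2, 3 bits -> 8, 8 bits -> 256, 24 bits -> 16777216
```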
  Uncompressed bitmaps use a large amount of memory, however, as the total number of bits equals the number of bits per pixel multiplied by the number of pixels.  For example, a 24-bit (true colour) image taken on a 14 megapixel camera contains 14 million pixels.  Multiplying this by 24 gives 336,000,000 bits, which is 42,000,000 bytes, or 42 megabytes: the largest possible size of a 24-bit 14 megapixel image.  As such, compression techniques are often used to reduce an image's file size.  Compression may be either lossy, in which the quality of the image decreases, or lossless, in which no data is lost.  An example of lossy compression is saving an image as a JPEG, a format which removes colour detail that is difficult for the human eye to see, whilst Run Length Encoding is a type of lossless compression which records runs of the same colour within the image, so that the colour does not need to be specified separately for every pixel in that run.
  Vector graphics are another method of storing images.  Instead of storing a bitmap, they store information about the shapes in the image and other properties such as colour, so that the image can be recreated each time it is accessed.  To create a vector graphic, however, the instructions used to create the image must have been input so that they could be recorded.  These instructions are stored in what is known as the drawing list and contain data such as the shape type, coordinates and area.  This means that images taken with a digital camera, for example, cannot be turned into vector graphics, because these properties are not recorded.  Vector graphics would also be unsuitable for images as complex as photographs, due to their level of detail.
  Because instructions are stored instead of data about each pixel, a vector graphic has a much smaller file size (sometimes up to a million times smaller), which means it takes up much less hard disk space and can be loaded more quickly.  Vector graphics also do not become pixelated like bitmap images when zoomed in on: the geometric data stored allows the software to redraw the graphic precisely for any level of zoom, whilst bitmap images cannot be scaled in the same way because they contain a pre-defined number of pixels.
  Lastly, resolution refers to an image's dimensions in pixels in relation to the space it occupies.  More pixels in an image result in a higher resolution, and as such a higher-quality image.  A typical resolution is 1024 x 768, but since no physical area is specified, such an image could appear pixelated (blocky and low quality) if projected onto a very large space such as the side of a building, or very high quality in a small area such as a 19-inch visual display unit (monitor).  A better way of measuring resolution is Dots Per Inch, which states how many pixels there are per inch, meaning that the quality of the image in a given area can be determined.
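The point about the same pixel dimensions looking sharp or blocky depending on physical size can be made with a quick Dots Per Inch calculation. The physical widths below are illustrative guesses, not measured values:

```python
# The same 1024-pixel-wide image shown at two physical sizes.
# Dots Per Inch = pixels across / inches across.
width_px = 1024

monitor_inches = 15     # roughly the width of a 19-inch display (illustrative)
building_inches = 600   # a 50-foot-wide projection (illustrative)

print(width_px / monitor_inches)   # about 68 DPI: looks sharp
print(width_px / building_inches)  # under 2 DPI: looks pixelated
```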