
Chamber theatre is a method of adapting literary works to the stage using a maximal amount of the work's original text and often minimal and suggestive settings. In chamber theatre, narration is included in the performed text and the narrator might be played by multiple actors. Professor Robert S. Breen (1909-1991) introduced "Chamber Theater" to his oral interpretation classes at Northwestern University in 1947. Northwestern's Professor of Performance Studies Frank Galati, who studied Chamber Theater with Dr. Breen, has directed highly acclaimed chamber theatre productions for the Goodman Theater and Steppenwolf Theater Companies in Chicago. Galati's chamber theatre adaptation of John Steinbeck's The Grapes of Wrath won two Tony Awards on Broadway. One of the most famous and elaborate examples of chamber theatre is David Edgar's The Life and Adventures of Nicholas Nickleby, in which Charles Dickens's characters narrate themselves in the third person. Set pieces are carried in and taken away during the performance, rather than between scenes, and objects may be represented in a mimetic manner. Another example is Matthew Spangler's stage adaptation of Khaled Hosseini's novel The Kite Runner.
There are many different processors on the market. However, there are only a few that you should consider purchasing. Whether you're buying a computer off the shelf, building it from scratch, or upgrading your CPU, you must put some time and thought into which processor to buy. The choice you make today will affect your computer's speed and functionality for years to come.
1. There are two primary manufacturers of computer microprocessors. Intel and Advanced Micro Devices (AMD) lead the market in terms of speed and quality. Intel's desktop CPUs include Celeron, Pentium, and Core. AMD's desktop processors include Sempron, Athlon, and Phenom. Intel makes Celeron M, Pentium M, and Core mobile processors for notebooks. AMD makes mobile versions of its Sempron and Athlon, as well as the Turion mobile processor which comes in Ultra and Dual-Core versions. Both companies make both single-core and multi-core processors.
2. Each processor has a clock speed which is measured in gigahertz (GHz). Also, a processor has a front side bus which connects it with the system's random access memory (RAM). CPUs also typically have two or three levels of cache. Cache is a type of fast memory which serves as a buffer between RAM and the processor. The processor's socket type determines which motherboard it can be installed on.
3. A microprocessor is a silicon chip containing millions of microscopic transistors. This chip functions as the computer's brain. It processes the instructions or operations contained within executable computer programs. Instead of taking instructions directly off of the hard drive, the processor takes its instructions from memory. This greatly increases the computer's speed.
4. If you're thinking about upgrading your processor yourself, you must check your motherboard specs first. The CPU you install must have the same socket size as the slot on the motherboard. Also, when you install a new processor, you may need to install a heat sink and fan. This is because faster processors produce more heat than slower ones. If you fail to protect your new CPU from this heat, you may end up replacing the processor.
5. When it comes to processors, size matters. Whether you're buying a new computer or upgrading your old one, you must get the fastest processor you can afford. This is because the processor will become obsolete very quickly. Choosing a 3.6 GHz processor over a 2 GHz one today can buy you several years of cheap computing time. Also check the speed of the front side bus (FSB) when purchasing your new computer or CPU. A front side bus of 800 MHz or greater is essential for fast processing speeds. The processor's cache is also important. Make sure it has at least 1 MB of last-level cache if your computing needs are average. If you're an extreme gamer or if you run intensive graphics programs, get the processor with the largest cache that fits your budget. There can be hundreds of dollars' difference between the cheapest processors and the most expensive ones. However, investing just a little extra cash can get you a much better processor.
6. Getting a processor with a dual, triple, or quad core can make a significant difference in the processing power of your computer. It's like having two, three, or four separate processors installed on your computer at one time. These processors work together to make your computer multitask faster and with greater efficiency. Getting a CPU with a larger front side bus can enhance the processor's ability to communicate with RAM, which will increase your computer's overall speed.
Functions of CPU Processor
A CPU processor, or central processing unit, controls the functions of most electronic products. The CPU accepts the input data, processes the information and sends it to the component that is in charge of executing the action. CPUs are also known as microprocessors and are at the center of any computer system. Although CPUs are most often thought of as a computer chip, they can also be found in many other electronic devices including cell phones, handheld devices, microwaves, television sets and toys.
1. The CPU evolved from miniature transistors and integrated circuits, which were developed in the early 1960s by IBM and other top technology companies of the time. By the early 1970s, integrated circuits were being manufactured commercially, and engineers took that technology and developed the CPU. Harnessing the switching abilities of integrated circuits, engineers added processing ability and memory. Combined, these elements became the core of the CPU. By the end of the 1970s, technology had reached the point where CPUs could be commercially produced and were the size of a fingernail.
During the 1980s, CPUs became a standard component in consumer electronics. They could be found in cameras, television sets and pocket calculators. By the next decade, the small size and cheap production cost of the CPU allowed computers to cross over from industry to the home. Today, engineers continue to fine-tune CPUs, making them smaller and more powerful.
2. CPUs are made up of six key components, which work in conjunction to process and execute commands. The control unit is the brain of the CPU. This part receives the input data and decides where to send the processed information. The instruction cache is where the control unit's instructions are stored. Specific instruction data is loaded into the CPU when it is manufactured. The pre-fetch unit is the information portal. Input data goes through the pre-fetch unit, which stores a copy of the data before sending it on to be processed by the control unit. The decode unit translates the input instruction into binary code, which is then sent on to the ALU. The arithmetic logic unit, or ALU, receives the code from the decode unit and chooses the action needed to carry out the command. RAM and ROM serve as the CPU's memory cache. Here, all information that has been sent, received or preloaded is stored. Sections of the RAM and ROM can be accessed by the system user.
3. There is a series of steps that a CPU performs to execute a command. Each command is handled individually, and a CPU can process multiple commands in a matter of seconds. The more powerful the CPU, the faster the commands are processed. (A small code sketch of this cycle follows the list below.)
1. A command is issued by the system user using an input device such as a keyboard or mouse.
2. The command is sent to the prefetch unit. The unit accesses the preloaded CPU memory to identify the command and sends it to the command unit.
3. The command unit determines what steps come next. This data is passed on to the decode unit.
4. The decode unit translates the data into binary code and sends it to the ALU.
5. The ALU changes the raw data into an actual command.
6. The ALU sends a copy of the command to the RAM or ROM before sending it back to the command unit.
7. The command unit sends the code to the part of the system that will actually perform the action.
8. The action is executed and the result is sent back to the user.
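As a rough sketch of this fetch-decode-execute cycle, here is a toy CPU loop in C. Everything in it is invented for illustration (the 8-bit instruction format, the opcode names, the tiny program); a real CPU implements these steps in hardware:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 8-bit instruction: high nibble = opcode, low nibble = operand. */
    enum { OP_HALT = 0x0, OP_LOAD = 0x1, OP_ADD = 0x2, OP_STORE = 0x3 };

    int main(void) {
        uint8_t memory[16] = { 0x15, 0x23, 0x3A, 0x00 }; /* LOAD 5; ADD 3; STORE to 10; HALT */
        uint8_t accumulator = 0, data[16] = {0};
        int pc = 0;                              /* program counter */

        for (;;) {
            uint8_t instr = memory[pc++];        /* FETCH: read the next instruction */
            uint8_t opcode = instr >> 4;         /* DECODE: split into opcode... */
            uint8_t operand = instr & 0x0F;      /* ...and operand */
            switch (opcode) {                    /* EXECUTE the chosen action */
                case OP_LOAD:  accumulator = operand;        break;
                case OP_ADD:   accumulator += operand;       break;
                case OP_STORE: data[operand] = accumulator;  break; /* write result back */
                case OP_HALT:  printf("result: %d\n", data[10]); return 0; /* prints 8 */
            }
        }
    }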
4. There are different types of CPUs; each type comes with varying degrees of speed, memory and preset instructions. The larger the CPU, the faster it can process, store and execute commands. A single-core CPU is the smallest unit available. It is usually found in smaller appliances that only perform a simple set of actions, such as a remote control or toy. Dual-core CPUs contain two command units and enough power and memory for most personal computers. Multi-core CPUs contain multiple command units. They are mainly used by large industrial electronic devices, servers, and network workstations.
5. CPU size refers to the unit's power to perform tasks and the amount of memory space it contains. CPU size is measured in binary digits, called bits. Originally, CPUs contained 4 bits, but that has since evolved into 8 bits. 8-bit CPUs are the smallest and slowest components available and are used mostly in toys or household appliances.
16-bit and 32-bit CPUs have become the standard sizes and can be found in personal computers, laptops, cell phones and other electronic devices that can perform a variety of tasks. 64-bit CPUs are becoming increasingly popular in high-end personal computers and laptops. There are also larger CPUs, which are usually used for industrial purposes.
From AT to BTX:
Motherboard Form Factors
You've probably heard the term motherboard a thousand times, but do you know what it really means and how it relates to the rest of your computer?
The form factor of a motherboard determines the specifications for its general shape and size. It also specifies what type of case and power supply will be supported, the placement of mounting holes, and the physical layout and organization of the board. Form factor is especially important if you build your own computer systems and need to ensure that you purchase the correct case and components.
The Succession of Motherboard Form Factors
AT & Baby AT
Prior to 1997, IBM computers used large motherboards. After that, however, the size of the motherboard was reduced, and boards using the AT (Advanced Technology) form factor were released. The AT form factor is found in older computers (386 class or earlier). Some of the problems with this form factor mainly arose from the physical size of the board, which is 12" wide, often causing the board to overlap with space required for the drive bays.
Following the AT form factor, the Baby AT form factor was introduced. With Baby AT, the width of the motherboard was decreased from 12" to 8.5", limiting the problems associated with overlapping the drive bays. Baby AT became popular and was designed for peripheral devices — such as the keyboard, mouse, and video — to be contained on circuit boards that were connected by way of expansion slots on the motherboard.
Baby AT was not without problems, however. Computer memory itself advanced, and the Baby AT form factor had memory sockets at the front of the motherboard. As processors became larger, the Baby AT form factor did not allow enough space to use a combination of processor, heatsink, and fan. The ATX form factor was then designed to overcome these issues.
ATX
With the need for a more integrated form factor that defined standard locations for the keyboard, mouse, I/O, and video connectors, the ATX form factor was introduced in the mid-1990s. The ATX form factor brought about many changes in the computer. Since the expansion slots were put onto separate riser cards that plugged into the motherboard, the overall size of the computer and its case was reduced. The ATX form factor specified changes to the motherboard, along with the case and power supply. Some of the design specification improvements of the ATX form factor included a single 20-pin connector for the power supply, a power supply that blows air into the case instead of out for better air flow, less overlap between the motherboard and drive bays, and integrated I/O port connectors soldered directly onto the motherboard. The ATX form factor was an overall better design for upgrading.
micro-ATX
MicroATX followed the ATX form factor and offered the same benefits but improved the overall system design costs through a reduction in the physical size of the motherboard. This was done by reducing the number of I/O slots supported on the board. The microATX form factor also provided more I/O space at the rear and reduced emissions from using integrated I/O connectors.
LPX
While ATX is the most well-known and used form factor, there are also non-standard proprietary form factors which fall under the names LPX and Mini-LPX. The LPX form factor is found in low-profile cases (a desktop model as opposed to a tower or mini-tower) with a riser card arrangement for expansion cards, where expansion boards run parallel to the motherboard. While this allows for smaller cases, it also limits the number of expansion slots available. Most LPX motherboards have sound and video integrated onto the motherboard. While this can make for a low-cost and space-saving product, such boards are generally difficult to repair due to a lack of space and overall non-standardization. The LPX form factor is not suited to upgrading and offers poor cooling.
NLX
Boards based on the NLX form factor hit the market in the late 1990s. This "updated LPX" form factor offered support for larger memory modules, tower cases, AGP video support and reduced cable length. In addition, motherboards are easier to remove. The NLX form factor, unlike LPX, is an actual standard, which means there are more component options for upgrading and repair.
Many systems that were formerly designed to fit the LPX form factor are moving over to NLX. The NLX form factor is well-suited to mass-market retail PCs.
BTX
The BTX, or Balanced Technology Extended, form factor, unlike its predecessors, is not an evolution of a previous form factor but a total break away from the popular and dominating ATX form factor. BTX was developed to take advantage of technologies such as Serial ATA, USB 2.0, and PCI Express. Changes to the layout with the BTX form factor include better component placement for back-panel I/O controllers, and BTX boards can be smaller than microATX systems. The BTX form factor also provides the industry a push toward tower-size systems with an increased number of system slots.
One of the most talked-about features of the BTX form factor is its in-line airflow. In the BTX form factor, the memory slots and expansion slots have switched places, allowing the main components (processor, chipset, and graphics controller) to use the same airflow, which reduces the number of fans needed in the system and thereby reduces noise. System-level acoustics are further improved by reduced air turbulence within the in-line airflow system.
Initially, three motherboards are offered in the BTX form factor. The first, picoBTX, offers four mounting holes and one expansion slot; microBTX holds seven mounting holes and four expansion slots; and lastly, regularBTX offers 10 mounting holes and seven expansion slots. The new BTX form factor design is incompatible with ATX, with the exception of being able to use an ATX power supply with BTX boards.
Today the industry accepts the ATX form factor as the standard; however, legacy AT systems are still widely in use. Since the BTX form factor design is incompatible with ATX, only time will tell if it will overtake ATX as the industry standard.
Did You Know...
ATX and Baby AT boards are approximately the same size, but the ATX board is rotated 90 degrees within the case to allow for easier access to components.
The motherboard is a vital component responsible for the smooth processing of data on a computer. Even a little damage to the motherboard can damage the entire system, because the motherboard is the place where all the hardware components attach. From the graphics card to the RAM, hard disk, and optical drive, everything is combined into one unit through the ports on the motherboard.
Many motherboard manufacturers offer products that come complete with many additional features that increase compatibility with a wide range of hardware.
Motherboard Troubleshooting
A.)GENERAL TESTING TIPS.
Before you begin, download a few of our Diagnostic Software Tools to pinpoint possible problem areas in your PC. Ideally, troubleshooting is best accomplished with duplicate parts from a used computer, enabling "test" swapping of peripheral devices/cards/chips/cables. In general, it is best to troubleshoot on systems that have been leaned-out. Remove unnecessary peripherals (sound card, modem, hard disk, etc.) to check the unworking device in as much isolation as possible. Also, when swapping devices, don't forget the power supply. Inadequate power (watts and volts) can cause intermittent problems at all levels, but especially with UARTs and HDs.
Inspect the motherboard for loose components. A loose or missing CPU, BIOS chip, crystal oscillator, or chipset chip will cause the motherboard not to function. Also check for loose or missing jumper caps, and missing or loose memory chips (cache and SIMMs or DIMMs). To possibly save you hours of frustration, I'll mention this here: check the BIOS Setup settings. 60% of the time this is the cause of many system failures. A quick fix is to restore the BIOS Defaults. Next, eliminate the possibility of interference by a bad or improperly set up I/O card by removing all cards except the video adapter. The system should at least power up and wait for a drive time-out. Insert the cards back into the system one at a time until the problem happens again. When the system does nothing, the problem will be with the last expansion card that was put in.
Did you recently 'flash' your computer's BIOS, and needed to change a jumper to do so? Perhaps you left the jumper in the 'flash' position, which could cause the CMOS to be erased.
If you require a CMOS reset and don't have the proper jumper settings, other methods exist. Our Help Desk receives so many requests on Clearing BIOS/CMOS Passwords that we've put together a standard text outlining the various solutions.
Switching power supplies (the kind most commonly used in PCs) cannot be adequately field-tested with volt/ohm meters. Remember: for most switching power supplies to work, a floppy drive and at least 1 meg of memory must be present on the motherboard. If the necessary components are present on the motherboard and there is no power:
1) check the power cable to the wall and that the wall socket is working. (You'd be surprised!)
2) swap power supply with one that is known to work.
3) if the system still doesn't work, check for fuses on the motherboard. If there are none, you must replace the motherboard.
Peripherals are any devices that are connected to the motherboard, including I/O boards, RS232/UART devices (including mice and modems), floppies and fixed disks, video cards, etc. On modern boards, many peripherals are integrated into the motherboard, meaning that if one peripheral fails, the motherboard effectively has to be replaced.* On older boards, peripherals were added via daughter boards.
*Some MB CMOS setups allow for disabling on-board devices, which may be an option for not replacing the motherboard -- though, in practicality, some peripheral boards can cost as much as, if not more than, the motherboard. Also, failure of on-board devices may signal a cascading failure to other components.
1. New peripheral?
a) Check the MB BIOS documentation/setup to ensure that the BIOS supports the device and that the MB is correctly configured for the device.
(Note: when in doubt, reset CMOS to DEFAULT VALUES. These are optimized for the most generalized settings that avoid some of the conflicts that result from improper 'tweaking'.)
b) Check cable attachments & orientation (don't just look, reattach!)
c) If that doesn't work, double-check jumper/PnP (including software and/or MB BIOS set) settings on the device.
d) If that doesn't work, try another peripheral of same brand & model that is known to work.
e) If the swap peripheral works, the original peripheral is most likely the problem. (You can verify this by testing the non-working peripheral on a test MB of the same make & bios.)
f) If the swap peripheral doesn't work on the MB, verify the functionality of the first peripheral on a test machine. If the first peripheral works on another machine AND IF the set-up of the motherboard BIOS is verified AND IF all potentially conflicting peripherals have been removed OR verified to not be in conflict, the motherboard is suspect. (However, see #D below.)
g) At this point, recheck MB or BIOS documentation to see if there are known bugs with the peripheral AND to verify any MB or peripheral jumper settings that are necessary for the particular peripheral to work. Also, try a different peripheral of the same kind but a different make to see if it works. If it does not, swap the motherboard. (However, see #D below.)
2. Peripheral that worked before?
a) If the hood has been opened (or even if it has not), check the orientation and/or seating of the cables. Cables sometimes 'shake' loose or are accidentally pulled out by end-users, who then misalign or do not reattach them.
b) If that doesn't work, try the peripheral in another machine of the same make & bios that is known to work. If the peripheral still doesn't work, the peripheral is most likely the problem. (This can be verified by swapping-in a working peripheral of the same make and model AND that is configured the same as the one that is not working. If it works, then the first peripheral is the problem.)
c) If the peripheral works on another machine, double-check other peripherals and/or potential conflicts on the MB, including the power supply. If none can be found, suspect the MB.
d) At this point, recheck MB or BIOS documentation to see if there are known bugs with the peripheral AND to verify any jumper settings that might be necessary for the particular peripheral. Also, try another peripheral of the same kind but a different make to see if it works. If not, swap the motherboard!
E.)OTHER INDICATIONS OF A PROBLEM MOTHERBOARD.
1. CLOCK that won't keep correct time. >>Be sure to check/change the battery.
2. CMOS that won't hold configuration information. >>Again, check/change the battery.
Note about batteries and CMOS: in theory, CMOS should retain configuration information even if the system battery is removed or dies. In practice, some systems rely on the battery to hold this information. On these systems, a machine that is not powered-up for a week or two may report improper BIOS configuration. To check this kind of system, change the battery, power-up and run the system for several hours. If the CMOS is working, the information should be retained with the system off for more than 24 hours.
F.)BAD MOTHERBOARD OR OBSOLETE BIOS?
1. If the motherboard cannot configure to a particular peripheral, don't automatically assume a bad motherboard, even if the peripheral checks out on another machine -- especially if the other machine has a different BIOS revision. Check with the board manufacturer to see if a BIOS upgrade is available. Many BIOS upgrades can be made right on the MB with a FLASH RAM program provided by the board maker. See our BIOS page for more information.
Processor Types
ARM 1 (v1)
This was the very first ARM processor. Actually, when it was first manufactured in April 1985, it was the very first commercial RISC processor. Ever.
As a testament to the design team, it was "working silicon" in its first incarnation, it exceeded its design goals, and it used less than 25,000 transistors.
The ARM 1 was used in a few evaluation systems on the BBC micro (Brazil - BBC interfaced ARM), and a PC machine (Springboard - PC interfaced ARM).
It is believed a large proportion of Arthur was developed on the Brazil hardware.
In essence, it is very similar to an ARM 2 - the differences being that R8 and R9 are not banked in IRQ mode, there's no multiply instruction, no LDR/STR with register-specified shifts, and no co-processor gubbins.
ARM 2 (v2)
Experience with the ARM 1 suggested improvements that could be made. Such additions as the MUL and MLA instructions allowed for real-time digital signal processing. Back then, it was to aid in generating sounds. Who could have predicted exactly how suitable to DSP the ARM would be, some fifteen years later?
In 1985, Acorn hit hard times which led to it being taken over by Olivetti. It took two years from the arrival of the ARM to the launch of a computer based upon it...
...those were the days my friend, we thought they'd never end.
When the first ARM-based machines rolled out, Acorn could gladly announce to the world that they offered the fastest RISC processor around. Indeed, the ARM processor kicked ass across the computing league tables, and for a long time was right up there in the 'fastest processors' listings. But Acorn faced numerous challenges. The computer market was in disarray, with some people backing IBM's PC, some the Amiga, and all sorts of little itty-bitty things. Then Acorn go and launch a machine offering Arthur (which was about as nice as the first release of Windows) which had no user base, precious little software, and not much third party support. But they succeeded.
The ARM 2 processor was the first to be used within the RISC OS platform, in the A305, A310, and A4x0 range, and it was used on all of the early machines, including the A3000. The ARM 2 is clocked at 8MHz, which translates to approximately four and a half million instructions per second (0.56 MIPS/MHz).
ARM 3 (v2as)
Launched in 1989, this processor built on the ARM 2 by offering 4K of cache memory and the SWP instruction. The desktop computers based upon it were launched in 1990.
Internally, via the dedicated co-processor interface, CP15 was 'created' to provide processor control and identification.
Several speeds of ARM 3 were produced. The A540 runs a 26MHz version, and the A4 laptop runs a 24MHz version. By far the most common is the 25MHz version used in the A5000, though those with the 'alpha variant' have a 33MHz version.
At 25MHz, with 12MHz memory (a la A5000), you can expect around 14 MIPS (0.56 MIPS/MHz).
It is interesting to note that the ARM3 doesn't 'perform' faster - both the ARM2 and the ARM3 average 0.56 MIPS/MHz. The speed boost comes from the higher clock speed, and the cache.
Oh, and just to correct a common misunderstanding, the A4 is not a squashed down version of the A5000. The A4 actually came first, and some of the design choices were reflected in the later A5000 design.
ARM 250 (v2as)
The 'Electron' of ARM processors, this is basically a second-level revision of the ARM 3 design which removes the cache, and combines the primary chipset (VIDC, IOC, and MEMC) into the one piece of silicon, making the creation of a cheap'n'cheerful RISC OS computer a simple thing indeed. This was clocked at 12MHz (the same as the main memory), and offers approximately 7 MIPS (0.58 MIPS/MHz).
This processor isn't as terrible as it might seem. That the A30x0 range was built with the ARM250 was probably more a cost-cutting exercise than intention. The ARM250 was designed for low power consumption and low cost, both important factors in devices such as portables, PDAs, and organisers - several of which were developed and, sadly, none of which actually made it to a release.
This is not actually a processor. It is included here for historical interest. It seems the machines that would use the ARM250 were ready before the processor, so early releases of the machine contained a 'mezzanine' board which held the ARM 2, IOC, MEMC, and VIDC.
ARM 4 and ARM 5
These processors do not exist.
More and more people began to be interested in the RISC concept, as at the same sort of time common Intel (and clone) processors showed a definite trend towards higher power consumption and greater need for heat dissipation, neither of which are friendly to devices that are supposed to be running off batteries.
The ARM design was seen by several important players as being the epitome of sleek, powerful RISC design.
It was at this time a deal was struck between Acorn, VLSI (long-time manufacturers of the ARM chipset), and Apple. This led to the death of the Acorn RISC Microprocessor, as Advanced RISC Machines Ltd was born. This new company was committed to design and support specifically for the processor, without the hassle and baggage of RISC OS (the main operating system for the processor and the desktop machines). Both of those would be left to Acorn.
In the change from being a part of Acorn to being ARM Ltd in its own right, the whole numbering scheme for the processors was altered.
ARM 610 (v3)
This processor brought with it two important 'firsts'. The first 'first' was full 32 bit addressing, and the second 'first' was the opening for a new generation of ARM based hardware.
Acorn responded by making the RiscPC. In the past, critics were none-too-keen on the idea of slot-in cards for things like processors and memory (as used in the A540), and by this time many people were getting extremely annoyed with the inherent memory limitations of the older hardware: the MEMC can only address 4Mb of memory, and you can add more by daisy-chaining MEMCs - an idea that not only sounds hairy, it is hairy!
The RiscPC brought back the slot-in processor with a vengeance. Future 'better' processors were promised, and a second slot was provided for alien processors such as the 80486 to be plugged in. As for memory, two SIMM slots were provided, and the memory was expandable to 256Mb. This does not sound like much now that modern PCs come with half that as standard. However, you can get a lot of mileage from a RiscPC fitted with a puny 16Mb of RAM.
But, always, we come back to the 32 bit. Because it has been with us and known about ever since the first RiscPC rolled out, but few people noticed, or cared. Now as the new generation of ARM processors drop the 26 bit 'emulation' modes, we RISC OS users are faced with the option of getting ourselves sorted, or dying.
Ironically, the other mainstream operating systems for the RiscPC hardware - namely ARMLinux and netbsd/arm32 - are already fully 32 bit.
Several speeds were produced: 20MHz, 30MHz, and the 33MHz part used in the RiscPC.
The ARM610 processor features an on-board MMU to handle memory, a 4K cache, and it can even switch itself from little-endian operation to big-endian operation. The 33MHz version offers around 28 MIPS (0.84 MIPS/MHz).
ARM 710 (v3)
As an enhancement of the ARM610, the ARM 710 offers an increased cache size (8K rather than 4K), clock frequency increased to 40MHz, an improved write buffer and a larger TLB in the MMU.
Additionally, it supports CMOS/TTL inputs, Fastbus, and 3.3V power but these features are not used in the RiscPC.
Clocked at 40MHz, it offers about 36 MIPS (0.9 MIPS/MHz); combined with the additional clock speed, it runs an appreciable amount faster than the ARM 610.
ARM 7500
The ARM7500 is a RISC-based single-chip computer with memory and I/O control on-chip to minimise external components. The ARM7500 can drive LCD panels/VDUs if required, and it features power management. The video controller can output up to a 120MHz pixel rate, 32-bit sound, and there are four A/D convertors on-chip for connection of joysticks etc.
The processor core is basically an ARM710 with a smaller (4K) cache.
The video core is a VIDC2.
The IO core is based upon the IOMD.
The memory/clock system is very flexible, designed for maximum uses with minimum fuss. Setting up a system based upon the ARM7500 should be fairly simple.
ARM 7500FE
A version of the ARM 7500 with hardware floating point support.
StrongARM (v4)
The StrongARM took the RiscPC from around 40MHz to 200-300MHz and showed a speed boost that was more than the hardware should have been able to support. Still severely bottlenecked by the memory and I/O, the StrongARM made the RiscPC fly. The processor was the first to feature different instruction and data caches, and this caused quite a lot of self-modifying code to fail including, amusingly, Acorn's own runtime compression system. But on the whole, the incompatibilities were not more painful than an OS upgrade (anybody remember the RISC OS 2 to RISC OS 3 upgrade, when all the programs that used SYS "OS_UpdateMEMC",64,64 for a speed boost froze the machine solid?).
In instruction terms, the StrongARM can offer half-word loads and stores, and signed half-word and byte loads and stores. Also provided are instructions for multiplying two 32 bit values (signed or unsigned) and returning a 64 bit result. This is documented in the ARM assembler user guide as only working in 32-bit mode; however, experimentation will show you that they work in 26-bit mode as well. Later documentation confirms this.
The cache has been split into separate instruction and data cache (Harvard architecture), with both of these caches being 16K, and the pipeline is now five stages instead of three.
In terms of performance... at 100MHz, it offers 114MIPS which doubles to 228MIPS at 200MHz (1.14 MIPS/MHz).
In order to squeeze the maximum from a RiscPC, the Kinetic includes fast RAM on the processor card itself, as well as a version of RISC OS that installs itself on the card. Apparently it flies due to removing the memory bottleneck, though this does cause 'issues' with DMA expansion cards.
SA 1100
This is a version of the SA110 designed primarily for portable applications. I mention it here as I am reliably informed that the SA1100 is the processor inside the 'faster' Panasonic satellite digibox. It contains the StrongARM core, MMU, cache, PCMCIA, general I/O controller (including two serial ports), and a colour/greyscale LCD controller. It runs at 133MHz or 200MHz and it consumes less than half a watt of power.
Thumb
The Thumb instruction set is a reworking of the ARM set, with a few things omitted. Thumb instructions are 16 bits (instead of the usual 32 bit). This allows for greater code density in places where memory is restricted. The Thumb set can only address the first eight registers, and there are no conditional execution instructions. Also, the Thumb cannot do a number of things required for low-level processor exceptions, so the Thumb instruction set will always come alongside the full ARM instruction set. Exceptions and the like can be handled in ARM code, with Thumb used for the more regular code.
Other versions
These versions are afforded less coverage due, mainly, to my not owning nor having access to any of these versions.
While my site started as a way to learn to program the ARM under RISC OS, the future is in embedded devices using these new systems, rather than the old 26 bit mode required by RISC OS...
...and so, these processors are something I would like to detail, in time.
M variants
This is an extension of the version three design (ARM 6 and ARM 7) that provides the extended 64 bit multiply instructions.
These instructions became a main part of the instruction set in the ARM version 4 (StrongARM, etc).
T variants
These processors include the Thumb instruction set (and, hence, no 26 bit mode).
E variants
These processors include a number of additional instructions which provide improved performance in typical DSP applications. The 'E' stands for "Enhanced DSP".
The future
The future is here. Newer ARM processors exist, but they are 32 bit devices.
This means, basically, that RISC OS won't run on them until all of RISC OS is modified to be 32 bit safe. As long as BASIC is patched, a reasonable software base will exist. However all C programs will need to be recompiled. All relocatable modules will need to be altered. And pretty much all assembler code will need to be repaired. In cases where source isn't available (ie, anything written by Computer Concepts), it will be a tedious slog.
It is truly one of the situations that could make or break the platform.
I feel, as long as a basic C compiler/linker is made FREELY available, then we should go for it. It need not be a 'good' compiler, as long as it will be a drop-in replacement for Norcroft CC version 4 or 5. Why this? Because RISC OS depends upon enthusiasts to create software, instead of big corporations. And without inexpensive reasonable tools, they might decide it is too much to bother with converting their software, so may decide to leave RISC OS and code for another platform.
I, personally, would happily download a freebie compiler/linker and convert much of my own code. It isn't plain sailing for us - think of all of the library code that needs to be checked. It will be difficult enough to obtain a 32 bit machine to check the code works correctly, never mind all the other pitfalls. Asking us for a grand to support the platform is only going to turn us away in droves. Heck, I'm still using ARM 2 and ARM 3 systems. Some of us smaller coders won't be able to afford such a radical upgrade. And that will be VERY BAD for the platform. Look how many people use the FREE user-created Internet suite in preference to commercial alternatives. Look at all of the support code available on Arcade BBS. Much of that will probably go, yes. But would a platform trying to re-establish itself really want to say goodbye to the rest?
I don't claim my code is wonderful, but if only one person besides myself makes good use of it - then it has been worth it.
The processor (CPU, for Central Processing Unit) is the computer's brain. It allows the processing of numeric data, meaning information entered in binary form, and the execution of instructions stored in memory.
The first microprocessor (Intel 4004) was invented in 1971. It was a 4-bit calculation device with a speed of 108 kHz. Since then, microprocessor power has grown exponentially. So what exactly are these little pieces of silicon that run our computers?
The processor (called CPU, for Central Processing Unit) is an electronic circuit that operates at the speed of an internal clock, thanks to a quartz crystal that, when subjected to an electrical current, sends pulses, called "peaks". The clock speed (also called cycle) corresponds to the number of pulses per second, expressed in Hertz (Hz). Thus, a 200 MHz computer has a clock that sends 200,000,000 pulses per second. Clock frequency is generally a multiple of the system frequency (FSB, Front-Side Bus), meaning a multiple of the motherboard frequency.
With each clock peak, the processor performs an action that corresponds to an instruction or a part thereof. A measure called CPI (Cycles Per Instruction) gives a representation of the average number of clock cycles required for a microprocessor to execute an instruction. A microprocessor’s power can thus be characterized by the number of instructions per second that it is capable of processing. MIPS (millions of instructions per second) is the unit used and corresponds to the processor frequency divided by the CPI.
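As a rough worked example of that relationship (the frequency and CPI values below are invented for illustration), MIPS = frequency / CPI can be expressed in a few lines of C:

    #include <stdio.h>

    int main(void) {
        double frequency_hz = 200e6;   /* a 200 MHz processor, as in the example above */
        double cpi = 2.0;              /* assume an average of 2 clock cycles per instruction */
        double mips = frequency_hz / cpi / 1e6;  /* MIPS = frequency / CPI */
        printf("%.0f MIPS\n", mips);   /* prints: 100 MIPS */
        return 0;
    }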
An instruction is an elementary operation that the processor can accomplish. Instructions are stored in the main memory, waiting to be processed by the processor. An instruction has two fields:
Operation Code | Operand Field
The number of bits in an instruction varies according to the type of data (between 1 and 4 8-bit bytes).
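To make the two-field layout concrete, here is a small C sketch. The 8-bit opcode and 24-bit operand widths are invented for illustration, since real field widths vary by processor:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy 32-bit instruction: 8-bit operation code, 24-bit operand field. */
    #define OPCODE(instr)  ((uint32_t)(instr) >> 24)
    #define OPERAND(instr) ((uint32_t)(instr) & 0x00FFFFFF)

    int main(void) {
        uint32_t instr = ((uint32_t)0x2A << 24) | 0x001234;  /* pack the two fields */
        printf("opcode=0x%02X operand=0x%06X\n",
               (unsigned)OPCODE(instr), (unsigned)OPERAND(instr));
        return 0;  /* prints: opcode=0x2A operand=0x001234 */
    }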
Instructions can be grouped by category; the main ones typically include memory access (transferring data between memory and registers), arithmetic and logic operations, and control instructions (branches and jumps).
When the processor executes instructions, data is temporarily stored in small, local memory locations of 8, 16, 32 or 64 bits called registers. Depending on the type of processor, the overall number of registers can vary from about ten to many hundreds.
The main registers typically include the accumulator register, the status register, the instruction register, and the program counter.
Cache memory (also called buffer memory) is local memory that reduces waiting times for information stored in the RAM (Random Access Memory). In effect, the computer's main memory is slower than that of the processor. There are, however, types of memory that are much faster, but which have a greatly increased cost. The solution is therefore to include this type of local memory close to the processor and to temporarily store the primary data to be processed in it. Recent model computers have several levels of cache memory, commonly called Level 1 (L1), Level 2 (L2) and, on some systems, Level 3 (L3), ordered from closest to the processor to closest to the RAM.
Level 1 caches can be accessed very rapidly. Access waiting time approaches that of internal processor registers.
All these levels of cache reduce the latency time of various memory types when processing or transferring information. While the processor works, the level one cache controller can interface with the level two controller to transfer information without impeding the processor. As well, the level two cache interfaces with the RAM (level three cache) to allow transfers without impeding normal processor operation.
Control signals are electronic signals that orchestrate the various processor units participating in the execution of an instruction. Control signals are sent using an element called a sequencer. For example, the Read / Write signal allows the memory to be told that the processor wants to read or write information.
The processor is made up of a group of interrelated units (or control units). Microprocessor architecture varies considerably from one design to another, but the main elements of a microprocessor typically include a control unit, an execution unit containing the arithmetic logic unit (ALU), registers, cache memory, and a bus management unit.
To process information, the microprocessor has a group of instructions, called the "instruction set", made possible by electronic circuits. More precisely, the instruction set is made with the help of semiconductors, little "circuit switches" that use the transistor effect, discovered in 1947 by John Bardeen, Walter H. Brattain and William Shockley, who received a Nobel Prize in 1956 for it.
A transistor (a contraction of transfer resistor) is an electronic semiconductor component that has three electrodes and is capable of modifying the current passing through it using one of its electrodes (called the control electrode). These are referred to as "active components", in contrast to "passive components", such as resistors or capacitors, which have only two electrodes (two-terminal components).
A MOS (metal oxide semiconductor) transistor is the most common type of transistor used to design integrated circuits. MOS transistors have two negatively charged areas, respectively called the source (which has an almost zero charge) and the drain (which has a 5V charge), separated by a positively charged region called the substrate. The substrate has a control electrode overlaid, called a gate, that allows a charge to be applied to the substrate.
When there is no charge on the control electrode, the positively charged substrate acts as a barrier and prevents electron movement from the source to the drain. However, when a charge is applied to the gate, the positive charges of the substrate are repelled and a negatively charged communication channel is opened between the source and the drain.
The transistor therefore acts as a programmable switch, thanks to the control electrode. When a charge is applied to the control electrode, it acts as a closed switch and, when there is no charge, it acts as an open switch.
Once combined, transistors can make logic circuits, that, when combined, form processors. The first integrated circuit dates back to 1958 and was built by Texas Instruments.
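As a rough illustration of that idea, here is a minimal C sketch that treats each transistor as a boolean switch. All electrical detail (substrate, source and drain voltages) is abstracted away, and the function names are invented:

    #include <stdio.h>
    #include <stdbool.h>

    /* Model a MOS transistor as a programmable switch: it conducts
       only when a charge is applied to its gate (control electrode). */
    static bool transistor(bool gate) { return gate; }

    /* Two switches in series behave as a NAND gate: the output is
       pulled low only when both transistors conduct. */
    static bool nand_gate(bool a, bool b) {
        return !(transistor(a) && transistor(b));
    }

    int main(void) {
        /* NAND is functionally complete: NOT, AND, OR - and thus whole
           processors - can be built from combinations of it. */
        printf("%d %d %d %d\n",
               nand_gate(0,0), nand_gate(0,1), nand_gate(1,0), nand_gate(1,1));
        return 0;  /* prints: 1 1 1 0 */
    }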
MOS transistors are therefore made on slices of silicon (called wafers) obtained after multiple processes. These slices of silicon are cut into rectangular elements to form a "circuit". Circuits are then placed in cases with input-output connectors, and the sum of these parts makes an "integrated circuit". The fineness of the etching, expressed in microns (micrometers, written µm), defines the number of transistors per surface unit. There can be millions of transistors on one single processor.
Moore's Law, penned in 1965 by Gordon E. Moore, cofounder of Intel, predicted that processor performance (by extension, the number of transistors integrated on the silicon) would double every twelve months. This law was revised in 1975, bringing the number of months to 18. Moore's Law still holds today.
Because the rectangular case contains input-output pins that resemble legs, the term "electronic flea" is used in French to refer to integrated circuits.
Each type of processor has its own instruction set. Processors are grouped into families (such as 80x86, ARM, PowerPC and SPARC), according to their unique instruction sets.
This explains why a program produced for a certain type of processor can only work directly on a system with another type of processor if there is instruction translation, called emulation. The term "emulator" is used to refer to the program performing this translation.
An instruction set is the sum of basic operations that a processor can accomplish. A processor’s instruction set is a determining factor in its architecture, even though the same architecture can lead to different implementations by different manufacturers.
The processor works efficiently thanks to a limited number of instructions, hardwired into the electronic circuits. Most operations can be performed using basic functions. Some architectures do, however, include advanced processor functions.
CISC (Complex Instruction Set Computer) architecture means hardwiring the processor with complex instructions that are difficult to create using basic instructions.
CISC is especially popular in 80x86 type processors. This type of architecture has an elevated cost because of the advanced functions printed on the silicon.
Instructions are of variable length and may sometimes require more than one clock cycle. Because CISC-based processors can only process one instruction at a time, the processing time is a function of the size of the instruction.
Processors with RISC (Reduced Instruction Set Computer) technology do not have hardwired, advanced functions.
Programs must therefore be translated into simple instructions which complicates development and/or requires a more powerful processor. Such architecture has a reduced production cost compared to CISC processors. In addition, instructions, simple in nature, are executed in just one clock cycle, which speeds up program execution when compared to CISC processors. Finally, these processors can handle multiple instructions simultaneously by processing them in parallel.
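A back-of-the-envelope comparison shows how this trade-off plays out: CISC runs fewer but slower instructions, RISC more but faster ones. All the counts and cycle figures in this C sketch are invented for illustration:

    #include <stdio.h>

    /* Execution time = instruction count x cycles per instruction / clock rate. */
    int main(void) {
        double clock_hz = 100e6;                      /* same 100 MHz clock for both designs */
        double cisc_us = 50 * 4.0 / clock_hz * 1e6;   /* 50 complex, multi-cycle instructions */
        double risc_us = 120 * 1.0 / clock_hz * 1e6;  /* 120 simple, single-cycle instructions */
        printf("CISC: %.1f us, RISC: %.1f us\n", cisc_us, risc_us);
        return 0;  /* prints: CISC: 2.0 us, RISC: 1.2 us */
    }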
Over time, microprocessor manufacturers (called founders) have developed a certain number of improvements that optimize processor performance.
Parallel processing consists of simultaneously executing instructions from the same program on different processors. This involves dividing a program into multiple processes handled in parallel in order to reduce execution time.
This type of technology, however, requires synchronization and communication between the various processes, like the division of tasks in a business: work is divided into small discrete processes which are then handled by different departments. The operation of an enterprise may be greatly affected when communication between the services does not work correctly.
Pipelining is a technology that improves instruction execution speed by putting the steps into parallel.
To understand the pipeline's mechanism, it is first necessary to understand the execution phases of an instruction. For a processor with a 5-step "classic" pipeline, the execution phases of an instruction are: FETCH (fetch the instruction from cache), DECODE (decode the instruction), EXECUTE (execute the instruction), MEMORY (access memory), and WRITE BACK (write the result into a register).
Instructions are organized into lines in the memory and are loaded one after the other.
Thanks to the pipeline, instruction processing requires no more than the five preceding steps. Because the order of the steps is invariable (FETCH, DECODE, EXECUTE, MEMORY, WRITE BACK), it is possible to create specialized circuits in the processor for each one.
The goal of the pipeline is to perform each step in parallel with the preceding and following steps: fetching one instruction (FETCH) while the previous one is being decoded (DECODE), while the one before that is being executed (EXECUTE), while the one before that is accessing memory (MEMORY), and while the first one in the series is having its result recorded in a register (WRITE BACK).
In general, 1 to 2 clock cycles (rarely more) for each pipeline step or a maximum of 10 clock cycles per instruction should be planned for. For two instructions, a maximum of 12 clock cycles are necessary (10+2=12 instead of 10*2=20) because the preceding instruction was already in the pipeline. Both instructions are therefore being simultaneously processed, but with a delay of 1 or 2 clock cycles. For 3 instructions, 14 clock cycles are required, etc.
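The cycle counts in the paragraph above follow a simple formula: the full pipeline depth for the first instruction, plus one step's worth of cycles for each instruction after it. A small C sketch of that arithmetic, assuming the worst case of 2 cycles per step described above:

    #include <stdio.h>

    /* Worst case sketched in the text: a 5-step pipeline where each step
       takes up to 2 clock cycles, so one instruction needs 10 cycles and
       each following instruction finishes 2 cycles later. */
    static int pipeline_cycles(int instructions) {
        const int steps = 5, cycles_per_step = 2;
        return steps * cycles_per_step + (instructions - 1) * cycles_per_step;
    }

    int main(void) {
        for (int n = 1; n <= 3; n++)
            printf("%d instruction(s): %d cycles\n", n, pipeline_cycles(n));
        return 0;  /* prints 10, 12, 14 - versus 10, 20, 30 without pipelining */
    }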
The principle of a pipeline may be compared to a car assembly line. The car moves from one workstation to another by following the assembly line and is completely finished by the time it leaves the factory. To completely understand the principle, the assembly line must be looked at as a whole, and not vehicle by vehicle. Three hours are required to produce each vehicle, but one is produced every minute!
It must be noted that there are many different types of pipelines, varying from 2 to 40 steps, but the principle remains the same.
Superscaling consists of placing multiple processing units in parallel in order to process multiple instructions per cycle.
HyperThreading (written HT) technology consists of placing two logical processors within a physical processor. Thus, the system recognizes two physical processors and behaves like a multitasking system by running two simultaneous threads, referred to as SMT (Simultaneous Multi-Threading). This "deception" allows processor resources to be better employed by guaranteeing that data is sent to the processor in bulk.
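From the operating system's point of view, the two logical processors simply give it somewhere to schedule two threads at once. A minimal POSIX threads sketch of that situation (the worker function and its labels are invented for illustration):

    #include <stdio.h>
    #include <pthread.h>

    /* With HT, the OS sees two logical processors and can schedule
       two threads of the same program to run at the same time. */
    static void *worker(void *arg) {
        printf("thread %s running\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "one");
        pthread_create(&t2, NULL, worker, "two");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

(Compile with the -pthread flag; on an HT or multi-core system the two threads can genuinely run simultaneously.)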
What Is CPU Overclocking?
While the words CPU and microprocessor are used interchangeably, in the world of personal computers (PC), a microprocessor is actually a silicon chip that contains a CPU. At the heart of all personal computers sits a microprocessor that controls the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles. The three basic characteristics that differentiate microprocessors are the instruction set (the set of instructions the microprocessor can execute), the bandwidth (the number of bits processed in a single instruction), and the clock speed (which determines how many instructions per second the processor can execute).
The higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50MHz is more powerful than a 16-bit microprocessor that runs at 25MHz.
If you think overclocking sounds like an ominous term, you have the right idea. Basically, overclocking means to run a microprocessor faster than the clock speed for which it has been tested and approved. Overclocking is a popular technique for getting a little performance boost from your system without purchasing any additional hardware. Because of the performance boost, overclocking is very popular among hardcore 3D gamers.
Most times, overclocking will result in a performance boost of 10 percent or less. For example, a computer with an Intel Pentium III processor running at 933MHz could be configured to run at speeds equivalent to a Pentium III 1050MHz processor by increasing the bus speed on the motherboard. Overclocking will not always have the exact same results: two identical systems being overclocked will most likely not produce the same results, and one will usually overclock better than the other.
To overclock your CPU you must be quite familiar with hardware, and it is always a procedure conducted at your own risk. When overclocking there are some problems and issues you'll have to deal with, such as heat. An overclocked CPU will have an increased heat output, which means you have to look at additional cooling methods to ensure proper cooling of an overclocked CPU. Standard heat sinks and fans will generally not support an overclocked system. Additionally, you also have to have some understanding of the different types of system memory. Even though your CPU can be overclocked, it doesn't mean your RAM modules will support the higher speeds.
Common CPU Overclocking Methods
The most common methods of overclocking your CPU are to either raise the multiplier or raise the FSB (frontside bus) — while not the only options, they are the most common. To understand overclocking, you have to understand the basics of CPU speeds. The speed of a CPU is measured in megahertz (MHz) or gigahertz (GHz). This represents the number of clock cycles that can be performed per second. The more clock cycles your CPU can do, the faster it processes information.
The formula for processor speed is: frontside bus x multiplier = processor speed.
Example:
(1) Pentium III 450MHz
The CPU runs at 450 million clock cycles per second, that is, at a speed of 450 megahertz. Using our processor speed equation we have: 100MHz (frontside bus) x 4.5 (multiplier) = 450MHz (processor speed)
The frontside bus connects the CPU to the main memory on the motherboard — basically, it's the conduit used by your entire system to communicate with your CPU. One caution with raising the FSB is that it can affect other system components. When you change the multiplier on a CPU, it will change only the CPU speed. If you change the FSB, you are changing the speed at which all components of your system communicate with the CPU.
Using our example above, the multiplier is 4.5. Since valid multipliers end in .0 or .5, you could try increasing the multiplier to 5.0 to obtain a performance boost (which would result in 100MHz x 5.0 = 500MHz). By far the easiest way to overclock a CPU is to raise the multiplier, but this cannot be done on all systems. The multiplier on newer Intel CPUs cannot be adjusted, leaving Intel overclockers with the FSB overclocking method (because of this, AMD is becoming a more popular choice for overclockers). The formula doesn't change for the FSB method. In the example above the FSB was 100MHz. Raising it to 133MHz would change the equation (133MHz x 4.5 = 598.5 MHz).
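The same arithmetic, written as a small C sketch using the Pentium III figures from the example above:

    #include <stdio.h>

    /* processor speed = frontside bus x multiplier */
    static double cpu_mhz(double fsb_mhz, double multiplier) {
        return fsb_mhz * multiplier;
    }

    int main(void) {
        printf("stock:         %.1f MHz\n", cpu_mhz(100.0, 4.5));  /* 450.0 */
        printf("multiplier OC: %.1f MHz\n", cpu_mhz(100.0, 5.0));  /* 500.0 */
        printf("FSB OC:        %.1f MHz\n", cpu_mhz(133.0, 4.5));  /* 598.5 */
        return 0;
    }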
Sometimes overclocking can be that simple -- other times it's not.
Depending on your motherboard, overclocking is done one of three ways: by changing jumper or DIP-switch settings (from 'on' and 'off' or 'closed' and 'open'), by changing some of the Chipset Features settings in your BIOS, or by using a combination of both. In overclocking you will need to know your hardware, plan your overclocking method, and, of course, perform many tests once changes have been made. You may need to adjust your CPU voltage, and you will most likely have to try several settings before obtaining a successful and stable overclock result.
Overclocking Risks (and There Are Many)
Overclocking comes with many risks, such as overheating, so you should become familiar with all the pros and cons before you attempt it. Additionally, overclocking isn't supported by the major chip manufacturers which means overclocking your CPU will void your warranty. Overclocking can also decrease the lifespan of the CPU, cause failure in critical components and may even result in some data corruption. You may also notice an increase in unexplainable crashes and freezes.
You can find many complete step-by-step guides available online that detail the actual process of overclocking. If you've decided to take the plunge and overclock your CPU, we recommend you don't start with your only usable system (try using outdated and cheap hardware to practice with) and be sure to find a knowledgeable source and read some of the overclocking information and Web pages listed below in the links section to get you started in the right direction.
Did You Know...
"Multiplier locking forces the CPU to use a multiplier that is preset by the manufacturer. Intel has been quoted as saying they use multiplier locking to prevent unscrupulous retailers from overclocking processors to higher speeds, and selling overclocked systems to consumers for the same, higher price as the faster retail model."
Key Terms To Understanding Overclocking
CPU
Abbreviation of central processing unit. The CPU is the brains of the computer.
Overclock
To run a microprocessor faster than the speed for which it has been tested and approved.
frontside bus
The bus that connects the CPU to main memory on the motherboard.
More Overclocking Related Terms
clock speed
jumper
chipset
motherboard
bus
clock cycle
COMPUTER MEMORY
The system memory is the place where the computer holds current programs and data that are in use. There are various levels of computer memory, including ROM, RAM, cache, page and graphics, each with specific objectives for system operation. This section focuses on the role of computer memory, and the technology behind it.
Although memory is used in many different forms around modern PC systems, it can be divided into two essential types: RAM and ROM. ROM, or Read Only Memory, is relatively small, but essential to how a computer works. ROM is always found on motherboards, but is increasingly found on graphics cards and some other expansion cards and peripherals. Generally speaking, ROM does not change. It forms the basic instruction set for operating the hardware in the system, and the data within remains intact even when the computer is shut down. It is possible to update ROM, but it's only done rarely, and at need. If ROM is damaged, the computer system simply cannot function.
RAM, or Random Access Memory, is "volatile." This means that it only holds data while power is present. RAM changes constantly as the system operates, providing the storage for all data required by the operating system and software. Because of the demands made by increasingly powerful operating systems and software, system RAM requirements have accelerated dramatically over time. For instance, at the turn of the millennium a typical computer may have had only 128Mb of RAM in total, but in 2007 computers commonly ship with 2Gb of RAM installed, and may include graphics cards with their own additional 512Mb of RAM and more.
Clearly, modern computers have significantly more memory than the first PCs of the early 1980s, and this has had an effect on development of the PC's architecture. The trouble is, storing and retrieving data from a large block of memory is more time-consuming than from a small block. With a large amount of memory, the difference in time between a register access and a memory access is very great, and this has resulted in extra layers of cache in the storage hierarchy.
When accessing memory, a fast processor will demand a great deal from RAM. At worst, the CPU may have to waste clock cycles while it waits for data to be retrieved. Faster memory designs and motherboard buses can help, but since the 1990s "cache memory" has been employed as standard between the main memory and the processor. Not only this, CPU architecture has also evolved to include ever larger internal caches. The organisation of data this way is immensely complex, and the system uses ingenious electronic controls to ensure that the data the processor needs next is already in cache, physically closer to the processor and ready for fast retrieval and manipulation.
Read on for a closer look at the technology behind computer memory, and how developments in RAM and ROM have enabled systems to function with seemingly exponentially increasing power.
The Level 1 cache, or primary cache, is on the CPU and is used for temporary storage of instructions and data organised in blocks of 32 bytes. Primary cache is the fastest form of storage. Because it's built into the chip with a zero wait-state (delay) interface to the processor's execution unit, it is limited in size.
Level 1 cache is implemented using Static RAM (SRAM) and until recently was traditionally 16KB in size. SRAM uses two transistors per bit and can hold data without external assistance, for as long as power is supplied to the circuit. The second transistor controls the output of the first: a circuit known as a "flip-flop" - so-called because it has two stable states which it can flip between. This is contrasted to dynamic RAM (DRAM), which must be refreshed many times per second in order to hold its data contents.
SRAM is manufactured in a way rather similar to how processors are: highly integrated transistor patterns photo-etched into silicon. Each SRAM bit comprises between four and six transistors, which is why SRAM takes up much more space compared to DRAM, which uses only one (plus a capacitor). This, plus the fact that SRAM is also several times the cost of DRAM, explains why it is not used more extensively in PC systems.
Intel's P55 MMX processor, launched at the start of 1997, was noteworthy for the increase in size of its Level 1 cache to 32KB. The AMD K6 and Cyrix M2 chips launched later that year upped the ante further by providing Level 1 caches of 64KB. 64KB has remained the standard L1 cache size, though various multiple-core processors may utilise it differently.
For all L1 cache designs the control logic of the primary cache keeps the most frequently used data and code in the cache and updates external memory only when the CPU hands over control to other bus masters, or during direct memory access by peripherals such as optical drives and sound cards.
Some chipsets, such as the Pentium-based Triton FX (and later), support a "write back" cache rather than a "write through" cache. Write through happens when a processor writes data simultaneously into cache and into main memory (to assure coherency). Write back occurs when the processor writes to the cache and then proceeds to the next instruction. The cache holds the write-back data and writes it into main memory when that data line in cache is to be replaced. Write back offers about 10% higher performance than write through, but cache that has this function is more costly. A third type of write mode, write through with buffer, gives similar performance to write back.
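To make the difference concrete, here is a toy one-line cache in C contrasting the two policies. The structure and function names are invented for illustration; real caches have many lines and more elaborate replacement logic:

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct { int tag; int value; bool dirty; } CacheLine;

    int ram[16];  /* stands in for main memory */

    /* Write through: update the cache AND main memory on every write,
       so memory is always coherent. */
    void write_through(CacheLine *c, int addr, int value) {
        c->tag = addr; c->value = value;
        ram[addr] = value;
    }

    /* Write back: update only the cache; memory is updated lazily,
       when the line is evicted to make room for another address. */
    void write_back(CacheLine *c, int addr, int value) {
        if (c->dirty && c->tag != addr)
            ram[c->tag] = c->value;   /* flush the old line first */
        c->tag = addr; c->value = value; c->dirty = true;
    }

    int main(void) {
        CacheLine c = { -1, 0, false };
        write_back(&c, 3, 42);        /* ram[3] is still stale here... */
        write_back(&c, 7, 99);        /* ...until this eviction flushes it */
        printf("ram[3]=%d\n", ram[3]); /* prints: ram[3]=42 */
        return 0;
    }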