Tuesday, 28 April 2015

Cryptography: An Introduction


Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called codebreaking, although modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. One of the most popular cryptography systems used on the Internet is Pretty Good Privacy, because it is effective and free. Cryptography systems can be broadly classified into symmetric-key systems, which use a single key that both the sender and recipient have, and public-key systems, which use two keys: a public key known to everyone and a private key that only the recipient of messages uses.

Cryptography is the science of writing in secret code and is an ancient art; the first documented use of cryptography in writing dates back to circa 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts argue that cryptography appeared spontaneously sometime after writing was invented, with applications ranging from diplomatic missives to war-time battle plans. It is no surprise, then, that new forms of cryptography came soon after the widespread development of computer communications. In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet. Within the context of any application-to-application communication, there are some specific security requirements, including:

• Authentication: The process of proving one's identity. (The primary forms of host-to-host authentication on the Internet today are name-based or address-based, both of which are notoriously weak.)
• Privacy/confidentiality: Ensuring that no one can read the message except the intended receiver.
• Integrity: Assuring the receiver that the received message has not been altered in any way from the original.
• Non-repudiation: A mechanism to prove that the sender really sent this message.

Cryptography, then, not only protects data from theft or alteration, but can also be used for user authentication. There are, in general, three types of cryptographic schemes typically used to accomplish these goals: secret key (or symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions, each of which is described below. In all cases, the initial unencrypted data is referred to as plaintext. It is encrypted into ciphertext, which will in turn (usually) be decrypted into usable plaintext.

In many of the descriptions below, two communicating parties will be referred to as Alice and Bob; this is the common nomenclature in the crypto field and literature to make it easier to identify the communicating parties. If there is a third or fourth party to the communication, they will be referred to as Carol and Dave. Mallory is a malicious party, Eve is an eavesdropper, and Trent is a trusted third party.

There are several ways of classifying cryptographic algorithms. For the purposes of this article, they will be categorized based on the number of keys that are employed for encryption and decryption, and further defined by their application and use.
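The single-shared-key idea can be illustrated with a toy Python sketch. The XOR cipher and the sample message here are illustrative inventions, not a scheme from the text; real systems use vetted ciphers such as AES.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    The same key both encrypts and decrypts (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"                             # shared by sender and recipient
ciphertext = xor_cipher(plaintext, key)     # unreadable without the key
recovered = xor_cipher(ciphertext, key)     # applying the same key decrypts

assert recovered == plaintext
assert ciphertext != plaintext
```

Note that with only one key, both parties must somehow share it securely beforehand; that key-distribution problem is exactly what public-key systems address.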
The three types of algorithms that will be discussed are (Figure 1):

• Secret Key Cryptography (SKC): Uses a single key for both encryption and decryption
• Public Key Cryptography (PKC): Uses one key for encryption and another for decryption
• Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information

So, why are there so many different types of cryptographic schemes? Why can't we do everything we need with just one? The answer is that each scheme is optimized for some specific application(s). Hash functions, for example, are well-suited for ensuring data integrity because any change made to the contents of a message will result in the receiver calculating a different hash value than the one placed in the transmission by the sender. Since it is highly unlikely that two different messages will yield the same hash value, data integrity is ensured to a high degree of confidence.

Secret key cryptography, on the other hand, is ideally suited to encrypting messages, thus providing privacy and confidentiality. The sender can generate a session key on a per-message basis to encrypt the message; the receiver, of course, needs the same session key to decrypt the message.

Key exchange, of course, is a key application of public-key cryptography (no pun intended). Asymmetric schemes can also be used for non-repudiation and user authentication; if the receiver can obtain the session key encrypted with the sender's private key, then only this sender could have sent the message. Public-key cryptography could, theoretically, also be used to encrypt messages, although this is rarely done because secret-key cryptography operates about 1000 times faster than public-key cryptography.
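The data-integrity role of hash functions described above can be sketched with Python's standard hashlib. SHA-256 is chosen here as a modern example (the text does not name a specific hash), and the messages are hypothetical.

```python
import hashlib

def digest(message: bytes) -> str:
    """Return the SHA-256 hash of a message as a hex string."""
    return hashlib.sha256(message).hexdigest()

# The sender computes a hash and transmits it alongside the message.
message = b"Transfer 100 dollars to Bob"
sent_hash = digest(message)

# The receiver recomputes the hash; any change to the contents yields
# a different value, so tampering is detected.
tampered = b"Transfer 900 dollars to Bob"
assert digest(message) == sent_hash      # unmodified message verifies
assert digest(tampered) != sent_hash     # altered message is detected
```

This is exactly the "different hash value than the one placed in the transmission" check: the hash travels with the message, and the receiver's recomputation either matches or exposes the alteration.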

Thursday, 16 April 2015

Smart metering Technology for smart housing


A smart meter is usually an electronic device that records consumption of electric energy in intervals of an hour or less and communicates that information at least daily back to the utility for monitoring and billing.[7] Smart meters enable two-way communication between the meter and the central system. Unlike home energy monitors, smart meters can gather data for remote reporting. Such an advanced metering infrastructure (AMI) differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter.

Smart meters are electronic measurement devices used by utilities to communicate information for billing customers and operating their electric systems. For over fifteen years, electronic meters have been used effectively by utilities in delivering accurate billing data for at least a portion of their customer base. Initially, this technology was applied to commercial and industrial customers due to the need for more sophisticated rates and more granular billing data requirements. The use of electronic meters came into service for the largest customers of the utility and over time gradually expanded to all customer classes. This migration was made possible by the decreasing cost of the technology and advanced billing requirements for all customer classes. The combination of electronic meters with two-way communications technology for information, monitoring, and control is commonly referred to as Advanced Metering Infrastructure (AMI). Previous systems, which utilized one-way communications to collect meter data, were referred to as AMR (Automated Meter Reading) systems. AMI has developed over time, from its roots as a meter-reading substitute (AMR) to today's two-way communication and data system. Not until the Smart Grid initiatives were established were these meters and systems referred to as "Smart Meters and Smart Meter Systems".
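As a rough illustration of what hourly interval data enables, here is a hypothetical Python sketch that totals a day of smart meter reads and prices them with a time-of-use tariff. All readings, rates, and the peak window are invented for illustration; real tariffs vary by utility.

```python
# Hypothetical hourly interval reads (kWh) reported by a smart meter in one day.
hourly_kwh = [0.4, 0.3, 0.3, 0.3, 0.4, 0.6, 0.9, 1.2, 1.0, 0.8, 0.7, 0.7,
              0.8, 0.7, 0.7, 0.9, 1.1, 1.4, 1.6, 1.5, 1.2, 0.9, 0.6, 0.5]

def daily_bill(reads, peak_rate=0.20, off_peak_rate=0.10, peak_hours=range(17, 21)):
    """Price each hourly read with a two-tier time-of-use tariff.
    Rates (per kWh) and the 5pm-9pm peak window are illustrative only."""
    return sum(kwh * (peak_rate if hour in peak_hours else off_peak_rate)
               for hour, kwh in enumerate(reads))

total_kwh = sum(hourly_kwh)   # what a monthly-read meter would have reported
cost = daily_bill(hourly_kwh) # what interval data makes possible
```

The point is the contrast: a traditional monthly-read meter yields only `total_kwh`, while interval data lets the utility offer the "improved and increased rate options" discussed below.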
Hence, the present state of these technologies should more appropriately be referred to as "an evolution, not a revolution", given the development and use of Smart Meter technology and communications over the last fifteen years. The combined technologies are also required to meet national standards for accuracy and operability essential in the industry. The benefits of Smart Meter systems include:

Utility Customers
• Better access and data to manage energy use
• More accurate and timely billing
• Improved and increased rate options
• Improved outage restoration
• Power quality data

Customer Service & Field Operations
• Reduced cost of meter reading
• Reduced trips for off-cycle reads
• Eliminates handheld meter reading equipment
• Reduced call center transactions
• Reduced collections and connects/disconnects

Revenue Cycle Services (Billing, Accounting, Revenue Protection)
• Reduced back office rebilling
• Early detection of meter tampering and theft
• Reduced estimated billing and billing errors

Transmission and Distribution
• Improved transformer load management
• Improved capacitor bank switching
• Data for improved efficiency, reliability of service, losses, and loading
• Improved data for efficient grid system design
• Power quality data for the service areas

Marketing & Load Forecasting
• Reduced costs for collecting load research data

The Role of Utility Metering Operations

Metering Services operations in utilities have traditionally been tasked with providing customer billing measurement and have been responsible for accuracy, precision, and robust operation of the meters and support devices. Using a national system of standards, formal quality processes, utility best practices, and a dedicated sense of purpose, utility metering professionals have strived to produce the best system for billing utility customers in the global electric industry.
In joint partnership with meter and communications manufacturers, they have driven the development of electronic metering and metering communications to deliver the top-notch Smart Metering Systems available in the marketplace today. For successful Smart Meter projects, Metering Services operations are an integral part of the project planning, deployment and maintenance of the systems. Their contributions in these areas of the process are required and fundamental to the project's success. The most important contributions include:

• Development of the business and technical requirements of the systems
• Participation in the technology selection team
• Certification of the system meters and devices
• Acceptance of the incoming production products
• Development of safe and appropriate installation plans and processes
• Development of a maintenance model to support the new systems
• Development of training programs
• Design and implementation of an appropriate In-Service Testing & Compliance process

With the significant increase of new measurement technologies and the integration of communication systems into basic meters, metering operations will be challenged both technically and operationally in the near and long term. The emphasis on metering operations in utilities will increase as more sophisticated billing and measurement systems are developed, designed and deployed.

The Smart Grid and Smart Meter Systems

Smart Meter Systems are an integral part of the Smart Grid infrastructure in data collection and communications. The Smart Grid is essentially the modernization of the transmission and distribution aspects of the electrical grid. Functionally, it is an automated electric power system that monitors and controls grid activities, ensuring the efficient and reliable two-way flow of electricity and information between power plants and consumers, and all points in between.
A Smart Grid monitors electricity delivery and tracks power consumption with smart meters that transmit energy usage information to utilities via communication networks. Smart meters also allow customers to track their own energy use.

Basic Types of Smart Meter Systems

There are two basic categories of Smart Meter System technologies as defined by their LAN: Radio Frequency (RF) and Power Line Carrier (PLC). Each of these technologies has its own advantages and disadvantages in application. The utility selects the best technology to meet its demographic and business needs. Factors that impact the selection of the technology include evaluation of existing infrastructure; impact on legacy equipment; functionality; technical requirements; and the economic impact to the utility's customers. The selection of the technology requires a thorough evaluation and analysis of existing needs and future requirements into a single comprehensive business case.

Radio Frequency (RF)

Smart Meter measurements and other data are transmitted by wireless radio from the meter to a collection point. The data is then delivered by various methods to the utility data systems for processing at a central location. The utility billing, outage management, and other systems use the data for operational purposes. RF technologies are usually of two different types:

Mesh Technology: The smart meters talk to each other (hop) to form a LAN cloud to a collector. The collector transmits the data using various WAN methods to the utility central location.
– Mesh RF technologies' advantages include acceptable latency and large bandwidth; they typically operate at 915 MHz frequencies.
– Mesh technologies' disadvantages include terrain and distance challenges for rural areas, proprietary communications, and multiple collection points.

Point to Point Technology: The smart meters talk directly to a collector, usually a tower.
The tower collector transmits the data using various methods to the utility central location for processing.
– Point to Point RF technologies' advantages include little or no latency, direct communication with each endpoint, large bandwidth for better throughput, licensed spectrum in some cases, and the ability to cover longer distances.
– The disadvantages of point to point RF networks are licensing (not for 900 MHz), terrain that may prove challenging in rural areas (line of sight), proprietary communications used for some technologies, and less interface with DA devices.

Power Line Carrier (PLC)

Smart Meter measurements and other data can be transmitted across the utility power lines from the meter to a collection point, usually in the distribution substation feeding the meter. Some solutions have the collection point located on the secondary side of a distribution transformer. The data is then delivered to the utility data systems for processing at a central location. The utility billing, outage management, and other systems use the data for operational purposes.
– PLC technology advantages include leveraging the existing utility infrastructure of poles and wires, improved cost effectiveness for rural lines, more effectiveness in challenging terrain, and the capability to work over long distances.
– PLC disadvantages include longer data transmit time (more latency), less bandwidth and throughput, limited interface with Distribution Automation (DA) devices, and higher cost in urban and suburban locations.

There are other Smart Meter Systems in use that differ from those described above. However, these are generally a hybrid or combination design, a slight variation of the basic types, or niche products. The major Smart Meter System technologies in use today are of one of these basic types.

Monday, 13 April 2015

Electric Braking: Merits and applications


Induction motors are used in many applications. Speed control of induction motors is quite difficult, which is why their use was once restricted and DC motors, whose speed regulation was possible, had to be used instead. But when induction motor drives were invented and implemented, they were given preference because of their many advantages over DC motors. Whenever motors are controlled, braking is one of the most important considerations, and this holds for induction motors as well. Induction motor braking can be done by different methods:

i. Regenerative braking of induction motor
ii. Plugging braking of induction motor
iii. Dynamic braking of induction motor, which is further categorized as
    a) AC dynamic braking
    b) Self-excited braking using capacitors
    c) DC dynamic braking
    d) Zero sequence braking

We know the input power of an induction motor is given as

Pin = 3 · V · Is · cos φs

Here, φs is the phase angle between the stator phase voltage V and the stator phase current Is. For motoring operation φs < 90°, and for braking operation φs > 90°. When the speed of the motor is more than the synchronous speed, the relative speed between the motor conductors and the air gap rotating field reverses; as a result the phase angle becomes greater than 90°, the power flow reverses, and regenerative braking takes place. The nature of the speed-torque curves is shown in the figure beside. If the source frequency is fixed, then regenerative braking of an induction motor can only take place if the speed of the motor is greater than the synchronous speed; with a variable frequency source, however, regenerative braking can occur at speeds lower than synchronous speed. The main advantage of this kind of braking is that the generated power is usefully employed, and the main disadvantage is that with fixed frequency sources, braking cannot happen below synchronous speed.
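The sign convention above (φs < 90° for motoring, φs > 90° for braking) can be checked numerically with a short Python sketch; the voltage, current and angle values are illustrative, not taken from the text.

```python
import math

def stator_input_power(v_phase, i_stator, phi_deg):
    """Three-phase input power P = 3 * V * Is * cos(phi_s).
    The sign of the result shows the direction of power flow."""
    return 3 * v_phase * i_stator * math.cos(math.radians(phi_deg))

# Motoring: phi_s < 90 deg, cos(phi_s) > 0, power flows INTO the machine.
p_motor = stator_input_power(230, 10, 30)

# Regenerative braking: phi_s > 90 deg, cos(phi_s) < 0,
# power flows back from the machine to the supply.
p_brake = stator_input_power(230, 10, 150)
```

The crossover at exactly 90° corresponds to zero real power transfer, which is consistent with the reversal of the relative speed between conductors and the air gap field described above.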
When the rotor of an induction motor turns slower than the speed set by the applied frequency, the motor is transforming electrical energy into mechanical energy at the motor shaft. This process is referred to as 'motoring'. When the rotor turns faster than the synchronous speed set by a drive output, the motor is transforming mechanical energy from the motor shaft into electrical energy. It may be a ramp to stop, a reduction in commanded speed, or an overhauling load that causes the shaft speed to be greater than the synchronous speed. In any case, this condition is referred to as 'regeneration'. Essentially, mechanical energy is converted to electrical energy.

The case is much the same for a DC drive and motor. The increase in DC voltage for the DC drive occurs at the armature connection. Some DC drives have not only a forward but also a reverse bridge. The reverse bridge allows the DC energy from the armature to be transferred to the utility line. If the DC drive has only the forward bridge, a shunt regulator can be used in parallel with the armature to dissipate this energy as heat. For an AC drive and motor in a regenerative condition, the AC power from the motor flows backward through the inverter bridge diodes shown in figure 1 below. On most AC drives, utility power is first converted into DC by a diode or SCR rectifier bridge. These bridges are very cost effective, but only handle power in the 'motoring' direction.

Dynamic brake or chopper: what's the difference?

From an electrical standpoint they both do the same thing. The major difference is in the construction. Dynamic brakes have the controller, switching device and resistor housed in one self-contained unit; a dynamic brake is rated in horsepower and has only a 20% duty cycle rating. A chopper contains only the regulator circuit and switching device and is rated in amps. The resistors are treated as a separate component. This gives the user several advantages.
First, the resistors can be accurately sized for a given application. Also, the chopper module can be mounted in an enclosure while the resistor, with its large amount of heat energy to dissipate, can be remotely mounted up to 100 feet away. A close look at the application is needed before a decision to use a brake or chopper is made. Some rotational and linear loads with a low regenerative duty cycle can be handled with a brake, while overhauling loads, and loads with a duty cycle greater than 20%, are more suited to a chopper. In general, the chopper is a more 'heavy duty' solution. Many dynamic brake failures are caused by exceeding the 20% duty cycle rating, a condition that may be tough to prove after the fact. Another avoidable cause of failure for dynamic brakes and choppers that warrants mention is misconnection. These devices need to be connected at the capacitor bank nodes of the DC bus. Many drives also provide a DC bus connection point at the input bridge rectifier nodes.

Applications

Electrical drive systems are being used more and more on ships, oil rigs, crane barges and offshore vessels of all types, for every type of powered application: main propellers and bow thrusters, driving winches and windlasses, cranes, lifts, conveyors and jacks, and cable laying and tensioning.

Advantages of electric braking

An important benefit of using an electric drive is that reliable systems of regenerative and dynamic braking resistors are available to complement or replace traditional mechanical braking systems. The advantages of electric braking include control, reliability, mechanical simplicity, weight saving and, in some cases, the opportunity to make use of the regenerated braking energy.
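Returning to the brake-versus-chopper choice discussed earlier, the 20% duty-cycle rating suggests a simple selection rule of thumb. The sketch below is an illustrative Python rendering of that rule only, not a sizing procedure from the text; a real selection would also weigh load type, peak power and resistor ratings.

```python
def braking_choice(on_time_s, cycle_time_s, duty_limit=0.20):
    """Suggest a dynamic brake or a chopper from the regenerative duty cycle.
    The 20% limit is the dynamic-brake duty-cycle rating quoted in the text."""
    duty = on_time_s / cycle_time_s
    return "dynamic brake" if duty <= duty_limit else "chopper + external resistor"

# A load that regenerates 10 s out of every 60 s (~17% duty) fits a brake;
# one that regenerates 15 s out of 60 s (25% duty) exceeds the rating.
assert braking_choice(10, 60) == "dynamic brake"
assert braking_choice(15, 60) == "chopper + external resistor"
```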
Range

Cressall Resistors' wide range of resistor technologies and long experience in this field mean that we have suitable brake resistor designs for all of the above applications, with braking powers from a few kW up to many MW (needed, for example, for some crane and main propulsion brakes) and cooling methods which include liquid, forced air and natural convection. We have standard products of all types.

Design considerations

The majority of our designs make use of suitably rated Incoloy-sheathed mineral insulated elements. These are less vulnerable to physical damage, prevent accidental contact with live, possibly high, voltages, and are thus considered much safer to use in these environments.

Sunday, 12 April 2015

AC voltage controllers performance and applications


A voltage controller, also called an AC voltage controller or AC regulator, is an electronic module based on thyristors, TRIACs, SCRs or IGBTs, which converts a fixed-voltage, fixed-frequency alternating current (AC) electrical input supply into a variable voltage output delivered to a resistive load. This varied voltage output is used for dimming street lights, varying heating temperatures in homes or industry, speed control of fans and winding machines and many other applications, in a similar fashion to an autotransformer.

AC voltage controllers (ac line voltage controllers) are employed to vary the RMS value of the alternating voltage applied to a load circuit by introducing thyristors between the load and a constant-voltage ac source. The RMS value of the alternating voltage applied to the load circuit is controlled by controlling the triggering angle of the thyristors in the ac voltage controller circuits. In brief, an ac voltage controller is a type of thyristor power converter which is used to convert a fixed-voltage, fixed-frequency ac input supply into a variable-voltage ac output. The RMS value of the ac output voltage and the ac power flow to the load are controlled by varying (adjusting) the trigger angle 'α'.

The ac voltage controllers are classified into two types based on the type of input ac supply applied to the circuit:
• Single phase AC controllers
• Three phase AC controllers

Single phase ac controllers operate with a single phase ac supply voltage of 230 V RMS at 50 Hz in our country. Three phase ac controllers operate with a 3 phase ac supply of 400 V RMS at 50 Hz supply frequency. Each type of controller may be subdivided into:
• Uni-directional or half wave ac controller
• Bi-directional or full wave ac controller

In brief, the different types of ac voltage controllers are:
• Single phase half wave ac voltage controller (uni-directional controller)
• Single phase full wave ac voltage controller (bi-directional controller)
• Three phase half wave ac voltage controller (uni-directional controller)
• Three phase full wave ac voltage controller (bi-directional controller)

APPLICATIONS OF AC VOLTAGE CONTROLLERS
• Lighting / illumination control in ac power circuits
• Induction heating
• Industrial heating & domestic heating
• Transformer tap changing (on-load transformer tap changing)
• Speed control of induction motors (single phase and polyphase ac induction motor control)
• AC magnet controls

AC VOLTAGE CONTROL TECHNIQUES

There are two different types of thyristor control used in practice to control the ac power flow:
• Phase control
• On-off control

These are the two ac output voltage control techniques. In the on-off control technique, thyristors are used as switches to connect the load circuit to the ac supply (source) for a few cycles of the input ac supply and then to disconnect it for a few input cycles. The thyristors thus act as a high speed contactor (or high speed ac switch).

PHASE CONTROL TECHNIQUE

In phase control, the thyristors are used as switches to connect the load circuit to the input ac supply for a part of every input cycle. That is, the ac supply voltage is chopped using thyristors during a part of each input cycle. The thyristor switch is turned on for a part of every half cycle, so that the input supply voltage appears across the load, and then turned off during the remaining part of the input half cycle to disconnect the ac supply from the load. By controlling the phase angle or the trigger angle 'α' (delay angle), the output RMS voltage across the load can be controlled. The trigger delay angle 'α' is defined as the phase angle (the value of ωt) at which the thyristor turns on and the load current begins to flow. Thyristor ac voltage controllers use ac line commutation or ac phase commutation. Thyristors in ac voltage controllers are line commutated (phase commutated) since the input supply is ac.
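For the single phase full wave controller with a resistive load, the standard textbook result for the output RMS voltage as a function of the trigger angle α is Vo = Vin · sqrt((π − α + sin 2α / 2) / π). This formula is not derived in the text above, so treat the sketch below as a supplementary illustration of how α controls the load voltage.

```python
import math

def vrms_phase_control(v_in_rms, alpha_deg):
    """RMS output of a single-phase full-wave ac controller, resistive load:
    Vo = Vin * sqrt((pi - a + sin(2a)/2) / pi), with a = trigger angle in rad.
    max() guards against tiny negative rounding near a = 180 deg."""
    a = math.radians(alpha_deg)
    return v_in_rms * math.sqrt(max(0.0, (math.pi - a + math.sin(2 * a) / 2) / math.pi))

v_full = vrms_phase_control(230, 0)     # alpha = 0: full supply voltage
v_half = vrms_phase_control(230, 90)    # alpha = 90: Vin / sqrt(2)
v_zero = vrms_phase_control(230, 180)   # alpha = 180: output blocked
```

The two endpoints behave as expected: α = 0 conducts the whole cycle and passes the full 230 V, while α = 180° never triggers the thyristors and the output falls to zero.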
When the input ac voltage reverses and becomes negative during the negative half cycle, the current flowing through the conducting thyristor decreases and falls to zero. Thus the ON thyristor naturally turns off when the device current falls to zero. Phase control thyristors, relatively inexpensive converter-grade devices that are slower than fast-switching inverter-grade thyristors, are normally used. For applications up to 400 Hz, if triacs are available to meet the voltage and current ratings of a particular application, triacs are more commonly used. Due to ac line commutation or natural commutation, there is no need for extra commutation circuitry or components, and the circuits for ac voltage controllers are very simple. Due to the nature of the output waveforms, the analysis and derivation of expressions for performance parameters are not simple, especially for phase controlled ac voltage controllers with an RL load. However, most practical loads are of the RL type, and hence the RL load should be considered in the analysis and design of ac voltage controller circuits.

PRINCIPLE OF ON-OFF CONTROL TECHNIQUE (INTEGRAL CYCLE CONTROL)

The basic principle of the on-off control technique is explained with reference to the single phase full wave ac voltage controller circuit shown below. The thyristor switches are turned on by applying appropriate gate trigger pulses to connect the input ac supply to the load for 'n' input cycles; they are then turned off by blocking the gate trigger pulses for 'm' input cycles. The ac controller ON time usually consists of an integral number of input cycles. Referring to the waveforms of the on-off control technique in the diagram above, the thyristors are turned ON for two input cycles.
The thyristors are then turned OFF for one input cycle. Thyristors are turned ON precisely at the zero voltage crossings of the input supply. One thyristor is turned on at the beginning of each positive half cycle by applying gate trigger pulses as shown, during the ON time; the load current then flows in the positive direction, which is the downward direction as shown in the circuit diagram. The other thyristor is turned on at the beginning of each negative half cycle by applying a gating signal to its gate, and the load current then flows in the reverse direction, which is the upward direction. Thus we obtain a bi-directional load current flow (alternating load current flow) in an ac voltage controller circuit by triggering the thyristors alternately. This type of control is used in applications which have high mechanical inertia and a high thermal time constant (industrial heating and speed control of ac motors). Due to the zero voltage and zero current switching of thyristors, the harmonics generated by switching actions are reduced.
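For integral cycle control with the supply connected for n cycles and disconnected for m cycles, the standard result Vo = Vin · sqrt(n / (n + m)) follows from averaging the squared voltage over the whole n + m period; it is not derived in the text above, so the sketch is a supplementary illustration.

```python
import math

def vrms_on_off(v_in_rms, n_on, m_off):
    """RMS output under integral (whole-cycle) on-off control:
    Vo = Vin * sqrt(n / (n + m)), where the supply is connected for
    n input cycles and disconnected for m input cycles."""
    k = n_on / (n_on + m_off)          # duty cycle of whole cycles
    return v_in_rms * math.sqrt(k)

# The n = 2 on, m = 1 off pattern described in the text:
v_out = vrms_on_off(230, 2, 1)         # 230 * sqrt(2/3)
```

Because switching happens only at zero crossings, the delivered power to a resistive load scales linearly with the duty cycle k = n / (n + m), which is why this technique suits slow thermal loads such as industrial heating.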

Tuesday, 7 April 2015

Fuel cells: Basics, Types, applications and technological bottlenecks


Fuel cells have been around longer than most batteries: the principle of the fuel cell was discovered in 1839 by Sir William Grove. They generate electricity from the reaction of hydrogen with oxygen to form water, in a process which is the reverse of electrolysis. The fuel cell relies on a basic oxidation/reduction reaction, as with a battery, but the reaction takes place on the fuel rather than the electrodes. The fuel cell produces electricity as long as the cell receives a supply of fuel and can dispose of the oxidized old fuel. In a fuel cell, the anode is usually bathed in the fuel; the cathode collects and makes available the oxidant (often atmospheric oxygen). An ion-conducting membrane separates the two, allowing the reaction to take place without affecting the electrodes.

Six major fuel cell technologies are currently being pursued for different applications, each with its own characteristics. Some operate at high temperatures, some use exotic electrode materials or catalysts, and all are very complex:
• Alkali
• Phosphoric Acid
• Solid Oxide
• Molten Carbonate
• Proton Exchange Membrane (PEM)
• Direct Methanol (DMFC)

Fuel cells have been proposed for a wide range of applications, from powering laptop computers, through automotive traction, to high power load levelling. The most active developments are currently in the automotive sector, where the favoured technology is PEM. This promises a high conversion efficiency of over 60% and an energy density of 120 W/kg. DMFCs do not use hydrogen fuel, with its associated supply problems, but the more convenient liquid methanol. They are less efficient but offer compact and convenient designs suitable for future consumer electronics applications. The potential power generated by a fuel cell stack depends on the number and size of the individual fuel cells that comprise the stack and the surface area of the PEM.
Advantages

Fuel cell power is usually proposed as the green alternative to the internal combustion engine, fuelled only by hydrogen and leaving no pollutants other than water.
• Simple fuel requirements: needing hydrogen fuel only, taking their oxygen from the air.
• No recharging is necessary, so no time is lost through recharging (a fuel cell acts like a perpetual primary cell).
• So long as fuel is provided, the cells can provide constant power in remote locations.
• Practical fuel cells already have efficiencies as high as 60%.
• Fuel cells deliver maximum efficiency at low power levels (the reverse of the internal combustion engine).
• For transport applications, fuel cell vehicles offer higher "well to wheel" (WTW) efficiencies than conventional internal combustion engines.

Shortcomings

The environmentally friendly credentials of fuel cells overlook the processes needed to generate and distribute the hydrogen fuel. Fuel cells merely shift the pollution from the vehicle to some other location. Today, 98% of hydrogen is produced from fossil fuel sources. According to researchers Andrew and Jim Oswald from Warwick University, to replace petrol and diesel used for road transport in Britain with hydrogen produced by the electrolysis of water would require the building of 100 nuclear power stations or 100,000 wind turbines. If the wind turbines were sited off-shore, this would mean an approximately 10-kilometre-deep strip of wind turbines encircling the entire coastline of the British Isles. If sited on-shore, an area larger than the whole of Wales would have to be given over to wind turbines.

A major factor inhibiting market take-off is the lack of available infrastructure to provide the hydrogen fuel. Hydrogen fuel can be supplied in pure form in cylinders, or the on-board cylinders can be refilled at special refuelling stations. Despite safety precautions, there is still a perception among the general public that hydrogen fuel is unsafe.
Alternatively, hydrogen can be generated on board, as required, from hydrocarbon fuels such as ethanol, methanol, petrol or compressed natural gas in a process known as reforming. This is not an ideal solution. Reforming generates carbon dioxide as a waste product, losing some of the green benefits of fuel cells. It is also expensive, and it is like carrying your own chemical plant with you; although it does simplify the fuel supply infrastructure problem, the fuel could just as easily power an internal combustion engine directly. Even ignoring these problems, there are still many shortcomings in using fuel cells for prime motive power:
• The low cell voltage of 0.6 - 0.7 Volts means that the system needs a lot of cells to obtain a normal operating voltage of 200 - 300 Volts to power the drive train motor.
• Power is generated as required, but the process is not reversible within the fuel cell, and so, like a primary cell, it cannot accept regenerative braking loads. Fuel cells generate electrical energy but they cannot store electrical energy.
• Fuel cells have a low dynamic range and slow transient response, which causes an unacceptable lag in responding to calls for power by the driver. A power boost from a battery or from supercapacitors is therefore needed to achieve the desired system performance.
• Most designs need to work at high temperatures in order to achieve reasonable operating efficiencies. To achieve the same efficiencies at lower temperatures requires large quantities of expensive catalysts such as platinum.
• Low temperature freeze-up of the electrolyte.
• Electrodes which are prone to contamination.
• Due to the need to use exotic materials and complex system designs, the systems are still very expensive.

Theoretically a fuel cell should be all that is needed to power an electric vehicle; however, batteries are still needed to support fuel cell systems.
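The series cell count implied by the 0.6 - 0.7 V cell voltage and the 200 - 300 V drive-train bus can be worked out directly; the 0.65 V figure below is an assumed mid-range operating value, not one stated in the text.

```python
import math

def cells_in_series(bus_voltage, cell_voltage=0.65):
    """Minimum cells in series to reach a target DC bus voltage.
    0.65 V is an assumed midpoint of the 0.6-0.7 V range quoted above."""
    return math.ceil(bus_voltage / cell_voltage)

# The 200-300 V bus quoted in the text needs on the order of
# three to five hundred cells stacked in series:
n_low = cells_in_series(200)
n_high = cells_in_series(300)
```

The result (roughly 300 to 460 cells in series) makes concrete why "a lot of cells" is a genuine cost and packaging burden rather than a minor inconvenience.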
Battery Support

Batteries are needed in fuel cell vehicle applications for the following functions:

• During start-up, to heat the fuel cell stack and the electrolyte to the optimum working temperature
• To pump the working fluids (air, hydrogen, water) through the stack
• To power the reformer if hydrogen is generated on board
• To provide short-term power boosts to compensate for the fuel cell's slow response to sudden power demands (acceleration)
• To capture regenerative braking energy
• To power the vehicle's low voltage electrical systems

Applications

For automotive applications, fuel cells are only suited to hybrid arrangements, providing the base power load with the demand peaks and troughs, and regenerative braking, accommodated by batteries or booster capacitors. The fuel cell can therefore be dimensioned to work at its optimum working point, providing the average power rather than the peak power requirement, permitting significant cost savings. Fuel cells have been used successfully in aerospace applications, and simple low power demonstrator kits are available for educational purposes. Perhaps the best applications for fuel cells will be high power load levelling. Prototypes of direct methanol cells are currently being trialled for mobile phone and laptop computer applications.

Costs

For a true comparison of alternative system efficiencies, costs and benefits, each alternative should be based on the same fuel source. Using oil as the original source of the energy, the "well to wheel" cost provides a rational comparison of the energy utilisation efficiency of different systems. But oil is not the only source of energy. Electrical energy used to power electric vehicles, or to produce the hydrogen to feed the fuel cells, can be derived from a wide variety of sources. These may include power stations fuelled by oil, coal, hydro or nuclear power, or renewable resources such as wind, wave and solar power.
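The hybrid dimensioning rule above (fuel cell sized for the average load, battery or supercapacitor sized for the peak-minus-average difference) can be sketched numerically. The drive-cycle power figures below are invented for illustration:

```python
# Sketch: dimensioning a hybrid fuel cell drive train.
# The power-demand samples (kW) are an assumed drive cycle,
# including two acceleration peaks, not real vehicle data.
demand_kw = [12, 18, 15, 60, 15, 10, 45, 14, 16, 15]

average_kw = sum(demand_kw) / len(demand_kw)
peak_kw = max(demand_kw)

# Fuel cell covers the average load at its optimum working point;
# the battery/supercapacitor bank covers the rest up to the peak.
fuel_cell_kw = average_kw
booster_kw = peak_kw - fuel_cell_kw

print(f"fuel cell: {fuel_cell_kw:.0f} kW, booster: {booster_kw:.0f} kW")
```

With these numbers the fuel cell stack only needs to supply about a third of the peak rating, which is where the cost saving claimed in the text comes from.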
There can thus be a huge variation in costs and environmental impacts depending on the methods used to supply the necessary fuel. Although many working systems for different applications have been built, practical, cost-effective products are still perhaps ten years away.

SOFC for stationary power generation

Solid oxide fuel cells (SOFC) also operate at high temperatures, from 700°C to 1,000°C, depending on the design and application. There is ongoing research worldwide to establish operating conditions and material sets that could enable both ease of manufacturing and relatively low-cost mass production. For example, SOFCs with high power densities operating at lower temperatures (700°C instead of 1,000°C) have been developed and operated. The lower operating temperature may enable lower costs. No solid oxide fuel cell system has been commercialized as of this writing, but there are a number of companies working to establish systems, including Acumentrics, General Electric, IonAmerica, Rolls-Royce, and Siemens Power Corp. Alone, high-temperature fuel cells show tremendous promise. Through hybridization, they may achieve even greater efficiency. Hybridization combines a high-temperature fuel cell with a traditional heat engine such as a gas turbine; the resulting system performs at far higher efficiency than either system alone. Combined with an inherently low level of pollutant emission, hybrid configurations are likely to make up a major percentage of the next-generation advanced power generation systems for a wide range of applications. These efforts will likely be worthwhile given the escalating costs of fossil fuel. Because of the huge potential, some states have taken aggressive steps to become the manufacturing and employment base for fuel cell technology.
California has established the Stationary Fuel Cell Collaborative, with a core group composed of state, federal, and non-government agencies to encourage a coordinated strategy. Industry is engaged through an advisory panel. Several years ago, the state of Ohio committed $103 million to establish a manufacturing and employment base for fuel cell technology. All of this activity affirms the strong interest in high-temperature fuel cells as the next generation of electricity and thermal products. Many believe high-temperature fuel cell technology will become an integral strategy for central power production of electricity and transportation fuels, and a hybrid configuration is expected to provide hoteling or propulsive power for ships, locomotives, long-distance trucks, and civil aircraft. In all their potential applications, whether residential, commercial, industrial, or institutional, in distributed generation or in central power plants, high-temperature fuel cells indeed portend a profound change in the manner by which power will be generated in the decades to come.

Sunday, 5 April 2015

Engineering projects: Research and consultancy for programming, Physical modelling and simulation, Report publishing


MATLAB® is a high-performance technical computing language. It has an incredibly rich variety of functions and is often referred to as the Bible due to its vast programming capabilities. In MATLAB, computation, visualization, and programming are integrated such that data can be expressed in familiar mathematical notation. Evolved over many years with constant input from its users, it is now widely used as a programming language for scientific and technical computation. One can perform powerful operations in MATLAB by using simple commands; hence, writing programs in MATLAB is easier than in other high-level languages such as FORTRAN or C. Users can even build their own set of functions for a particular application. It is an interactive system in which the basic data element is an array. MATLAB is used as a standard tool in various introductory and advanced courses in almost all streams of engineering. It is used as a tool for analysis, modeling and simulation, and for highly productive research and development activities. It can be used for functions such as mathematical computation, algorithm development, data acquisition, modeling, simulation, prototyping, data analysis, exploration, visualization, and engineering graphics development, as well as building graphical user interfaces. SIMULINK is a software package for modeling, simulating, and analysing dynamic systems. MATLAB and SIMULINK are integrated, and one can simulate, analyse, or revise models in either environment. MATLAB also features a family of add-on, application-specific collections of functions called toolboxes, which extend both MATLAB and SIMULINK. These toolboxes can be used for modeling and simulation of specialized technology in a real-time environment. This book attempts to train engineering students of different streams to use the functions and toolboxes of MATLAB and SIMULINK for the study, design, and analysis of different electrical circuits and systems.
All these toolboxes can be used to build a real-time prototype of the system. Here you will speak directly to the expert, find a solution to your need or problem, and save your valuable time and effort for future endeavours. Genuine advice and consultancy regarding an engineering project is provided here at reasonable consultancy charges. The expert himself has vast experience of 15 years and has executed more than 500 projects at various international universities as a professional. For selected projects, a report can also be provided at reasonable charges on request.

Saturday, 4 April 2015

Switch Reluctance Motor drive for real time applications


The switched reluctance motor (SRM) is a type of stepper motor, an electric motor that runs by reluctance torque. Unlike common DC motor types, power is delivered to windings in the stator (case) rather than the rotor. This greatly simplifies mechanical design, as power does not have to be delivered to a moving part, but it complicates the electrical design, as some sort of switching system needs to be used to deliver power to the different windings. With modern electronic devices, precisely timed switching is not a problem, and the SRM is a popular design for modern stepper motors. Its main drawback is torque ripple. An alternate use of the same mechanical design is as a generator when driven mechanically: the load is switched to the coils in sequence to synchronize the current flow with the rotation. Such generators can be run at much higher speeds than conventional types, as the armature can be made as one piece of magnetisable material, a simple slotted cylinder. In this case the abbreviation SRM is extended to mean Switched Reluctance Machine, although SRG, Switched Reluctance Generator, is also used. A topology that is both motor and generator is useful for starting the prime mover, as it saves a dedicated starter motor. The synchronous reluctance motor (SynRM) has many advantages over other ac motors. For example, its structure is simple and rugged. In addition, its rotor does not have any winding or magnetic material. Until about twenty years ago, the SynRM was regarded as inferior to other types of ac motors due to its lower average torque and larger torque pulsation. Recently, many researchers have proposed several methods to improve the performance of the motor and drive system. In fact, the SynRM has been shown to be suitable for ac drive systems for several reasons. For example, it is not necessary to compute the slip of the SynRM as it is with the induction motor. As a result, there is no parameter sensitivity problem.
In addition, it does not require any permanent magnet material as the permanent magnet synchronous motor does. The sensorless drive is becoming more and more popular for synchronous reluctance motors, the major reason being that a sensorless drive saves space and reduces cost. Generally speaking, there are two major methods to achieve a sensorless drive system: vector control and direct torque control. Although most researchers focus on vector control for a sensorless synchronous reluctance drive, direct torque control is simpler. In direct torque control, the plane of the voltage vectors is divided into six or twelve sectors, and an optimal switching strategy is defined for each sector. The purpose of direct torque control is to restrict the torque error and the stator flux error within given hysteresis bands. After executing hysteresis control, a switching pattern is selected to generate the required torque and flux of the motor. A closed-loop drive system is thus obtained. The SRM has wound field coils as in a DC motor for the stator windings. The rotor, however, has no magnets or coils attached. It is a solid salient-pole rotor (having projecting magnetic poles) made of soft magnetic material (often laminated steel). When power is applied to the stator windings, the rotor's magnetic reluctance creates a force that attempts to align the rotor pole with the nearest stator pole. In order to maintain rotation, an electronic control system switches on the windings of successive stator poles in sequence so that the magnetic field of the stator "leads" the rotor pole, pulling it forward. Rather than using a troublesome high-maintenance mechanical commutator to switch the winding current as in traditional motors, the switched reluctance motor uses an electronic position sensor to determine the angle of the rotor shaft and solid state electronics to switch the stator windings, which also offers the opportunity for dynamic control of pulse timing and shaping.
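The direct torque control loop described above, with two hysteresis comparators feeding a per-sector switching table, can be sketched in a few lines. The six-sector table entries below are illustrative voltage-vector numbers, not a validated pattern for any particular motor, and the comparator is a simplified three-level version:

```python
# Minimal sketch of direct torque control selection logic.
# Table values are assumed for illustration only.

def hysteresis(error, band, previous):
    """Three-level comparator: +1 increase, -1 decrease, else hold."""
    if error > band:
        return 1
    if error < -band:
        return -1
    return previous

def select_vector(flux_cmp, torque_cmp, sector):
    """Pick an inverter voltage vector from comparator outputs and sector 0-5."""
    table = {
        (1, 1):  [2, 3, 4, 5, 6, 1],  # raise flux, raise torque
        (1, -1): [6, 1, 2, 3, 4, 5],  # raise flux, lower torque
        (0, 1):  [3, 4, 5, 6, 1, 2],  # lower flux, raise torque
        (0, -1): [5, 6, 1, 2, 3, 4],  # lower flux, lower torque
    }
    if torque_cmp == 0:
        return 0  # zero vector: hold torque
    return table[(flux_cmp, torque_cmp)][sector]

t_cmp = hysteresis(error=0.8, band=0.5, previous=0)  # torque too low -> +1
print(select_vector(flux_cmp=1, torque_cmp=t_cmp, sector=0))
```

Each control period the measured torque and flux errors run through the comparators, the stator flux angle determines the sector, and the table lookup picks the next inverter state, which is the closed loop the text describes.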
This differs from the apparently similar induction motor which also has windings that are energized in a rotating phased sequence, in that the magnetization of the rotor is static (a salient pole that is made 'North' remains so as the motor rotates) while an induction motor has slip, and rotates at slightly less than synchronous speed. This absence of slip makes it possible to know the rotor position exactly, and the motor can be stepped arbitrarily slowly.
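The position-synchronized switching that both paragraphs describe, energizing the stator phase ahead of the rotor pole, reduces to a lookup from rotor angle to phase. A toy sketch for an idealized 3-phase machine with equal conduction spans (real SRM firing angles are tuned per design, so treat this as an assumption):

```python
# Sketch of switched reluctance motor commutation: energize the
# phase whose stator pole the rotor is approaching. An idealized
# 3-phase, equal-span pattern is assumed for illustration.

def active_phase(rotor_angle_deg, n_phases=3):
    """Return the phase index (0..n_phases-1) to energize."""
    span = 360.0 / n_phases
    return int(rotor_angle_deg % 360 // span)

# Sampling the rotor every 60 degrees shows the firing sequence:
sequence = [active_phase(a) for a in range(0, 360, 60)]
print(sequence)
```

Because the mapping from angle to phase is deterministic (no slip), the same table can be stepped arbitrarily slowly, which is the point made above about knowing the rotor position exactly.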

BLDC motor Drive and applications in real time systems


Before there were brushless DC motors there were brush DC motors, which were brought on in part to replace the less efficient AC induction motors that came before. The brush DC motor was invented all the way back in 1856 by the famed German inventor and industrialist Ernst Werner von Siemens. Von Siemens is so famous that the international standard unit of electrical conductance is named after him. Von Siemens studied electrical engineering after leaving the army and produced many contributions to the world of electrical engineering, including the first electric elevator in 1880. Von Siemens's brush DC motor was fairly rudimentary and was improved upon by Harry Ward Leonard, who nearly perfected the first effective motor control system near the end of the 19th century. This system used a rheostat to control the current in the field winding, which adjusted the output voltage of the DC generator, which in turn adjusted the motor speed. The Ward Leonard system remained in place until 1960, when the Electronic Regulator Company's thyristor devices produced solid state controllers that could convert AC power to rectified DC power more directly. It supplanted the Ward Leonard system due to its simplicity and efficiency.

Advent of Brushless DC Motors

Once the Electronic Regulator Company maximized the efficiency of the brush DC motor, the door was opened for an even more efficient motor device. Brushless DC motors first appeared in 1962, when T.G. Wilson and P.H. Trickey unveiled what they called "a DC machine with solid state commutation." The key element of brushless DC motors, as opposed to brush DC motors, is that they require no physical commutator, a revolutionary difference. As the device was refined and developed, it became a popular choice for special applications such as computer disk drives, robotics and aircraft.
In fact, brushless DC motors are used in these devices today, fifty years later, so great is their effectiveness. The reason these motors were such a great choice for these devices is that brush wear was a big problem in them, either because of the intense demands of the application or, in the case of aircraft, because of low humidity. Because brushless DC motors had no brushes that could wear out, they represented a great leap forward in technology for these types of devices. The problem was that, as reliable as they were, these early brushless DC motors were not able to generate a great deal of power.

Modern Brushless DC Motors

That all changed in the 1980s, when permanent magnet materials became readily available. The use of permanent magnets, combined with high voltage transistors, enabled brushless DC motors to generate as much power as the old brush DC motors, if not more. Near the end of the 1980s, Robert E. Lordo of the POWERTEC Industrial Corporation unveiled the first large brushless DC motors, which had at least ten times the power of the earlier brushless DC motors. Today, there are probably no major motor manufacturers that do not produce brushless DC motors capable of high power jobs. Naturally, NMB Tech offers a wide variety of brushless DC motors for you to choose from, in sizes from 15 mm to 65 mm in diameter and from 0.7 W to 329.9 W maximum output. If you're starting a new project that requires motors for its applications, you'll want to seriously consider using brushless DC motors. Industries with motor needs have relied on brushless DC motors for nearly fifty years, and there is every reason to believe that they will continue to do so for decades to come. Take a look at some brushless DC motors today. The brushless DC (BLDC) motor can be envisioned as a brush DC motor turned inside out, where the permanent magnets are on the rotor and the windings are on the stator.
As a result, there are no brushes and commutators in this motor, and all of the disadvantages associated with the sparking of brush DC motors are eliminated. This motor is referred to as a "DC" motor because its coils are driven by a DC power source which is applied to the various stator coils in a predetermined sequential pattern. This process is known as commutation. However, "BLDC" is really a misnomer, since the motor is effectively an AC motor: the current in each coil alternates from positive to negative during each electrical cycle. The stator is typically a salient pole structure which is designed to produce a trapezoidal back-EMF waveshape that matches the applied commutated voltage waveform as closely as possible. However, this is very hard to do in practice, and the resulting back-EMF waveform often looks more sinusoidal than trapezoidal. For this reason, many of the control techniques used with a PMSM motor (such as Field Oriented Control) can equally be applied to a BLDC motor. Another misconception about the BLDC motor is related to how it is driven. Unlike an open-loop stepper application, where the rotor position is determined by which stator coil is driven, in a BLDC motor which stator coil is driven is determined by the rotor position. The stator flux vector position must be synchronized to the rotor flux vector position (not the other way around) in order to obtain smooth operation of the motor. To accomplish this, knowledge of the rotor position is required in order to determine which stator coils to energize. Several techniques exist to do this, but the most popular is to monitor the rotor position using Hall-effect sensors. Unfortunately, these sensors and their associated connectors and harnesses result in increased system cost and reduced reliability. In an effort to mitigate these issues, several techniques have been developed to eliminate these sensors, resulting in sensorless operation.
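The rotor-position-to-coil mapping described above is usually implemented as a six-step commutation table indexed by the three Hall sensor outputs. The Hall-code-to-phase mapping below is one common convention used for illustration; real motors differ in sensor placement and winding order, so treat the specific entries as assumptions:

```python
# Sketch: six-step BLDC commutation from Hall-effect sensor states.
# Mapping (H3, H2, H1) -> which two phase legs conduct; entries
# follow one common convention and are illustrative only.
COMMUTATION = {
    (1, 0, 1): ("A+", "B-"),
    (1, 0, 0): ("A+", "C-"),
    (1, 1, 0): ("B+", "C-"),
    (0, 1, 0): ("B+", "A-"),
    (0, 1, 1): ("C+", "A-"),
    (0, 0, 1): ("C+", "B-"),
}

def drive(halls):
    """Return the (high-side, low-side) phase legs for a Hall state."""
    return COMMUTATION[halls]

print(drive((1, 0, 1)))
```

The key property the text emphasizes is visible in the code: the rotor position (Hall state) selects the stator coils, never the other way around.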
Most of these techniques are based upon extracting position information from the back-EMF waveforms of the stator windings while the motor is spinning. However, techniques based on back-EMF sensing fall apart when the motor is spinning slowly or at a standstill, since the back-EMF waveforms are faint or non-existent. As a result, new techniques are constantly being developed which obtain rotor position information from other signals at low or zero speed. BLDC motors reign supreme in efficiency ratings, where values in the mid-nineties percent range are routinely obtained. Current research into new amorphous core materials is pushing this number even higher; ninety-six percent efficiency in the 100 W range has been reported. They also compete for the title of fastest motor in the world, with speeds on some motors reaching several hundred thousand RPM (400,000 RPM has been reported in one application). The most common BLDC motor topology utilizes a stator structure consisting of three phases. As a result, a standard 6-transistor inverter is the most commonly used power stage, as shown in the diagram. Depending on the operational requirements (sensored vs. sensorless, commutated vs. sinusoidal, PWM vs. SVM, etc.) there are many different ways to drive the transistors to achieve the desired goal, too numerous to cover here. This places a significant requirement on the flexibility of the PWM generator, which is typically located in the microcontroller.
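The back-EMF method mentioned above commonly keys commutation off the zero crossings of the floating (undriven) phase. A minimal sketch of the detection step, run here on a synthetic sinusoidal back-EMF rather than real sampled data:

```python
import math

# Sketch of sensorless position sensing: find zero crossings of
# the floating phase's back-EMF. Samples below are synthetic.

def zero_crossings(samples):
    """Indices where consecutive back-EMF samples change sign."""
    crossings = []
    for i in range(1, len(samples)):
        if samples[i - 1] * samples[i] < 0:
            crossings.append(i)
    return crossings

# Two electrical cycles, 20 samples per cycle:
bemf = [math.sin(2 * math.pi * k / 20) for k in range(40)]
print(zero_crossings(bemf))
```

Each detected crossing marks a known rotor angle, from which the next commutation instant is timed. The code also makes the low-speed failure mode obvious: at standstill every sample is near zero and no sign change can be detected, which is exactly why the text says these techniques fall apart at low speed.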

Thursday, 2 April 2015

Smart grid future grid for power distribution and monitoring


A smart grid is a modernized electrical grid that uses analog or digital information and communications technology to gather and act on information, such as information about the behaviors of suppliers and consumers, in an automated fashion to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. Electronic power conditioning and control of the production and distribution of electricity are important aspects of the smart grid. "Smart grid" generally refers to a class of technology being used to bring utility electricity delivery systems into the 21st century, using computer-based remote control and automation. These systems are made possible by two-way communication technology and computer processing that has been used for decades in other industries. They are beginning to be used on electricity networks, from the power plants and wind farms all the way to the consumers of electricity in homes and businesses. They offer many benefits to utilities and consumers, mostly seen in big improvements in energy efficiency on the electricity grid and in energy users' homes and offices. For a century, utility companies have had to send workers out to gather much of the data needed to provide electricity. The workers read meters, look for broken equipment and measure voltage, for example. Most of the devices utilities use to deliver electricity have yet to be automated and computerized. Now, many options and products are being made available to the electricity industry to modernize it. The "grid" amounts to the networks that carry electricity from the plants where it is generated to consumers. The grid includes wires, substations, transformers, switches and much more. Much in the way that a "smart" phone these days means a phone with a computer in it, smart grid means "computerizing" the electric utility grid.
It includes adding two-way digital communication technology to devices associated with the grid. Each device on the network can be given sensors to gather data (power meters, voltage sensors, fault detectors, etc.), plus two-way digital communication between the device in the field and the utility's network operations center. A key feature of the smart grid is automation technology that lets the utility adjust and control each individual device, or millions of devices, from a central location.

Features of the smart grid

The smart grid represents the full suite of current and proposed responses to the challenges of electricity supply. Because of the diverse range of factors there are numerous competing taxonomies and no agreement on a universal definition. Nevertheless, one possible categorisation is given here.

Reliability

The smart grid will make use of technologies, such as state estimation, that improve fault detection and allow self-healing of the network without the intervention of technicians. This will ensure a more reliable supply of electricity and reduced vulnerability to natural disasters or attack. Although multiple routes are touted as a feature of the smart grid, the old grid also featured multiple routes. Initial power lines in the grid were built using a radial model; later, connectivity was guaranteed via multiple routes, referred to as a network structure. However, this created a new problem: if the current flow or related effects across the network exceed the limits of any particular network element, it could fail, and the current would be shunted to other network elements, which may eventually fail also, causing a domino effect. See power outage. A technique to prevent this is load shedding by rolling blackout or voltage reduction (brownout).
The economic impact of improved grid reliability and resilience is the subject of a number of studies and can be calculated, for US locations, using a US DOE funded methodology and at least one calculation tool.

Flexibility in network topology

Next-generation transmission and distribution infrastructure will be better able to handle possible bidirectional energy flows, allowing for distributed generation such as from photovoltaic panels on building roofs, but also the use of fuel cells, charging to/from the batteries of electric cars, wind turbines, pumped hydroelectric power, and other sources. Classic grids were designed for one-way flow of electricity, but if a local sub-network generates more power than it is consuming, the reverse flow can raise safety and reliability issues.[14] A smart grid aims to manage these situations.

Efficiency

Numerous contributions to the overall improvement of the efficiency of energy infrastructure are anticipated from the deployment of smart grid technology, in particular including demand-side management, for example turning off air conditioners during short-term spikes in electricity price, reducing the voltage when possible on distribution lines through Voltage/VAR Optimization (VVO), eliminating truck-rolls for meter reading, and reducing truck-rolls by improved outage management using data from Advanced Metering Infrastructure systems. The overall effect is less redundancy in transmission and distribution lines, and greater utilization of generators, leading to lower power prices.

Load adjustment/Load balancing

The total load connected to the power grid can vary significantly over time. Although the total load is the sum of many individual choices of the clients, the overall load is not a stable, slowly varying quantity: if a popular television program starts, millions of televisions will draw current instantly.
Traditionally, to respond to a rapid increase in power consumption, faster than the start-up time of a large generator, some spare generators are put on a dissipative standby mode. A smart grid may warn all individual television sets, or another larger customer, to reduce the load temporarily (to allow time to start up a larger generator) or continuously (in the case of limited resources). Using mathematical prediction algorithms it is possible to predict how many standby generators need to be used to reach a certain failure rate. In the traditional grid, the failure rate can only be reduced at the cost of more standby generators. In a smart grid, the load reduction by even a small portion of the clients may eliminate the problem.

Peak curtailment/leveling and time of use pricing

To reduce demand during the high cost peak usage periods, communications and metering technologies inform smart devices in the home and business when energy demand is high, and track how much electricity is used and when it is used. This also gives utility companies the ability to reduce consumption by communicating to devices directly in order to prevent system overloads. Examples would be a utility reducing the usage of a group of electric vehicle charging stations or shifting the temperature set points of air conditioners in a city. To motivate consumers to cut back use and perform what is called peak curtailment or peak leveling, prices of electricity are increased during high demand periods and decreased during low demand periods.[7] It is thought that consumers and businesses will tend to consume less during high demand periods if it is possible for consumers and consumer devices to be aware of the high price premium for using electricity at peak periods. This could mean making trade-offs such as cycling air conditioners on and off or running the dishwasher at 9 pm instead of 5 pm.
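The time-of-use trade-off described above is simple arithmetic once tariffs are known. A sketch with assumed tariff figures and an assumed 2 kWh appliance run (illustrative numbers, not any utility's actual rates):

```python
# Illustrative time-of-use cost comparison: shifting a 2 kWh
# appliance run from the 5 pm peak to 9 pm off-peak.
# Tariff figures are assumed for demonstration.

PEAK_RATE = 0.30      # $/kWh during high-demand hours
OFF_PEAK_RATE = 0.12  # $/kWh during low-demand hours

def run_cost(energy_kwh, rate):
    """Cost of one appliance run at the given tariff."""
    return energy_kwh * rate

peak_cost = run_cost(2.0, PEAK_RATE)
off_peak_cost = run_cost(2.0, OFF_PEAK_RATE)
saving = peak_cost - off_peak_cost

print(f"saving per run: ${saving:.2f}")
```

The per-run saving is small, which is why proponents expect the effect to come from many devices responding automatically rather than from individual consumers watching prices.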
When businesses and consumers see a direct economic benefit of using energy at off-peak times, the theory is that they will include the energy cost of operation in their consumer device and building construction decisions and hence become more energy efficient. According to proponents of smart grid plans, this will reduce the amount of spinning reserve that electric utilities have to keep on stand-by, as the load curve will level itself through a combination of "invisible hand" free-market capitalism and central control of a large number of devices by power management services that pay consumers a portion of the peak power saved by turning their devices off.

Sustainability

The improved flexibility of the smart grid permits greater penetration of highly variable renewable energy sources such as solar power and wind power, even without the addition of energy storage. Current network infrastructure is not built to allow for many distributed feed-in points, and typically even if some feed-in is allowed at the local (distribution) level, the transmission-level infrastructure cannot accommodate it. Rapid fluctuations in distributed generation, such as due to cloudy or gusty weather, present significant challenges to power engineers who need to ensure stable power levels through varying the output of the more controllable generators such as gas turbines and hydroelectric generators. Smart grid technology is for this reason a necessary condition for very large amounts of renewable electricity on the grid.

Market-enabling

The smart grid allows for systematic communication between suppliers (their energy price) and consumers (their willingness-to-pay), and permits both the suppliers and the consumers to be more flexible and sophisticated in their operational strategies. Only the critical loads will need to pay the peak energy prices, and consumers will be able to be more strategic in when they use energy.
Generators with greater flexibility will be able to sell energy strategically for maximum profit, whereas inflexible generators such as base-load steam turbines and wind turbines will receive a varying tariff based on the level of demand and the status of the other generators currently operating. The overall effect is a signal that rewards energy efficiency, and energy consumption that is sensitive to the time-varying limitations of the supply. At the domestic level, appliances with a degree of energy storage or thermal mass (such as refrigerators, heat banks, and heat pumps) will be well placed to 'play' the market and seek to minimise energy cost by adapting demand to the lower-cost energy support periods. This is an extension of the dual-tariff energy pricing mentioned above.

Demand response support

Demand response support allows generators and loads to interact in an automated fashion in real time, coordinating demand to flatten spikes. Eliminating the fraction of demand that occurs in these spikes eliminates the cost of adding reserve generators, cuts wear and tear and extends the life of equipment, and allows users to cut their energy bills by telling low-priority devices to use energy only when it is cheapest. Currently, power grid systems have varying degrees of communication within control systems for their high value assets, such as generating plants, transmission lines, substations and major energy users. In general, information flows one way, from the users and the loads they control back to the utilities. The utilities attempt to meet the demand and succeed or fail to varying degrees (brownout, rolling blackout, uncontrolled blackout). The total amount of power demanded by users can have a very wide probability distribution, which requires spare generating plants in standby mode to respond to rapidly changing power usage.
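The demand-response idea of telling low-priority devices to wait can be sketched as a priority-based admission scheduler. The load names, power figures and the 10 kW capacity below are invented for illustration:

```python
# Sketch of demand-response coordination: low-priority loads are
# deferred whenever total demand would exceed available capacity.
# All names and figures are assumed example data.

CAPACITY_KW = 10.0

loads = [
    {"name": "fridge", "kw": 0.8, "priority": "high"},
    {"name": "heating", "kw": 3.0, "priority": "high"},
    {"name": "ev_charger", "kw": 7.0, "priority": "low"},
    {"name": "water_heater", "kw": 2.5, "priority": "low"},
]

def schedule(loads, capacity_kw):
    """Admit high-priority loads first, then low-priority if room remains."""
    running, deferred, total = [], [], 0.0
    ordered = sorted(loads, key=lambda l: l["priority"] != "high")
    for load in ordered:
        if total + load["kw"] <= capacity_kw:
            running.append(load["name"])
            total += load["kw"]
        else:
            deferred.append(load["name"])
    return running, deferred

running, deferred = schedule(loads, CAPACITY_KW)
print("running:", running, "deferred:", deferred)
```

Deferring the one large low-priority load flattens the spike without touching essential loads, which is the mechanism by which demand response avoids adding reserve generation.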
This one-way flow of information is expensive; the last 10% of generating capacity may be required as little as 1% of the time, and brownouts and outages can be costly to consumers. Latency of the data flow is a major concern, with some early smart meter architectures allowing as much as 24 hours of delay in receiving the data, preventing any possible reaction by either supplying or demanding devices.

Platform for advanced services

As with other industries, use of robust two-way communications, advanced sensors, and distributed computing technology will improve the efficiency, reliability and safety of power delivery and use. It also opens up the potential for entirely new services or improvements on existing ones, such as fire monitoring and alarms that can shut off power, make phone calls to emergency services, etc.

Provision megabits, control power with kilobits, sell the rest

The amount of data required to perform monitoring and to switch one's appliances off automatically is very small compared with that already reaching even remote homes to support voice, security, Internet and TV services. Many smart grid bandwidth upgrades are paid for by over-provisioning to also support consumer services, and subsidizing the communications with energy-related services, or subsidizing the energy-related services (such as higher rates during peak hours) with communications. This is particularly true where governments run both sets of services as a public monopoly. Because power and communications companies are generally separate commercial enterprises in North America and Europe, it has required considerable government and large-vendor effort to encourage various enterprises to cooperate. Some, like Cisco, see opportunity in providing devices to consumers very similar to those they have long been providing to industry.[18] Others, such as Silver Spring Networks or Google, are data integrators rather than vendors of equipment.
While the AC power control standards suggest powerline networking would be the primary means of communication among smart grid and home devices, the bits may not initially reach the home via Broadband over Power Lines (BPL) but by fixed wireless. The number of applications that can be used on the smart grid once the data communications technology is deployed is growing as fast as inventive companies can create and produce them. Benefits include enhanced cyber-security, support for intermittent sources of electricity such as wind and solar power, and even integration of electric vehicles into the grid. The companies making smart grid technology or offering such services include technology giants, established communication firms and even brand-new technology firms.

Wednesday, 1 April 2015

Superconductivity applications in Power System Engineering


Superconductors differ fundamentally in quantum physics behavior from conventional materials in the manner by which electrons, or electric currents, move through the material. It is these differences that give rise to the unique properties and performance benefits that differentiate superconductors from all other known conductors.

Unique Properties
• Zero resistance to direct current
• Extremely high current carrying density
• Extremely low resistance at high frequencies
• Extremely low signal dispersion
• High sensitivity to magnetic field
• Exclusion of externally applied magnetic field
• Rapid single flux quantum transfer
• Close to speed of light signal transmission

Zero resistance and high current density have a major impact on electric power transmission and also enable much smaller or more powerful magnets for motors, generators, energy storage, medical equipment and industrial separations. Low resistance at high frequencies and extremely low signal dispersion are key aspects in microwave components, communications technology and several military applications. Low resistance at higher frequencies also substantially reduces the challenges to miniaturization brought about by resistive, or I²R, heating. The high sensitivity of superconductors to magnetic field provides a unique sensing capability, in many cases 1000x superior to today’s best conventional measurement technology. Magnetic field exclusion is important in multi-layer electronic component miniaturization, provides a mechanism for magnetic levitation and enables magnetic field containment of charged particles. The final two properties form the basis for digital electronics and high-speed computing well beyond the theoretical limits projected for semiconductors. All of these materials properties have been extensively demonstrated throughout the world. In 1911, H. K.
Onnes, a Dutch physicist, discovered superconductivity by cooling mercury metal to extremely low temperature and observing that the metal exhibited zero resistance to electric current. Prior to 1973, many other metals and metal alloys were found to be superconductors at temperatures below 23.2K. These became known as Low Temperature Superconductor (LTS) materials. Since the 1960s a niobium-titanium (Nb-Ti) alloy has been the material of choice for commercial superconducting magnets. More recently, a brittle niobium-tin intermetallic material has emerged as an excellent alternative for achieving even higher magnetic field strength. In 1986, J. G. Bednorz and K. A. Müller discovered oxide-based ceramic materials that demonstrated superconducting transition temperatures as high as 35K. This was quickly followed in early 1987 by the announcement by C. W. Chu of a cuprate superconductor functioning above 77K, the boiling point of liquid nitrogen. Since then, extensive research worldwide has uncovered many more oxide-based superconductors with potential manufacturability benefits and critical temperatures as high as 135K. A superconducting material with a critical temperature above 23.2K is known as a High Temperature Superconductor (HTS), despite the continuing need for cryogenic refrigeration in any application.

Challenges
• Cost
• Refrigeration
• Reliability
• Acceptance

Forty years of development and commercialization of applications involving LTS materials have demonstrated that a superconductor approach works best when it represents a unique solution to the need. Alternatively, as the cost of the superconductor will always be substantially higher than that of a conventional conductor, it must bring overwhelming cost effectiveness to the system.
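The resistive, or I²R, heating mentioned above is the loss mechanism that zero-resistance superconductors eliminate for direct current. A one-line calculation makes the point; the current and resistance values are hypothetical.

```python
# Simple illustration of resistive (I²R) heating, the loss mechanism that
# zero-resistance superconductors eliminate for DC. Values are hypothetical.

def resistive_loss_watts(current_amps, resistance_ohms):
    """Joule heating: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# A 1000 A feed through 0.01 ohm of conventional conductor:
print(resistive_loss_watts(1000, 0.01))   # → 10000.0 W dissipated as heat

# The same current through an ideal superconductor (R = 0):
print(resistive_loss_watts(1000, 0.0))    # → 0.0
```

Because the loss grows with the square of the current, high-current applications such as magnets and transmission cables benefit most.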
The advent of HTS has changed the dynamic of refrigeration by permitting smaller and more efficient system cooling for some applications. Challenges of design, integration of superconducting and cryogenic technologies, demonstration of system cost benefits, and long-term reliability must all be met before superconductivity delivers on its current promise of major societal benefits and makes substantial commercial inroads into new applications.

About Superconductivity

Superconductivity is widely regarded as one of the great scientific discoveries of the 20th century. This miraculous property causes certain materials, at low temperatures, to lose all resistance to the flow of electricity. This state of “losslessness” enables a range of innovative technology applications. At the dawn of the 21st century, superconductivity forms the basis for new commercial products that are transforming our economy and daily life.

Current Commercial Applications
• Magnetic Resonance Imaging (MRI)
• Nuclear Magnetic Resonance (NMR)
• High-energy physics accelerators
• Plasma fusion reactors
• Industrial magnetic separation of kaolin clay

The major commercial applications of superconductivity in the medical diagnostic, science and industrial processing fields listed above all involve LTS materials and relatively high-field magnets. Indeed, without superconducting technology most of these applications would not be viable. Several smaller applications utilizing LTS materials have also been commercialized, e.g. research magnets and magnetoencephalography (MEG). The latter is based on Superconducting Quantum Interference Device (SQUID) technology, which detects and measures the weak magnetic fields generated by the brain. The only substantive commercial products incorporating HTS materials are electronic filters used in wireless base stations. About 10,000 units have been installed in wireless networks worldwide to date. More detail on these applications is presented in subsequent sections.
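The LTS/HTS naming convention described above reduces to a single threshold: a critical temperature above 23.2 K makes a material "high temperature" by definition, even though cryogenic cooling is still required. A minimal sketch, with example Tc values drawn from the text (Nb-Ti roughly 9-10 K, the Bednorz-Müller oxide at 35 K, later cuprates up to 135 K):

```python
# Sketch of the LTS/HTS classification: any superconductor with a
# critical temperature above 23.2 K is conventionally called HTS.
# Example Tc values are approximate.

HTS_THRESHOLD_K = 23.2

def classify(critical_temp_k):
    """Label a superconductor by the conventional 23.2 K threshold."""
    return "HTS" if critical_temp_k > HTS_THRESHOLD_K else "LTS"

print(classify(9.8))    # Nb-Ti alloy      → LTS
print(classify(35.0))   # Bednorz-Müller oxide → HTS
print(classify(135.0))  # best cuprates    → HTS
```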
Emerging Applications

Superconductor-based products are extremely environmentally friendly compared to their conventional counterparts. They generate no greenhouse gases and are cooled by non-flammable liquid nitrogen (nitrogen comprises about 80% of our atmosphere) as opposed to conventional oil coolants that are both flammable and toxic. They are also typically at least 50% smaller and lighter than equivalent conventional units, which translates into economic incentives. These benefits have given rise to the ongoing development of many new applications in the following sectors:

Electric Power. Superconductors enable a variety of applications to aid our aging and heavily burdened electric power infrastructure - for example, in generators, transformers, underground cables, synchronous condensers and fault current limiters. The high power density and electrical efficiency of superconductor wire results in highly compact, powerful devices and systems that are more reliable, efficient, and environmentally benign.

Transportation. The rapid and efficient movement of people and goods, by land and by sea, poses important logistical, environmental, land use and other challenges. Superconductors are enabling a new generation of transport technologies including ship propulsion systems, magnetically levitated trains, and railway traction transformers.

Medicine. Advances in HTS promise more compact and less costly Magnetic Resonance Imaging (MRI) systems with superior imaging capabilities. In addition, magnetoencephalography (MEG), Magnetic Source Imaging (MSI) and magnetocardiography (MCG) enable non-invasive diagnosis of brain and heart functionality.

Industry. Large motors rated at 1000 HP and above consume 25% of all electricity generated in the United States. They offer a prime target for the use of HTS in substantially reducing electrical losses. Powerful magnets for water remediation, materials purification, and industrial processing are also in the demonstration stages.
Communications. Over the past decade, HTS filters have come into widespread use in cellular communications systems. They enhance signal-to-noise ratios, enabling reliable service with fewer, more widely-spaced cell towers. As the world moves from analog to all-digital communications, LTS chips offer dramatic performance improvements in many commercial and military applications.

Scientific Research. Using superconducting materials, today’s leading-edge scientific research facilities are pushing the frontiers of human knowledge - and pursuing breakthroughs that could lead to new techniques ranging from clean, abundant energy from nuclear fusion to computing at speeds much faster than the theoretical limit of silicon technology.

Since 10% to 15% of generated electricity is dissipated in resistive losses in transmission lines, the prospect of zero-loss superconducting transmission lines is appealing. In prototype superconducting transmission lines at Brookhaven National Laboratory, 1000 MW of power can be transported within an enclosure of diameter 40 cm. This amounts to transporting the entire output of a large power plant on one enclosed transmission line. This could be a fairly low-voltage DC transmission, compared to the large transformer banks and multiple high-voltage AC transmission lines on towers in conventional systems. The superconductor used in these prototype applications is usually niobium-titanium, and liquid helium cooling is required. Current experiments with power applications of high-temperature superconductors focus on uses of BSCCO in tape form and YBCO in thin-film form. Current densities above 10,000 amperes per square centimeter are considered necessary for practical power applications, and this threshold has been exceeded in several configurations.
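The 10-15% loss figure quoted above is what makes zero-loss lines appealing, and the scale of the recoverable energy is easy to estimate. The plant size and capacity factor below are illustrative assumptions, not utility data.

```python
# Rough arithmetic behind the appeal of superconducting transmission:
# if 10-15% of generated electricity is lost resistively in delivery,
# how much energy would a (near-)zero-loss line recover per year?
# Figures are illustrative assumptions.

def annual_loss_mwh(generated_mwh, loss_fraction):
    """Energy dissipated in transmission for a given loss fraction."""
    return generated_mwh * loss_fraction

generated = 8_760_000   # a 1000 MW plant running all year: 1000 MW * 8760 h
for frac in (0.10, 0.15):
    print(f"{frac:.0%} losses -> {annual_loss_mwh(generated, frac):,.0f} MWh/yr")
```

At the quoted loss range, a single large plant's output implies hundreds of thousands of megawatt-hours dissipated annually, which frames the economics of cryogenic cable systems.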

Micro Electro Mechanical Systems (MEMS)


Microelectromechanical systems (MEMS) (also written as micro-electro-mechanical, MicroElectroMechanical or microelectronic and microelectromechanical systems, and the related micromechatronics) is the technology of very small devices; it merges at the nano-scale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are also referred to as micromachines (in Japan) or micro systems technology – MST (in Europe). MEMS are separate and distinct from the hypothetical vision of molecular nanotechnology or molecular electronics. MEMS are made up of components between 1 and 100 micrometres in size (i.e. 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres (20 millionths of a metre) to a millimetre (i.e. 0.02 to 1.0 mm). They usually consist of a central unit that processes data (the microprocessor) and several components that interact with the surroundings, such as microsensors.[1] At these size scales, the standard constructs of classical physics are not always useful. Because of the large surface-area-to-volume ratio of MEMS, surface effects such as electrostatics and wetting dominate over volume effects such as inertia or thermal mass. Micro-Electro-Mechanical Systems, or MEMS, is a technology that in its most general form can be defined as miniaturized mechanical and electro-mechanical elements (i.e., devices and structures) that are made using the techniques of microfabrication. The critical physical dimensions of MEMS devices can vary from well below one micron on the lower end of the dimensional spectrum all the way to several millimeters. Likewise, the types of MEMS devices can vary from relatively simple structures having no moving elements to extremely complex electromechanical systems with multiple moving elements under the control of integrated microelectronics. The one main criterion of MEMS is that there are at least some elements having some sort of mechanical functionality, whether or not these elements can move.
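The claim that surface effects dominate at MEMS scales follows directly from geometry: for a sphere, the surface-area-to-volume ratio is 3/r, so it grows without bound as parts shrink. A quick calculation makes the scaling concrete.

```python
# Why surface effects dominate at MEMS scales: for a sphere of radius r,
# the surface-area-to-volume ratio is 3/r, so shrinking a part from
# centimetre scale to tens of micrometres multiplies the ratio a
# thousand-fold.

import math

def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere (equals 3 / radius)."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

print(surface_to_volume(0.01))    # 1 cm sphere  → 300 per metre
print(surface_to_volume(10e-6))   # 10 µm sphere → 300000 per metre
```

This is why electrostatic actuation and wetting, both surface phenomena, become practical forces at the microscale while inertia and thermal mass fade in importance.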
The term used to define MEMS varies in different parts of the world. In the United States they are predominantly called MEMS, while in some other parts of the world they are called “Microsystems Technology” or “micromachined devices”. While the functional elements of MEMS are miniaturized structures, sensors, actuators, and microelectronics, the most notable (and perhaps most interesting) elements are the microsensors and microactuators. Microsensors and microactuators are appropriately categorized as “transducers”, which are defined as devices that convert energy from one form to another. In the case of microsensors, the device typically converts a measured mechanical signal into an electrical signal. Over the past several decades MEMS researchers and developers have demonstrated an extremely large number of microsensors for almost every possible sensing modality including temperature, pressure, inertial forces, chemical species, magnetic fields, radiation, etc. Remarkably, many of these micromachined sensors have demonstrated performances exceeding those of their macroscale counterparts. That is, the micromachined version of, for example, a pressure transducer, usually outperforms a pressure sensor made using the most precise macroscale level machining techniques. Not only is the performance of MEMS devices exceptional, but their method of production leverages the same batch fabrication techniques used in the integrated circuit industry – which can translate into low per-device production costs, as well as many other benefits. Consequently, it is possible to not only achieve stellar device performance, but to do so at a relatively low cost level. Not surprisingly, silicon based discrete microsensors were quickly commercially exploited and the markets for these devices continue to grow at a rapid rate. 
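The transducer idea above, a microsensor converting a mechanical signal into an electrical one, always ends with electronics mapping the raw electrical reading back to engineering units. A minimal sketch of that last step for a pressure sensor follows; the linear calibration constants are invented for illustration (real devices are factory-calibrated).

```python
# Hedged sketch of the sensing side of a "transducer": a micromachined
# pressure sensor yields a raw ADC count, and a linear calibration maps
# it to pressure. The offset and scale below are invented constants.

def counts_to_kpa(adc_counts, offset_counts=512, counts_per_kpa=8.0):
    """Convert a raw ADC reading into pressure (kPa) with a linear model."""
    return (adc_counts - offset_counts) / counts_per_kpa

print(counts_to_kpa(512))    # → 0.0 kPa at the zero-pressure offset
print(counts_to_kpa(1312))   # → 100.0 kPa
```

Real calibrations may add temperature compensation and nonlinear terms, but the structure (raw signal in, physical quantity out) is the same.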
More recently, the MEMS research and development community has demonstrated a number of microactuators including: microvalves for control of gas and liquid flows; optical switches and mirrors to redirect or modulate light beams; independently controlled micromirror arrays for displays, microresonators for a number of different applications, micropumps to develop positive fluid pressures, microflaps to modulate airstreams on airfoils, as well as many others. Surprisingly, even though these microactuators are extremely small, they frequently can cause effects at the macroscale level; that is, these tiny actuators can perform mechanical feats far larger than their size would imply. For example, researchers have placed small microactuators on the leading edge of airfoils of an aircraft and have been able to steer the aircraft using only these microminiaturized devices. The real potential of MEMS starts to become fulfilled when these miniaturized sensors, actuators, and structures can all be merged onto a common silicon substrate along with integrated circuits (i.e., microelectronics). While the electronics are fabricated using integrated circuit (IC) process sequences (e.g., CMOS, Bipolar, or BICMOS processes), the micromechanical components are fabricated using compatible "micromachining" processes that selectively etch away parts of the silicon wafer or add new structural layers to form the mechanical and electromechanical devices. It is even more interesting if MEMS can be merged not only with microelectronics, but with other technologies such as photonics, nanotechnology, etc. This is sometimes called “heterogeneous integration.” Clearly, these technologies are filled with numerous commercial market opportunities. 
While more complex levels of integration are the future trend of MEMS technology, the present state-of-the-art is more modest and usually involves a single discrete microsensor, a single discrete microactuator, a single microsensor integrated with electronics, a multiplicity of essentially identical microsensors integrated with electronics, a single microactuator integrated with electronics, or a multiplicity of essentially identical microactuators integrated with electronics. Nevertheless, as MEMS fabrication methods advance, the promise is an enormous design freedom wherein any type of microsensor and any type of microactuator can be merged with microelectronics as well as photonics, nanotechnology, etc., onto a single substrate. This vision of MEMS whereby microsensors, microactuators and microelectronics and other technologies, can be integrated onto a single microchip is expected to be one of the most important technological breakthroughs of the future. This will enable the development of smart products by augmenting the computational ability of microelectronics with the perception and control capabilities of microsensors and microactuators. Microelectronic integrated circuits can be thought of as the "brains" of a system and MEMS augments this decision-making capability with "eyes" and "arms", to allow microsystems to sense and control the environment. Sensors gather information from the environment through measuring mechanical, thermal, biological, chemical, optical, and magnetic phenomena. The electronics then process the information derived from the sensors and through some decision making capability direct the actuators to respond by moving, positioning, regulating, pumping, and filtering, thereby controlling the environment for some desired outcome or purpose. 
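The sense-decide-actuate loop described above ("eyes", "brains", "arms") can be sketched as a toy closed-loop controller: read a value, compute an error against a setpoint, command the actuator. The proportional gain and setpoint are invented for illustration.

```python
# Toy closed-loop sketch of the sense-decide-actuate description:
# a sensed value is compared to a setpoint and an actuator command
# drives the system toward it. Gain and setpoint are invented.

def control_step(measured, setpoint, gain=0.5):
    """Proportional controller: actuator command from the sensed error."""
    return gain * (setpoint - measured)

# Simulate a controlled variable (e.g. chamber pressure) converging
# to the setpoint over successive sense-actuate cycles.
value, setpoint = 0.0, 10.0
for _ in range(8):
    value += control_step(value, setpoint)
print(round(value, 3))   # → 9.961, approaching the setpoint of 10.0
```

Integrated MEMS systems close exactly this kind of loop on-chip, with the microsensor supplying `measured` and the microactuator executing the command.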
Furthermore, because MEMS devices are manufactured using batch fabrication techniques similar to ICs, unprecedented levels of functionality, reliability, and sophistication can be placed on a small silicon chip at a relatively low cost. MEMS technology is extremely diverse and fertile, both in its expected application areas and in how the devices are designed and manufactured. Already, MEMS is revolutionizing many product categories by enabling complete systems-on-a-chip to be realized. There are numerous possible applications for MEMS and nanotechnology. As a breakthrough technology allowing unparalleled synergy between previously unrelated fields such as biology and microelectronics, many new MEMS and nanotechnology applications will emerge, expanding beyond that which is currently identified or known. Here are a few applications of current interest:

Biotechnology

MEMS and nanotechnology are enabling new discoveries in science and engineering, such as Polymerase Chain Reaction (PCR) microsystems for DNA amplification and identification, enzyme-linked immunosorbent assay (ELISA), capillary electrophoresis, electroporation, micromachined Scanning Tunneling Microscopes (STMs), biochips for detection of hazardous chemical and biological agents, and microsystems for high-throughput drug screening and selection.

Medicine

There are a wide variety of applications for MEMS in medicine. The first and by far the most successful application of MEMS in medicine (at least in terms of number of devices and market size) is MEMS pressure sensors, which have been in use for several decades. The market for these pressure sensors is extremely diverse and highly fragmented, with a few high-volume markets and many lower-volume ones. Some of the applications of MEMS pressure sensors in medicine follow. The largest market for MEMS pressure sensors in the medical sector is the disposable sensor used to monitor blood pressure in the IV lines of patients in intensive care.
These devices were first introduced in the early 1980s. They replaced other technologies that cost over $500 and carried a substantial recurring cost, since they had to be sterilized and recalibrated after each use. MEMS disposable pressure sensors are delivered pre-calibrated in a sterilized package from the factory at a cost of around $10. MEMS pressure sensors are used to measure intrauterine pressure during birth. The device is housed in a catheter that is placed between the baby's head and the uterine wall. During delivery, the baby's blood pressure is monitored for problems during the mother's contractions. MEMS pressure sensors are used in hospitals and ambulances as monitors of a patient’s vital signs, specifically the patient’s blood pressure and respiration. The MEMS pressure sensors in respiratory monitoring are used in ventilators to monitor the patient’s breathing. MEMS pressure sensors are used in eye surgery to measure and control the vacuum level used to remove fluid from the eye, which is cleaned of debris and replaced back into the eye during surgery. Special hospital beds for burn victims that employ inflatable mattresses use MEMS pressure sensors to regulate the pressure inside a series of individual inflatable chambers in the mattress. Sections of the mattress can be inflated as needed to reduce pain as well as improve patient healing. Physician’s office and hospital blood analyzers employ MEMS pressure sensors for barometric pressure correction in the analysis of concentrations of O2, CO2, calcium, potassium, and glucose in a patient's blood. MEMS pressure sensors are used in inhalers to monitor the patient’s breathing cycle and release the medication at the proper point in the breathing cycle for optimal effect. MEMS pressure sensors are used in kidney dialysis to monitor the inlet and outlet pressures of blood and the dialysis solution and to regulate the flow rates during the procedure.
MEMS pressure sensors are used in drug infusion pumps of many types to monitor the flow rate and detect obstructions and blockages that indicate the drug is not being properly delivered to the patient. The contribution to patient care of all these applications has been enormous. More recently, MEMS pressure sensors with wireless interrogation capability have been developed and are being marketed. These sensors can be implanted into a human body and the pressure can be measured using a remotely scanned wand. Another application is MEMS inertial sensors, specifically accelerometers and rate sensors, which are being used as activity sensors. Perhaps the foremost application of inertial sensors in medicine is in cardiac pacemakers, where they are used to help determine the optimum pacing rate for the patient based on activity level. MEMS devices are also starting to be employed in drug delivery devices, for both ambulatory and implantable applications. MEMS electrodes are also being used in neuro-signal detection and neuro-stimulation applications. A variety of biological and chemical MEMS sensors for invasive and non-invasive uses are beginning to be marketed. Lab-on-a-chip and miniaturized biochemical analytical instruments are being marketed as well.

Communications

High-frequency circuits are benefiting considerably from the advent of RF-MEMS technology. Electrical components such as inductors and tunable capacitors can be improved significantly compared to their integrated counterparts if they are made using MEMS and nanotechnology. With the integration of such components, the performance of communication circuits will improve, while the total circuit area, power consumption and cost will be reduced. In addition, the mechanical switch, as developed by several research groups, is a key component with huge potential in various RF and microwave circuits.
The demonstrated samples of mechanical switches have quality factors much higher than anything previously available. Another successful application of RF-MEMS is in resonators as mechanical filters for communication circuits.

Inertial Sensing

MEMS inertial sensors, specifically accelerometers and gyroscopes, are quickly gaining market acceptance. For example, MEMS accelerometers have displaced conventional accelerometers for crash air-bag deployment systems in automobiles. The previous technology approach used several bulky accelerometers made of discrete components mounted in the front of the car, with separate electronics near the air-bag, and cost more than $50 per device. MEMS technology has made it possible to integrate the accelerometer and electronics onto a single silicon chip at a cost of only a few dollars. These MEMS accelerometers are much smaller, more functional, lighter, more reliable,