Wednesday, 27 May 2015

3D printing technology and its applications


3D printing technology made its way into the technological world in 1986, but it did not gain importance until around 1990, and it was not widely known outside the worlds of engineering, architecture and manufacturing. 3D printing is also known as desktop fabrication; it can form objects from almost any material that can be obtained as a powder. To create an object you need a digital 3D model. You can scan a set of 3D images, draw the model using computer-aided design (CAD) software, or download one from the internet. The digital 3D model is usually saved in STL format and then sent to the printer, which "prints" the three-dimensional object layer by layer using equipment that works much like an ink-jet printer.

One of the most important applications of 3D printing is in the medical industry. With 3D printing, surgeons can produce mockups of the parts of a patient's body that need to be operated upon. 3D printing makes it possible to produce a part from scratch in just hours, allowing designers and developers to go from a flat screen to an exact part. Nowadays almost everything from aerospace components to toys is being built with the help of 3D printers. 3D printing can provide great savings on assembly costs because it can print already-assembled products. With 3D printing, companies can experiment with new ideas and numerous design iterations without extensive time or tooling expense, and can decide whether product concepts are worth allocating additional resources to. 3D printing could even challenge mass-production methods in the future. It is going to affect many industries, including the automotive, medical, business and industrial equipment, education, architecture, and consumer-product industries.

3D printing, or additive manufacturing, is a process of making three-dimensional solid objects from a digital file. The creation of a 3D-printed object is achieved using additive processes: the object is created by laying down successive layers of material until the entire object is complete. Each of these layers can be seen as a thinly sliced horizontal cross-section of the eventual object. It all starts with making a virtual design of the object you want to create. This virtual design is made in a CAD (computer-aided design) file using a 3D modeling program (for the creation of a totally new object) or with the use of a 3D scanner (to copy an existing object). A 3D scanner makes a 3D digital copy of an object. 3D scanners use different technologies to generate a 3D model, such as time-of-flight, structured/modulated light, volumetric scanning and many more. Recently, IT companies such as Microsoft and Google have enabled their hardware to perform 3D scanning; a great example is Microsoft's Kinect. This is a clear sign that future hand-held devices like smartphones will have integrated 3D scanners, and digitizing real objects into 3D models will become as easy as taking a picture. Prices of 3D scanners range from very expensive professional industrial devices to 30 USD DIY devices anyone can make at home.

A 3D printer is unlike a common printer: it builds the object in three dimensions, layer by layer, which is why the whole process is also called rapid prototyping. The resolution of current printers ranges from about 328 x 328 x 606 DPI (x, y, z) up to 656 x 656 x 800 DPI in ultra-HD mode, with an accuracy of 0.025 mm to 0.05 mm per inch, and build volumes up to 737 mm x 1257 mm x 1504 mm.
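Because the slicer turns the model into a stack of horizontal layers, a rough feel for build time follows directly from the layer count. The short sketch below is only illustrative: the layer height and seconds-per-layer figures are assumptions for the example, not specifications of any particular printer.

```python
# A minimal sketch of how the layer-by-layer process described above scales:
# the slicer divides the model height into horizontal layers, and total print
# time grows with the number of layers. All values are illustrative assumptions.

def estimate_layers_and_time(object_height_mm, layer_height_mm, seconds_per_layer):
    """Return (layer_count, print_hours) for a simple uniform-layer estimate."""
    layers = int(round(object_height_mm / layer_height_mm))
    hours = layers * seconds_per_layer / 3600.0
    return layers, hours

if __name__ == "__main__":
    # Example: a 60 mm tall part at 0.1 mm layers, ~45 s per layer (assumed).
    layers, hours = estimate_layers_and_time(60.0, 0.1, 45.0)
    print(f"{layers} layers, about {hours:.1f} hours of printing")
```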
The biggest drawback for the individual home user is still the high cost of a 3D printer. Another drawback is that it takes hours or even days to print a 3D model (depending on the complexity and resolution of the model). In addition, professional 3D software and 3D model design are also in a high cost range. Alternatively, there are already simplified 3D printers for hobbyists which are much cheaper, and the materials they use are also less expensive; however, these 3D printers for home use are not as accurate as commercial 3D printers.

Not all 3D printers use the same technology. There are several ways to print, and all of those available are additive, differing mainly in the way layers are built to create the final object. Some methods melt or soften material to produce the layers; selective laser sintering (SLS) and fused deposition modeling (FDM) are the most common technologies using this approach. Another method cures a photo-reactive resin with a UV laser or a similar power source, one layer at a time; the most common technology using this method is called stereolithography (SLA). To be more precise: since 2010, the American Society for Testing and Materials (ASTM) group "ASTM F42 – Additive Manufacturing" has developed a set of standards that classify additive manufacturing processes into seven categories, according to the Standard Terminology for Additive Manufacturing Technologies. These seven processes are: vat photopolymerisation, material jetting, binder jetting, material extrusion, powder bed fusion, sheet lamination, and directed energy deposition.

3D printing is also used for jewelry, art, architecture, fashion design and interior design.

Monday, 25 May 2015

How to file a patent, IPR in India


Intellectual property rights are the rights given to persons over the creations of their minds. They usually give the creator an exclusive right over the use of his or her creation for a certain period of time. The importance of intellectual property in India is well established at all levels: statutory, administrative and judicial. India ratified the agreement establishing the World Trade Organisation (WTO). This Agreement, inter alia, contains an Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which came into force on 1 January 1995. It lays down minimum standards for the protection and enforcement of intellectual property rights in member countries, which are required to promote effective and adequate protection of intellectual property rights with a view to reducing distortions and impediments to international trade. The obligations under the TRIPS Agreement relate to the provision of minimum standards of protection within member countries' legal systems and practices. The Agreement provides for norms and standards in respect of the following areas of intellectual property: patents, trade marks, copyrights, geographical indications and industrial designs.

Patents: The basic obligation in the area of patents is that inventions in all branches of technology, whether products or processes, shall be patentable if they meet the three tests of being new, involving an inventive step and being capable of industrial application. In addition to the general security exemption which applies to the entire TRIPS Agreement, specific exclusions are permissible from the scope of patentability for inventions whose commercial exploitation must be prevented to protect public order or morality, human, animal or plant life or health, or to avoid serious prejudice to the environment. Further, members may also exclude from patentability diagnostic, therapeutic and surgical methods for the treatment of humans or animals, as well as plants and animals other than micro-organisms, and essentially biological processes for the production of plants and animals.

Saturday, 23 May 2015

Project report writing: Plagiarism and academic honesty


1 Introduction
A technical report is a formal report designed to convey technical information in a clear and easily accessible format. It is divided into sections which allow different readers to access different levels of information. This guide explains the commonly accepted format for a technical report; explains the purposes of the individual sections; and gives hints on how to go about drafting and refining a report in order to produce an accurate, professional document.

11 Originality and plagiarism
Whenever you make use of other people's facts or ideas, you must indicate this in the text with a number which refers to an item in the list of references. Any phrases, sentences or paragraphs which are copied unaltered must be enclosed in quotation marks and referenced by a number. Material which is not reproduced unaltered should not be in quotation marks but must still be referenced. It is not sufficient to list the sources of information at the end of the report; you must indicate the sources of information individually within the report using the reference numbering system. Information that is not referenced is assumed to be either common knowledge or your own work or ideas; if it is not, then it is assumed to be plagiarised, i.e. you have knowingly copied someone else's words, facts or ideas without reference, passing them off as your own. This is a serious offence. If the person copied from is a fellow student, then this offence is known as collusion and is equally serious. Examination boards can, and do, impose penalties for these offences, ranging from loss of marks to disqualification from the award of a degree. This warning applies equally to information obtained from the Internet. It is very easy for markers to identify words and images that have been copied directly from web sites. If you do this without acknowledging the source of your information and putting the words in quotation marks, then your report will be sent to the Investigating Officer and you may be called before a disciplinary panel.

• Plagiarism refers to the unacknowledged borrowing of others' writing. When you use others' exact words without citing their work, you are guilty of plagiarizing, whether you "borrow" an entire report or a single sentence. In academic contexts, plagiarism is a form of academic dishonesty. In university classrooms, the teacher usually assumes that the work you turn in is your own original work. If you turn in work that is not your own or that was co-created with others, you are obliged to make that clear to your instructor. When you borrow language from other sources, you are expected to use quotation marks and provide a full bibliographical reference. Whenever you borrow graphics or quote passages, you are legally and ethically obliged to acknowledge that use, following appropriate conventions for documenting sources. If you have doubts about whether or not you are using your own or others' writing ethically and legally, ask your instructor.

• Plagiarism is misrepresenting someone else's work as your own: taking someone else's writing or ideas and passing them off as your own work. Plagiarism refers to stealing ideas as well as words, while copyright refers only to exact expression. Some acts of plagiarism are also copyright violations, but some are not. Some acts of plagiarism are covered by law (e.g., copyright violations), but other acts of plagiarism are a matter of institutional, professional or general ethics.
• In academic contexts, plagiarism is regarded as an issue of academic honesty. Academic honesty refers to the ethics, codes, and guidelines under which the academic community does its learning, its teaching, and its academic work. Those guidelines are important to the university's ability to achieve its educational mission: its goals for teaching, learning, research, and service.

How to avoid plagiarism
Plagiarism can sometimes be the result of poor note taking, or of paraphrasing without properly citing the reference. You can avoid plagiarism by:
• citing your references
• referencing correctly
• recording direct quotes and paraphrases correctly when note taking.

Quotes: When you use the exact words, ideas or images of another person, you are quoting the author. If you do not use quotation marks around the original author's direct words and cite the reference, you are plagiarising.

Paraphrasing: Paraphrasing is when you take someone else's concepts and put them into your own words without changing the original meaning. Even though you are not using the same words, you still need to state where the concepts came from.

Note taking: Poor note taking can lead to plagiarism. You should always take care to:
• record all reference information correctly
• use quotation marks exactly as in the original
• paraphrase correctly
• clearly distinguish your own ideas from the ideas of other authors and researchers.

Plagiarism is when you take (i) somebody else's idea, or (ii) somebody else's words, and use them such that you appear to be the original creator or author of the idea or words. Even if you change a few words of someone else's sentence, it is still plagiarism if the same idea is presented. Plagiarism is a form of academic misconduct that is prohibited by the Student Code of Conduct. Plagiarism is unacceptable in all academic work and all documents authored by you, including assignments and project reports. Since published documents are stored and accessed in public places, it is quite possible that a published paper, thesis or dissertation can be accused of plagiarism, perhaps years after it is published. When you write a thesis or dissertation that includes discussion of research results from other documents, plagiarism may creep in unintentionally. Therefore, it is particularly important that you recognize plagiarism and make special efforts to avoid it. Plagiarism can also have legal consequences: because of the Berne copyright convention, virtually all published material (including online, internet material) should be considered to have copyright protection, whether it carries a copyright notice or not.

Suggestions to help you avoid plagiarism:
• Take notes when you read. Do not copy complete sentences unless you want to quote the sentence.
• Wait some time (or a day) after you read the original source text to write your draft.
• Don't draft your paper with the original source text (or a photocopy) open next to you. Use your notes. Go back to the source later to check something you are unsure of.

You can certainly use other people's ideas and words in your writing, as long as you give them appropriate credit. There are established methods of giving credit to your source of ideas and words. This is discussed in the following section.

Examples:
• If you use exact words from another source, put quotation marks around them.
Example: According to Derrisen (2004), "since the flow of liquids in open channels is not subject to any pressure other than the weight of the liquid and atmospheric pressure on the surface, the theoretical analysis can be much simpler."

• It is not sufficient to use the citation alone if a direct quote is used. Incorrect example: Since the flow of liquids in open channels is not subject to any pressure other than the weight of the liquid and atmospheric pressure on the surface, the theoretical analysis can be much simpler (Derrisen, 2004).

• Changing a few words is not sufficient, since copying the 'writing style' is also plagiarism. Either use your own words or quote it. Incorrect example: Liquid flow in open channels is subject to pressure from the weight of the liquid and atmospheric pressure; therefore, the theoretical analysis is much easier (Derrisen, 2004).

• Quotes are not necessary if you just use the idea. However, a citation is still required. Warning: it is almost impossible to put a single sentence into your own words. This is why you should read and understand, then write from your notes. Acceptable example: Simple models can be developed for a liquid in open channel flow since it is driven only by atmospheric pressure and the weight of the liquid (Derrisen, 2004). Note that the last part of the sentence is a fact (not an idea); one can only change the wording of a fact, not the fact itself. When you describe an experiment, the facts (e.g., the specifications of a piece of equipment) will be the same for all students, but the word and sentence arrangements will be different.

• Put the citation where it will best identify which information is derived from which source. Place the citation after a key word or phrase which suggests a citation is needed. If most or all of a paragraph is from one source, put the citation at the end of the topic sentence. Repeat the citation later if necessary to make the source of information clear. Examples:
- In a study of gear mechanics, Brable (2005) showed that...
- Several of these studies identified the critical control parameter...
- Heat transfer in regenerators has been modeled by the finite difference method (Jurgel, 2001) and by the finite element method (Mitchell, 1996; Templeton, 2003)...

Please note that it is the responsibility of the student to avoid plagiarism in theses and dissertations. Your advisor and thesis committee can help with suggestions, but only if you discuss specific instances of a potential problem. If you are concerned about the quality of your writing, you can ask others to proofread your thesis.

Sunday, 17 May 2015

Nanogrids: An ultimate solution for creative energy-aware buildings


Nanogrids are small microgrids, typically serving a single building or a single load. Navigant Research has developed its own definition of a nanogrid, with a limit of 100 kW for grid-tied systems and 5 kW for remote systems not interconnected with a utility grid. Nanogrids reflect the innovation rising up from the bottom of the pyramid that is capturing the imagination of a growing number of technology vendors and investors, particularly in the smart building and smart transportation spaces, says Navigant. In other ways, nanogrids are more conventional than microgrids, since they do not directly challenge utilities in the same way. Nanogrids are restricted to a single building or a single load, and therefore do not bump up against regulations prohibiting the transfer or sharing of power across a public right-of-way. From a technology point of view, perhaps the most radical idea behind nanogrids is a clear preference for direct current (DC) solutions, whether these systems are connected to the grid or operate as standalone systems, according to Navigant.

A building is often only as intelligent as the electrical distribution network it connects with. That's why smart buildings are often seen as an extension of the smart grid. Meters, building controls, intelligent lighting and HVAC systems, distributed energy systems and the software layered on top are indeed valuable for controlling localized energy use within a building. But in many cases, the building relies on the utility or regional electricity grid to value those services. Some analysts define these technologies as the "enterprise smart grid" because of their interaction with the electricity network. But what if there were no supporting centralized grid? How would buildings be designed then? In today's grid-centric framework, the building works for the benefit of the larger grid. But Nordman thinks there's a future where the opposite is true. And that future is the nanogrid.

A nanogrid is different from a microgrid, according to the authors. Although some microgrids can be developed for single buildings, they mostly interface with the utility; some aren't even fully islandable. A nanogrid, however, would be "indifferent to whether a utility grid is present." Rather, it would be a mostly autonomous DC-based system that would digitally connect individual devices to one another, as well as to power generation and storage within the building. Nordman and Christensen describe it this way: "A nanogrid is a single domain of power -- for voltage, capacity, reliability, administration, and price. Nanogrids include storage internally; local generation operates as a special type of nanogrid. A building-scale microgrid can be as simple as a network of nanogrids, without any central entity." The nanogrid is conceptually similar to an automobile or aircraft, which both house their own isolated grid networks powered by batteries that can support electronics, lighting and internet communications. Uninterruptible power supplies also perform a similar function in buildings during grid disturbances. A nanogrid based on the concept of "local power distribution" would essentially allow most devices to plug into power sockets and connect to the nanogrid, which could balance supply with demand from those individual loads.
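As a toy illustration of that balancing idea (my own sketch, not taken from Nordman and Christensen's work), the snippet below serves the building's loads from local generation first, then from storage, and only reports unserved load when both are exhausted. All numbers are made up for the example.

```python
# Balance one time step of a hypothetical nanogrid: local DC generation first,
# then the battery, then (if both fall short) some load goes unserved or a
# backup source would have to take over. Values are illustrative only.

def dispatch_step(generation_w, load_w, battery_wh, battery_capacity_wh, dt_h=1.0):
    """Return (new_battery_wh, unserved_w) after one interval of dt_h hours."""
    surplus_w = generation_w - load_w
    if surplus_w >= 0:
        # Charge the battery with the surplus, spilling anything it cannot hold.
        battery_wh = min(battery_capacity_wh, battery_wh + surplus_w * dt_h)
        return battery_wh, 0.0
    deficit_wh = -surplus_w * dt_h
    if battery_wh >= deficit_wh:
        return battery_wh - deficit_wh, 0.0
    # Battery exhausted: the remaining deficit shows up as unserved load.
    unserved_w = (deficit_wh - battery_wh) / dt_h
    return 0.0, unserved_w

battery = 2000.0  # Wh of storage, assumed
for gen, load in [(800, 500), (200, 900), (0, 2000)]:
    battery, unserved = dispatch_step(gen, load, battery, 5000.0)
    print(f"gen={gen} W load={load} W -> battery={battery:.0f} Wh unserved={unserved:.0f} W")
```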

The gyrator: A magic device to simulate an inductor


The gyrator is a 2-port network containing two current sources controlled by the terminal voltages and oppositely directed, as shown inside the rectangular outline in the figure. Its two parameters g1, g2 are transconductances, units A/V. In the figure, it is shown fed by a voltage source v and loaded by a capacitor C. The circuit is easy to solve, giving i1 = (g1g2/jωC)v, or Z = jω(C/g1g2) = jωL, so a capacitive reactance is converted to an inductive reactance. The circuit effectively inverts its load impedance, changing 1/jωC into jωC. No passive network can function as a gyrator, but with the help of operational amplifiers it is not difficult to build one. The figure shows a circuit that is sometimes called a generalized impedance converter (GIC). The circuit is shown with component values that cause it to simulate an inductance of 1 H. It is not difficult to analyze this circuit. The voltages at nodes a, c and d are equal if the feedback is working, and equal to the applied voltage. Now all the voltages can be found by starting at the lowest resistor and calculating the currents working upwards, remembering that the op-amp input currents are zero. Finally, the impedance the circuit presents to the source is just the applied voltage divided by the current. The expression is shown in the figure. Nothing in the circuit is connected to ground except the lowest impedance, so the ground connection is not essential.
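Since the figure with the component values is not reproduced here, the following sketch assumes the usual Antoniou form of the GIC, whose input impedance is Zin = Z1·Z3·Z5/(Z2·Z4), with the capacitor in the Z4 position and 10 kΩ resistors elsewhere; those values are consistent with the 1 H simulation described in the text but are an assumption on my part.

```python
# Generalized impedance converter (GIC) input impedance, assuming
# Zin = Z1*Z3*Z5 / (Z2*Z4) with a 0.01 uF capacitor at Z4 and 10 kOhm
# resistors elsewhere, which gives a simulated L = C*R^2 = 1 H.
import math

def gic_input_impedance(f_hz, r1, r2, r3, r5, c4):
    w = 2 * math.pi * f_hz
    z1, z2, z3, z5 = r1, r2, r3, r5
    z4 = 1 / (1j * w * c4)            # capacitor in the Z4 position (assumed)
    return z1 * z3 * z5 / (z2 * z4)   # = j*w*C4*R1*R3*R5/R2

R, C = 10e3, 0.01e-6                   # assumed values
for f in (100, 1000, 5000):
    z = gic_input_impedance(f, R, R, R, R, C)
    print(f"{f:5d} Hz: |Z| = {abs(z):8.1f} ohm, apparent L = {z.imag/(2*math.pi*f):.3f} H")
```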
The circuit was tested by applying a sinusoidal voltage of various frequencies, while measuring the voltage across the circuit and the current through it with the AC scales of two DMMs. The waveforms are sinusoidal, so the meters read properly. Between about 500 Hz and 7000 Hz, the circuit did simulate a 1 H inductor quite well. At lower frequencies, the reactance of the capacitor becomes large, and the op-amp outputs tend to saturate. At higher frequencies, the capacitive reactance becomes small and overloads the op-amps. In both cases the feedback becomes inaccurate, and the apparent inductance rises. The capacitor must be properly chosen for the frequency range desired. Note that the capacitor may also be in the position of Z2. I used LM833 and LM385 dual op-amps with essentially the same results for either. To observe the time-domain behavior, apply a square wave to the circuit through a 10k resistor. Use a unity-gain buffer to avoid loading the signal generator so the square wave does not droop. Observe the applied voltage as well as the voltage across the circuit with an oscilloscope. The voltage across the circuit decreased exponentially to zero with the expected time constant of L/R = 0.0001 s, a very pretty result. Unlike a normal inductor, the current that this circuit can absorb is limited. To draw the input voltage to zero, the lower op-amp output voltage must sink to 10k times the input current. Otherwise, the output will saturate and the feedback will fail. This would occur if we used a 1k resistor instead of a 10k resistor as the R in our RL circuit, trying to get a time constant of 1 ms. This limitation must be carefully watched in applying this circuit, and the component values should be carefully chosen to eliminate it. This is, indeed, a remarkable circuit that presents inductance without magnetic fields.

Related to the gyrator is the Negative Impedance Converter, illustrated at the right. An easy analysis shows that it converts an impedance Z into its negative, -Z. This will convert a capacitive reactance 1/jωC into the inductive reactance j(1/ωC). This is not a real inductance, since the reactance decreases as the frequency increases instead of increasing. The inductance is a function of frequency, not a constant. If you test the circuit shown, a voltage V applied to the input will cause a current V/10kΩ to flow out of the input terminal, instead of into it, as would occur with a normal resistance. A 411 op-amp operates as desired with a resistive Z, but if you replace the resistance with a 0.01 μF capacitor, the circuit will oscillate (I measured 26.4 kHz), the op-amp output saturating in both directions alternately. When the op-amp was replaced by an LM748 with a (very large) compensation capacitor of 0.001 μF, the circuit behaved somewhat better. Using an AC voltmeter and milliammeter, the reactance varied from 16 kΩ at 1 kHz to 32 kΩ at 500 Hz, showing that it does indeed decrease as the frequency increases. An investigation with the oscilloscope would verify that the reactance was inductive, with voltage leading the current. The waveform rapidly becomes a triangle wave at higher frequencies (because of the large compensation capacitance). It is clear that this circuit is not a practical one for converting a capacitance to an inductance, even if the strange frequency dependence is accepted.
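A quick numeric check of those NIC measurements: negating the impedance of a 0.01 μF capacitor gives a positive reactance of 1/(2πfC), which falls as frequency rises and matches the roughly 16 kΩ at 1 kHz and 32 kΩ at 500 Hz quoted above.

```python
# Reactance of the negated 0.01 uF capacitor at a few frequencies.
import math

C = 0.01e-6  # capacitor used in place of the resistive Z
for f in (500, 1000, 2000):
    x = 1 / (2 * math.pi * f * C)
    print(f"{f:5d} Hz: reactance ~ {x/1e3:.1f} kohm")
```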

Reverse Engineering: A nontraditional way of Engineering


Engineering is the profession involved in designing, manufacturing, constructing, and maintaining products, systems, and structures. At a high level, there are two types of engineering: forward engineering and reverse engineering. Forward engineering is the traditional process of moving from high-level abstractions and logical designs to the physical implementation of a system. In some situations, there may be a physical part without any technical details, such as drawings or bills of material, or without engineering data such as thermal and electrical properties. The process of duplicating an existing component, subassembly, or product without the aid of drawings, documentation, or a computer model is known as reverse engineering.

Reverse engineering can be viewed as the process of analyzing a system to:
1. Identify the system's components and their interrelationships
2. Create representations of the system in another form or at a higher level of abstraction
3. Create a physical representation of that system

Reverse engineering is very common in such diverse fields as software engineering, entertainment, automotive, consumer products, microchips, chemicals, electronics, and mechanical design. For example, when a new machine comes to market, competing manufacturers may buy one machine and disassemble it to learn how it was built and how it works. A chemical company may use reverse engineering to defeat a patent on a competitor's manufacturing process. In civil engineering, bridge and building designs are copied from past successes so there will be less chance of catastrophic failure. In software engineering, good source code is often a variation of other good source code.

In some situations, designers give a shape to their ideas by using clay, plaster, wood, or foam rubber, but a CAD model is needed to enable the manufacturing of the part. As products become more organic in shape, designing in CAD may be challenging or impossible, and there is no guarantee that the CAD model will be acceptably close to the sculpted model. Reverse engineering provides a solution to this problem because the physical model is the source of information for the CAD model. This is also referred to as the part-to-CAD process. Another reason for reverse engineering is to compress product development times. In the intensely competitive global market, manufacturers are constantly seeking new ways to shorten the lead time to market a new product. Rapid product development (RPD) refers to recently developed technologies and techniques that assist manufacturers and designers in meeting the demands of reduced product development time. For example, injection-molding companies must drastically reduce tool and die development times. By using reverse engineering, a three-dimensional product or model can be quickly captured in digital form, re-modeled, and exported for rapid prototyping/tooling or rapid manufacturing.

The following are reasons for reverse engineering a part or product:
1. The original manufacturer no longer produces the product
2. There is inadequate documentation of the original design
3. The original manufacturer no longer exists, but a customer needs the product
4. The original design documentation has been lost or never existed
5. Some bad features of a product need to be designed out (for example, excessive wear might indicate where a product should be improved)
6. To strengthen the good features of a product based on long-term usage of the product
7. To analyze the good and bad features of competitors' products
8. To explore new avenues to improve product performance and features
9. To gain competitive benchmarking methods to understand competitors' products and develop better products
10. The original CAD model is not sufficient to support modifications or current manufacturing methods
11. The original supplier is unable or unwilling to provide additional parts
12. The original equipment manufacturers are either unwilling or unable to supply replacement parts, or demand inflated prices for sole-source parts
13. To update obsolete materials or antiquated manufacturing processes with more current, less-expensive technologies

Reverse engineering enables the duplication of an existing part by capturing the component's physical dimensions, features, and material properties. Before attempting reverse engineering, a well-planned life-cycle analysis and cost/benefit analysis should be conducted to justify the reverse engineering project. Reverse engineering is typically cost effective only if the items to be reverse engineered reflect a high investment or will be reproduced in large quantities. Reverse engineering of a part may be attempted even if it is not cost effective, if the part is absolutely required and is mission-critical to a system.

Reverse engineering of mechanical parts involves acquiring three-dimensional position data as a point cloud, using laser scanners or computed tomography (CT). Representing the geometry of the part in terms of surface points is the first step in creating parametric surface patches. A good polymesh is created from the point cloud using reverse engineering software. The cleaned-up polymesh, NURBS (non-uniform rational B-spline) curves, or NURBS surfaces are exported to CAD packages for further refinement, analysis, and generation of cutter tool paths for CAM. Finally, the CAM system produces the physical part. It can be said that reverse engineering begins with the product and works through the design process in the opposite direction to arrive at a product definition statement (PDS). In doing so, it uncovers as much information as possible about the design ideas that were used to produce a particular product.
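As a minimal sketch of that first step, capturing a part's physical dimensions from scanned data, the snippet below reports the bounding box and centroid of an x, y, z point cloud with NumPy. Real workflows use scanner software and dedicated reverse engineering packages; the file name and data layout here are assumptions for illustration only.

```python
# Toy point-cloud inspection: bounding-box extents and centroid of a scan.
import numpy as np

def point_cloud_extents(points):
    """points: (N, 3) array of x, y, z samples from a scanner."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return maxs - mins, points.mean(axis=0)

if __name__ == "__main__":
    # Stand-in for data that would be loaded from a scan, e.g. np.loadtxt("scan.xyz")
    rng = np.random.default_rng(0)
    cloud = rng.uniform([0.0, 0.0, 0.0], [120.0, 40.0, 25.0], size=(5000, 3))
    dims, centroid = point_cloud_extents(cloud)
    print("bounding box (mm):", np.round(dims, 1))
    print("centroid (mm):   ", np.round(centroid, 1))
```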

Electrical Power Quality issues and implications


Ideally, the best electrical supply would be a constant-magnitude, constant-frequency sinusoidal voltage waveform. However, because of the non-zero impedance of the supply system, the large variety of loads that may be encountered, and other phenomena such as transients and outages, the reality is often different. The Power Quality of a system expresses the degree to which a practical supply system resembles this ideal supply system. If the Power Quality of the network is good, then any loads connected to it will run satisfactorily and efficiently, and installation running costs and carbon footprint will be minimal. If the Power Quality of the network is bad, then loads connected to it will fail or have a reduced lifetime, the efficiency of the electrical installation will drop, installation running costs and carbon footprint will be high, and/or operation may not be possible at all.

COST OF POOR POWER QUALITY
Poor Power Quality can be described as any event related to the electrical network that ultimately results in a financial loss. Possible consequences of poor Power Quality include (Fig. 1):
• Unexpected power supply failures (breakers tripping, fuses blowing)
• Equipment failure or malfunctioning
• Equipment overheating (transformers, motors, ...) leading to a reduction in their lifetime
• Damage to sensitive equipment (PCs, production line control systems, ...)
• Electronic communication interference
• Increased system losses
• The need to oversize installations to cope with additional electrical stress, with a consequent increase in installation and running costs and a higher associated carbon footprint
• Penalties imposed by utilities because the site pollutes the supply network too much
• Connection refusal for new sites because the site would pollute the supply network too much
• An impression of unsteadiness of visual sensation induced by a light stimulus whose luminance or spectral distribution fluctuates with time (flicker)
• Health issues with, and reduced efficiency of, personnel

The following are the main contributors to poor Low Voltage Power Quality:
• Reactive power, as it loads up the supply system unnecessarily
• Harmonic pollution, as it causes extra stress on the networks and makes installations run less efficiently
• Load imbalance, especially in office building applications, as unbalanced loads may result in excessive voltage imbalance, causing stress on other loads connected to the same network and leading to an increase in neutral current and neutral-to-earth voltage build-up
• Fast voltage variations leading to flicker

All of these phenomena potentially lead to inefficient running of installations, system downtime, reduced equipment life and consequently high installation running costs. In addition to the financial loss due to production stops, another component of the cost of poor Power Quality can be identified by analyzing the extra kWh losses caused by the presence of harmonic pollution in typical network components such as transformers, cables and motors. As this loss has to be supplied by the utility power plants, a financial loss and CO2 emissions can be assigned to it. Exact values of this loss depend on the local kWh tariffs and on the way the electrical power is generated (for example, nuclear power plants have almost no CO2 footprint per kWh generated, as opposed to coal power plants, for which the footprint is large at around 900-1000 g/kWh produced).
Most harmonic pollution nowadays is created as harmonic current produced by loads in individual installations. This harmonic current, injected into the network impedance, transfers into harmonic voltage (Ohm's law), which gets applied to all the loads within that user's installation. As a result, the user employing harmonic loads may suffer from Power Quality problems. In addition, the harmonic current produced in one installation, if not filtered, also flows through the feeding transformers into the utility supply and creates harmonic voltage distortion on the public network too. As a result, any utility user connected to the same supply will be affected by the pollution created by another utility customer and could suffer operational consequences in his own installation because of it. In order to limit these types of problems, most utilities have adopted Power Quality standards/regulations that must be respected by the users of the supply network. In extreme cases, non-compliance with these regulations leads to a connection refusal for a new installation, which in turn can have a significant impact on the production and revenue of the company.

POWER QUALITY PARAMETERS (TERMINOLOGY)
Reactive power and power factor (cos φ): In an AC supply, the current is often phase-shifted from the supply voltage. This leads to different power definitions:
- The active power P [kW], which is responsible for the useful work, is associated with the portion of the current which is in phase with the voltage.
- The reactive power Q [kvar], which sustains the electromagnetic field used to make e.g. a motor operate, is an energy exchange (per unit of time) between the reactive components of the electrical system (capacitors and reactors). It is associated with the portion of the current which is phase-shifted by 90° with respect to the voltage.
- The apparent power S [kVA], which is the geometrical combination of the active and reactive powers, can be seen as the total power drawn from the network.
The ratio between the active power and the apparent power is often referred to as the displacement power factor, or cos φ, and gives a measure of how efficiently the electrical energy is utilized. A cos φ equal to 1 corresponds to the most efficient transfer of useful energy; a cos φ equal to 0 corresponds to the most inefficient way of transferring useful energy.

Harmonic distortion: Harmonic pollution is often characterized by the Total Harmonic Distortion, or THD, which is by definition equal to the ratio of the RMS harmonic content to the fundamental.

Voltage unbalance: In the theory of symmetrical components, Fortescue showed that any three-phase system can be expressed as the sum of three symmetrical sets of balanced phasors: the first set having the same phase sequence as the initial system (positive phase sequence), the second set having the inverse phase sequence (negative phase sequence), and the third one consisting of three phasors in phase (zero phase sequence or homopolar components). A normal three-phase supply has three phases of the same magnitude but shifted in phase by 120°. Any deviation (in magnitude or phase) of one of the three signals will result in a negative phase sequence component and/or a zero phase sequence component.
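The two quantities just defined are easy to compute numerically. The sketch below evaluates THD as the ratio of the RMS harmonic content to the fundamental, and a voltage unbalance factor as the ratio of the negative- to positive-sequence components from Fortescue's transformation; all voltage values are illustrative only.

```python
# THD and voltage unbalance factor from illustrative measurements.
import cmath, math

def thd(fundamental_rms, harmonic_rms):
    """Total harmonic distortion as a fraction (multiply by 100 for %)."""
    return math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

def unbalance_factor(va, vb, vc):
    """|negative sequence| / |positive sequence| of a three-phase phasor set."""
    a = cmath.exp(2j * math.pi / 3)
    v_pos = (va + a * vb + a * a * vc) / 3
    v_neg = (va + a * a * vb + a * vc) / 3
    return abs(v_neg) / abs(v_pos)

# 230 V fundamental with assumed 5th, 7th and 11th harmonic voltages:
print(f"THDv ~ {100 * thd(230.0, [9.2, 6.9, 3.5]):.1f} %")

# A slightly unbalanced three-phase set (one phase 5 % low):
va = cmath.rect(230.0, 0.0)
vb = cmath.rect(218.5, -2 * math.pi / 3)
vc = cmath.rect(230.0, 2 * math.pi / 3)
print(f"Voltage unbalance ~ {100 * unbalance_factor(va, vb, vc):.1f} %")
```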
Flicker: According to the International Electrotechnical Vocabulary (IEV) [4] of the International Electrotechnical Commission (IEC), flicker is defined as the 'impression of unsteadiness of visual sensation induced by a light stimulus whose luminance or spectral distribution fluctuates with time'. From a more practical point of view, one can say that voltage fluctuations on the supply network change the luminance of lamps, which in turn can create the visual phenomenon called flicker. While a small flicker level may be acceptable, above a certain threshold it becomes annoying to people present in a room where the flicker exists. The degree of annoyance grows very rapidly with the amplitude of the fluctuation, and at certain repetition rates of the voltage fluctuation even small fluctuation amplitudes can be annoying.

REGULATIONS
Utility regulations for harmonic pollution are often based on internationally recognized work undertaken by reputable independent bodies, which have defined maximum allowable distortion limits for the proper operation of equipment; several commonly quoted documents of this kind target harmonic pollution specifically. In general, the principle of the regulations is as follows:
• Limit the total voltage distortion (THDv) contribution that can be created by a customer. This takes into account that if the total accepted level of voltage distortion is, for example, 5% (of the fundamental voltage), this limit has to be divided over all the users connected. Limits may also be imposed on individual harmonic voltage components (e.g. a 3% maximum limit for individual harmonic voltages).
• Convert the voltage distortion limits into current limits which are accepted to flow into the supply system. The current limits can then easily be verified through measurement.

Design considerations for solar power plant


As the demand for solar electric systems grows, progressive builders are adding solar photovoltaics (PV) as an option for their customers. This overview of solar photovoltaic systems will give the builder a basic understanding of:
• Evaluating a building site for its solar potential
• Common grid-connected PV system configurations and components
• Considerations in selecting components
• Considerations in design and installation of a PV system
• Typical costs and the labor required to install a PV system
• Building and electric code requirements
• Where to find more information
Emphasis will be placed on information that will be useful in including a grid-connected PV system in a bid for a residential or small commercial building. We will also cover those details of the technology and installation that may be helpful in selecting subcontractors to perform the work, working with a designer, and directing work as it proceeds.

Evaluating a Building Site – While the Pacific Northwest may have good to excellent solar potential, not every building site will be suitable for a solar installation. The first step in the design of a photovoltaic system is determining whether the site you are considering has good solar potential. Some questions you should ask are:
• Is the installation site free from shading by nearby trees, buildings or other obstructions?
• Can the PV system be oriented for good performance?
• Does the roof or property have enough area to accommodate the solar array?
• If the array will be roof-mounted, what kind of roof is it and what is its condition?

Mounting Location – Solar modules are usually mounted on roofs. If roof area is not available, PV modules can be pole-mounted, ground-mounted, wall-mounted or installed as part of a shade structure (refer to the section "System Components/Array Mounting Racks" below).

Shading – Photovoltaic arrays are adversely affected by shading. A well-designed PV system needs clear and unobstructed access to the sun's rays from about 9 a.m. to 3 p.m. throughout the year. Even small shadows, such as the shadow of a single branch of a leafless tree, can significantly reduce the power output of a solar module. Shading from the building itself – due to vents, attic fans, skylights, gables or overhangs – must also be avoided. Keep in mind that an area may be unshaded during one part of the day but shaded at another part of the day. Also, a site that is unshaded in the summer may be shaded in the winter due to longer winter shadows.

Orientation – In northern latitudes, by conventional wisdom, PV modules are ideally oriented towards true south. But the tilt or orientation of a roof does not need to be perfect, because solar modules produce 95 percent of their full power when within 20 degrees of the sun's direction. Roofs that face east or west may also be acceptable. As an example, a due-west-facing rooftop solar PV system, tilted at 20 degrees in Salem, Oregon, will produce about 88 percent as much power as one pointing true south at the same location. Flat roofs work well because the PV modules can be mounted on frames and tilted up toward true south.

Tilt – Generally, the optimum tilt of a PV array in the Pacific Northwest equals the geographic latitude minus about 15 degrees, to achieve maximum yearly power output. An increased tilt favors power output in the winter and a decreased tilt favors output in the summer. In western Washington and Oregon, with their cloudier winters, the optimum angle is less than the optimum east of the Cascades.
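The orientation and tilt rules of thumb above can be captured in a couple of lines; the helper below is purely illustrative and not a design tool, and the Salem latitude is just an example value.

```python
# Pacific Northwest rules of thumb quoted in this section: optimum year-round
# tilt is roughly latitude minus 15 degrees, and an array within about
# 20 degrees of the ideal orientation still gives about 95 % of full output.

def rule_of_thumb_tilt(latitude_deg):
    return latitude_deg - 15.0

def orientation_near_optimal(azimuth_offset_deg):
    return abs(azimuth_offset_deg) <= 20.0

print("Salem, OR (lat ~44.9): tilt ~", rule_of_thumb_tilt(44.9), "degrees")
print("15 deg west of south still near full output:", orientation_near_optimal(15))
```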
Required Area – Residential and small commercial systems require as little as 50 square feet for a small system, up to as much as 1,000 square feet. As a general rule for the Pacific Northwest, every 1,000 watts of PV modules requires 100 square feet of collector area for modules using crystalline silicon (currently the most common PV cell type). Each 1,000 watts of PV modules can generate about 1,000 kilowatt-hours (kWh) per year in locations west of the Cascades and about 1,250 kWh per year east of the Cascades. When using less efficient modules, such as amorphous silicon or other thin-film types, the area will need to be approximately doubled. If your location limits the physical size of your system, you may want to install a system that uses more-efficient PV modules. Keep in mind that access space around the modules can add up to 20 percent to the required area.

Roof Types – For roof-mounted systems, composition shingles are typically the easiest to work with, while slate and tile roofs are the most difficult. Nevertheless, it is possible to install PV modules on all roof types. If the roof will need replacing within 5 to 10 years, it should be replaced at the time the PV system is installed, to avoid the cost of removing and reinstalling the PV system.

Photovoltaic System Types – Photovoltaic system types can be broadly classified by the answers to the following questions:
• Will it be connected to the utility's transmission grid?
• Will it produce alternating current (AC) or direct current (DC) electricity, or both?
• Will it have battery back-up?
• Will it have back-up from a diesel, gasoline or propane generator set?
Here we will focus on systems that are connected to the utility transmission grid, variously referred to as utility-connected, grid-connected, grid-interconnected, grid-tied or grid-intertied systems. These systems generate the same quality of alternating current (AC) electricity as is provided by your utility. The energy generated by a grid-connected system is used first to power the AC electrical needs of the home or business. Any surplus power that is generated is fed or "pushed" onto the electric utility's transmission grid, and any of the building's power requirements that are not met by the PV system are supplied by the transmission grid. In this way, the grid can be thought of as a virtual battery bank for the building.

Common System Types – Most new PV systems being installed in the United States are grid-connected residential systems without battery back-up. Many grid-connected AC systems are also being installed in commercial or public facilities. The grid-connected systems we will be examining here are of two types, although others exist:
• Grid-connected AC systems with no battery or generator back-up
• Grid-connected AC systems with battery back-up

Is a Battery Bank Really Needed? – The simplest, most reliable, and least expensive configuration does not have battery back-up. Without batteries, a grid-connected PV system will shut down when a utility power outage occurs. Battery back-up maintains power to some or all of the electric equipment, such as lighting, refrigeration, or fans, even when a utility power outage occurs. A grid-connected system may also have generator back-up if the facility cannot tolerate power outages. With battery back-up, power outages may not even be noticed. However, adding batteries to a system comes with several disadvantages that must be weighed against the advantage of power back-up.
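Putting the sizing rules of thumb above together gives a quick back-of-envelope estimate; the sketch below simply encodes the figures quoted in this section (100 sq ft per 1,000 W of crystalline silicon, about 1,000 kWh per 1,000 W per year west of the Cascades or 1,250 kWh east, plus up to 20 percent extra area for access) and is not a substitute for a proper site assessment.

```python
# Back-of-envelope PV sizing using the rules of thumb from this section.

def pv_estimate(system_watts, east_of_cascades=False, access_margin=0.20):
    area_sqft = system_watts / 1000.0 * 100.0 * (1.0 + access_margin)
    kwh_per_year = system_watts / 1000.0 * (1250.0 if east_of_cascades else 1000.0)
    return area_sqft, kwh_per_year

area, energy = pv_estimate(3000)  # a 3 kW residential system, west of the Cascades
print(f"~{area:.0f} sq ft of roof area, ~{energy:.0f} kWh per year")
```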
These disadvantages are:
• Batteries consume energy during charging and discharging, reducing the efficiency and output of the PV system by about 10 percent for lead-acid batteries.
• Batteries increase the complexity of the system. Both first cost and installation costs are increased.
• Most lower-cost batteries require maintenance.
• Batteries will usually need to be replaced before other parts of the system, and at considerable expense.

System Components – Pre-engineered photovoltaic systems can be purchased that come with all the components you will need, right down to the nuts and bolts. Any good dealer can size and specify systems for you, given a description of your site and needs. Nevertheless, familiarity with the system components, the different types that are available, and the criteria for making a selection is important. The basic components of grid-connected PV systems with and without batteries are:
• Solar photovoltaic modules
• Array mounting racks
• Grounding equipment
• Combiner box
• Surge protection (often part of the combiner box)
• Inverter
• Meters – system meter and kilowatt-hour meter
• Disconnects: array DC disconnect, inverter DC disconnect, inverter AC disconnect, exterior AC disconnect
If the system includes batteries, it will also require:
• Battery bank with cabling and housing structure
• Charge controller
• Battery disconnect

Basic Principles to Follow When Designing a Quality PV System:
1. Select a packaged system that meets the owner's needs. Customer criteria for a system may include reduction in the monthly electricity bill, environmental benefits, desire for backup power, initial budget constraints, etc. Size and orient the PV array to provide the expected electrical power and energy.
2. Ensure the roof area or other installation site is capable of handling the desired system size.
3. Specify sunlight- and weather-resistant materials for all outdoor equipment.
4. Locate the array to minimize shading from foliage, vent pipes, and adjacent structures.
5. Design the system in compliance with all applicable building and electrical codes.
6. Design the system with a minimum of electrical losses due to wiring, fuses, switches, and inverters (a simple wiring-loss check is sketched below).
7. Properly house and manage the battery system, should batteries be required.
8. Ensure the design meets local utility interconnection requirements.

Basic Steps to Follow When Installing a PV System:
1. Ensure the roof area or other installation site is capable of handling the desired system size.
2. If roof-mounted, verify that the roof is capable of handling the additional weight of the PV system. Augment the roof structure as necessary.
3. Properly seal any roof penetrations with roofing-industry-approved sealing methods.
4. Install equipment according to the manufacturers' specifications, following the installation requirements and procedures in the manufacturers' documentation.
5. Properly ground the system parts to reduce the threat of shock hazards and induced surges.
6. Check for proper PV system operation by following the checkout procedures on the PV System Installation Checklist.
7. Ensure the installation meets local utility interconnection requirements.
8. Have final inspections completed by the Authority Having Jurisdiction (AHJ) and the utility (if required).
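As a sketch of the wiring-loss check mentioned in design principle 6 (my own example, not part of the original checklist), one common quick calculation is the DC voltage drop in the array home-run conductors. The wire resistance figure below is a typical value for 10 AWG copper and is an assumption for the example.

```python
# Round-trip DC voltage drop in the array wiring, as a percentage of the
# string voltage. All values are illustrative assumptions.

def voltage_drop_percent(current_a, one_way_length_m, ohms_per_m, system_voltage):
    v_drop = 2.0 * current_a * ohms_per_m * one_way_length_m  # out and back
    return 100.0 * v_drop / system_voltage

# Example: 8 A string current, 20 m run, ~0.00328 ohm/m (10 AWG copper), 300 V string.
print(f"Voltage drop ~ {voltage_drop_percent(8.0, 20.0, 0.00328, 300.0):.2f} %")
```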

Friday, 8 May 2015

FACTS: Devices and applications


The concept of FACTS (Flexible Alternating Current Transmission System) refers to a family of power electronics-based devices able to enhance AC system controllability and stability and to increase power transfer capability. The design of the different schemes and configurations of FACTS devices is based on the combination of traditional power system components (such as transformers, reactors, switches, and capacitors) with power electronics elements (such as various types of transistors and thyristors). Over recent years, the current rating of thyristors has evolved to higher nominal values, making power electronics capable of high-power applications of tens, hundreds and thousands of MW. Thanks to their speed and flexibility, FACTS devices are able to provide the transmission system with several advantages, such as transmission capacity enhancement, power flow control, transient stability improvement, power oscillation damping, and voltage stability and control. Depending on the type and rating of the selected device and on the specific voltage level and local network conditions, a transmission capacity enhancement of up to 40-50% may be achieved by installing a FACTS element. In comparison to traditional mechanically driven devices, FACTS controllers are not subject to wear and require less maintenance. Cost, complexity and reliability issues currently represent the main barriers to the integration of these promising technologies from the TSOs' perspective. Further FACTS penetration will depend on the technology providers' ability to overcome these barriers through more standardization, interoperability and economies of scale.

In the present-day scenario, transmission systems are becoming increasingly stressed, more difficult to operate, and more insecure, with unscheduled power flows and greater losses, because of growing demand for electricity and restrictions on the construction of new lines. At the same time, many high-voltage transmission systems are operating below their thermal ratings due to constraints such as voltage and stability limits. More advanced technology is therefore being applied for the reliable operation of transmission and distribution in power systems. To achieve both reliability and economic benefit, it has become clear that more efficient utilization and control of the existing transmission system infrastructure is required. Improved utilization of the existing power system is provided through the application of advanced control technologies, and power electronics has delivered this in the form of flexible AC transmission system (FACTS) devices. FACTS devices are effective and capable of increasing the power transfer capability of a line and of supporting the power system to work with comfortable margins of stability. They are used in transmission systems to improve flexibility and system performance; inserting FACTS devices into the network makes it possible to control the main parameters, most notably voltage.
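One way to see where such capacity gains can come from is the basic AC power transfer relation P = V1·V2·sin(δ)/X: anything that lowers the effective series reactance X (series compensation) or supports the terminal voltages (shunt compensation) raises the transferable power. The per-unit numbers below are illustrative only, not data for any real line.

```python
# Power transfer across a line section, P = V1*V2*sin(delta)/X, in per unit.
import math

def power_transfer(v1_pu, v2_pu, delta_deg, x_pu):
    return v1_pu * v2_pu * math.sin(math.radians(delta_deg)) / x_pu

base = power_transfer(1.0, 1.0, 30.0, 0.5)
compensated = power_transfer(1.0, 1.0, 30.0, 0.5 * (1 - 0.4))  # 40 % series compensation
print(f"uncompensated: {base:.2f} pu, with 40 % series compensation: {compensated:.2f} pu")
```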
CLASSIFICATION OF FACTS DEVICES : The FACTS controllers are of different types. These are classified according to their connection like shunt connected controllers, series connected and combined series and shunt connected controllers. The main types of FACTS devices are described below: A. Static Var Compensators (SVC) The most important shunt FACTS device such as Static Var Compensator, it have been dynamic voltage problems. The accuracy, availability and fast response enable Static Var Compensators to provide high performance steady state and transient voltage control compared with classical shunt compensation. Static Var Compensators are also used improve the transient stability, damp power swings and reduce system losses by using reactive power control. B. Static Synchronous Compensator (STATCOM) STATCOM’s are GTO (gate turn-off type thyristor) based SVC’s. They do not require large inductive and capacitive components to provide inductive or capacitive reactive power to high voltage transmission systems as required in Static Var Compensators (SVC). STATCOM requires less area. Another advantage is the higher reactive output at low system voltages where a STATCOM can be considered as a current source independent from the system voltage. C. Static Synchronous Series Compensator (SSSC) Voltage sourced converter (VSC) based series onnected FACTS controller known as SSSC. It can inject a voltage with controllable magnitude and phase angle at the line frequency and found to be more capable of handling power flow control, improvement of transient. D. Thyristor Controlled Series Compensators (TCSC) TCSC is an extension of conventional series capacitors but only addition of thyristor-controlled reactor with it. Connecting a reactance in parallel with a series capacitor enables a continuous and rapidly variable series compensation system. The main advantages of TCSC’s are increased real transfer power, power oscillations damping, sub-synchronous resonances damping, and power flow line control E. Unified Power Flow Controller (UPFC) UPFC is combination of shunt connected device (STATCOM) and a series branch (SSSC) in the transmission line via its DC link. This device is most versatile FACTS device. It can not only perform the function of STATCOM ,TCSC and phase prevent faults, they can mitigate the effects of faults and make electricity supply more secure by reducing the number of line trips. The advantages of using FACTS devices in electrical transmission systems are described below. • More utilization of existing transmission system • Reliability of Transmission system increases. • More Increased transient and dynamic stability of the system. • Increased more quality ofsupply for large industries • Beneficial for Environment. A. More utilization of existing transmission system In all the countries, the power demand is increasing day by day to transfer the electrical power and controlling the load flow of the transmission system is very necessary .this can be achieved by more load centers which can change frequently. Addition of new transmission line is very costly to take the increased load on the system; in that case FACTS devices are much economical to meet the increased load on the same transmission lines. B. More Increased transient and dynamic stability of the system The Long transmission lines are inter-connected with grids to absorb the changing the loading of the transmission line and it is also seen that there should be no line fault creates in the line / transmission system. 
When such faults do occur, the power flow is reduced and the transmission line can trip. With FACTS devices, higher power transfer capacity is achieved and, at the same time, line-tripping faults are reduced.
C. Improved quality of supply for large industries: Modern industries want a good quality of electric supply: constant voltage with little fluctuation, at the frequency specified by the electricity authority. Reduced voltage, variations in frequency or loss of electric power can interrupt manufacturing and cause heavy economic losses. FACTS devices can help provide the required quality of supply.
D. Environmental benefits: FACTS devices are environmentally friendly: they do not produce any hazardous waste material, so they are pollution-free. They help deliver electrical power more economically through better use of existing transmission lines, while reducing the cost of new transmission lines and allowing more power to be delivered.
E. Increased transmission system reliability and availability: Transmission system reliability and availability are affected by many different factors. While FACTS devices cannot address all of them, they can mitigate several and thereby improve system reliability and availability.
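As a rough numerical sketch of the series-compensation effect mentioned above: for a lossless line the classical steady-state transfer limit is P_max = Vs*Vr/X, so reducing the effective series reactance X (which is what a TCSC does) raises that limit. The Python snippet below uses made-up line values purely for illustration; the function name, the 400 kV / 100 ohm figures and the 30% compensation level are my own assumptions, not data from this post, and the actual gain in a real network depends on device rating and local conditions.

import math

def max_power_transfer(v_send_kv, v_recv_kv, line_reactance_ohm, compensation=0.0):
    # Maximum steady-state transfer of a lossless AC line, from P = Vs*Vr*sin(delta)/X
    # evaluated at delta = 90 degrees. `compensation` is the degree of series
    # compensation k, so the effective reactance becomes X*(1 - k).
    x_eff = line_reactance_ohm * (1.0 - compensation)
    return (v_send_kv * 1e3) * (v_recv_kv * 1e3) / x_eff / 1e6  # result in MW

# Illustrative numbers only: a 400 kV line with 100 ohm series reactance.
p_plain = max_power_transfer(400, 400, 100)
p_tcsc = max_power_transfer(400, 400, 100, compensation=0.3)  # 30% series compensation
print(f"Uncompensated limit: {p_plain:.0f} MW")
print(f"With 30% series compensation: {p_tcsc:.0f} MW (+{(p_tcsc / p_plain - 1) * 100:.0f}%)")

With 30% compensation the toy line's limit rises from about 1600 MW to about 2290 MW, roughly a 43% increase, which is in the range quoted earlier for capacity enhancement.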

Thursday, 7 May 2015

Genetic Algorithm (GA): An evolutionary approach to optimization


The pioneering work of J. H. Holland in the 1970s proved to be a significant contribution to scientific and engineering applications. Since then, the output of research work in this field has grown exponentially, although the contributions have been, and largely still are, initiated by academic institutions world-wide. It is only very recently that we have been able to acquire some material that comes from industry, where the concept is still not clearly understood. The most obvious obstacle that may drive engineers away from using GAs is the difficulty of speeding up the computational process, as well as the intrinsic randomness that makes performance hard to guarantee. Nevertheless, GA development has now reached a stage of maturity, thanks to the effort made in the last few years by academics and engineers all over the world, and it has blossomed rapidly due to the easy availability of low-cost but fast small computers. Problems once considered "hard" or even "impossible" are no longer a problem as far as computation is concerned. Complex and conflicting problems that require simultaneous solutions, which in the past were considered deadlocked, can now be tackled with a GA. Furthermore, the GA is not a mathematically guided algorithm: the optimum is evolved from generation to generation without the stringent mathematical formulation of traditional gradient-type optimization procedures. The GA is quite different in that respect; it is a stochastic, discrete-event, nonlinear process. The optimum obtained is an end product containing the best elements of previous generations, where the attributes of stronger individuals tend to be carried forward into the following generation. The rule of the game is survival of the fittest. All living organisms consist of cells, and each cell contains the same set of one or more chromosomes—strings of DNA—that serve as a "blueprint" for the organism. A chromosome can be conceptually divided into genes, each of which encodes a particular protein. Very roughly, one can think of a gene as encoding a trait, such as eye color. The different possible "settings" for a trait (e.g., blue, brown, hazel) are called alleles. Each gene is located at a particular locus (position) on the chromosome. Many organisms have multiple chromosomes in each cell. The complete collection of genetic material (all chromosomes taken together) is called the organism's genome. The term genotype refers to the particular set of genes contained in a genome. Two individuals that have identical genomes are said to have the same genotype. The genotype gives rise, under fetal and later development, to the organism's phenotype—its physical and mental characteristics, such as eye color, height, brain size, and intelligence. Organisms whose chromosomes are arrayed in pairs are called diploid; organisms whose chromosomes are unpaired are called haploid. In nature, most sexually reproducing species are diploid, including human beings, who each have 23 pairs of chromosomes in each somatic (non-germ) cell in the body. During sexual reproduction, recombination (or crossover) occurs: in each parent, genes are exchanged between each pair of chromosomes to form a gamete (a single chromosome), and then gametes from the two parents pair up to create a full set of diploid chromosomes. In haploid sexual reproduction, genes are exchanged between the two parents' single-strand chromosomes. 
Offspring are subject to mutation, in which single nucleotides (elementary bits of DNA) are changed from parent to offspring, the changes often resulting from copying errors. The fitness of an organism is typically defined as the probability that the organism will live to reproduce (viability) or as a function of the number of offspring the organism has (fertility). In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The "genes" are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution (e.g., in the context of multiparameter function optimization, the bits encoding a particular parameter might be considered to be a gene). An allele in a bit string is either 0 or 1; for larger alphabets more alleles are possible at each locus. Crossover typically consists of exchanging genetic material between two single-chromosome haploid parents. Mutation consists of flipping the bit at a randomly chosen locus (or, for larger alphabets, replacing the symbol at a randomly chosen locus with a randomly chosen new symbol). Genetic Algorithms have been widely used commercially. Optimizing train routing was an early application; more recently, fighter planes have used GAs to optimize wing designs. I have used GAs extensively at work to generate solutions to problems that have an extremely large search space. Many problems are unlikely to benefit from GAs, but I disagree with Thomas that they are too hard to understand: a GA is actually very simple. We found that a great deal of know-how goes into tuning a GA to a particular problem, which can be difficult, and, as always, managing large amounts of parallel computation continues to be a problem for many programmers. A problem that would benefit from a GA is going to have the following characteristics (a minimal sketch follows below):
• A good way to encode potential solutions
• A way to compute a numerical score to evaluate the quality of a solution
• A large multi-dimensional search space where the answer is non-obvious
• A situation where a good solution is good enough and a perfect solution is not required
There are many problems that could benefit from GAs, and in the future they will probably be more widely deployed. I believe that GAs are used in cutting-edge engineering more than people think; however, most companies (like mine) guard those secrets extremely closely, and it is only long after the fact that it is revealed that GAs were used.
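To make the bit-string encoding, crossover, and mutation described above concrete, here is a minimal sketch of a generational GA in Python. The objective (maximizing the number of 1-bits, the classic "OneMax" toy problem), the tournament selection, and all parameter values are illustrative assumptions rather than anything prescribed in this post.

import random

POP_SIZE, CHROM_LEN, GENERATIONS = 50, 32, 100
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / 32

def fitness(chrom):
    # Toy "OneMax" objective: count the 1-bits (assumed example).
    return sum(chrom)

def tournament(pop, k=3):
    # Select the fittest of k randomly chosen individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover on two bit-string chromosomes.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, CHROM_LEN - 1)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(chrom):
    # Flip each bit (allele) with a small, independent probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in chrom]

population = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    next_gen = []
    while len(next_gen) < POP_SIZE:
        c1, c2 = crossover(tournament(population), tournament(population))
        next_gen += [mutate(c1), mutate(c2)]
    population = next_gen[:POP_SIZE]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", CHROM_LEN)

Swapping in a different fitness() and encoding is usually all that is needed to adapt such a skeleton to a real problem; the selection, crossover, and mutation machinery stays the same.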

Ant Colony Optimization: A natural approach for optimizing engineering processes


A single ant has very limited effectiveness, but as part of a well-organised colony it becomes one powerful agent working for the development of the colony. The ant lives for the colony and exists only as a part of it. Each ant is able to communicate, learn and cooperate, and together they are capable of developing themselves and colonising a large area. They achieve such great success by increasing the number of individuals and by being exceptionally well organised: the self-organising principles they use allow highly coordinated behaviour of the colony. Pierre-Paul Grassé, a French entomologist, was one of the first researchers to investigate the social behaviour of insects. He discovered that these insects are capable of reacting to what he called "significant stimuli", signals that activate a genetically encoded reaction. He observed that the effects of these reactions can act as new significant stimuli both for the insect that produced them and for the other insects in the colony. Grassé used the term stigmergy to describe this particular type of indirect communication, in which "the workers are stimulated by the performance they have achieved". Stigmergy is defined as a method of indirect communication in a self-organizing emergent system where its individual parts communicate with one another by modifying their local environment. Ants communicate with one another by laying down pheromones along their trails, so where ants go within and around their ant colony is a stigmergic system. In many ant species, ants walking to or from a food source deposit on the ground a substance called pheromone. Other ants are able to smell this pheromone, and its presence influences the choice of their path; that is, they tend to follow strong pheromone concentrations. The pheromone deposited on the ground forms a pheromone trail, which allows the ants to find good sources of food that have been previously identified by other ants. Pheromones represent, in a sense, the colony's common memory. Because this memory is external to the ants/agents, it is easily accessible to every individual; it is stored regardless of the configuration of the ground, the number of ants, and so on. It is totally independent, and still remains extremely simple. In our implementation we will see that two different types of pheromone are used. The first is represented in red and is left by the ants that are not carrying food; we will call it the Away pheromone, as it means that the ant is moving away from the nest. Conversely, the ants carrying food back to the nest leave a blue trace behind them, the Back pheromone. Pheromones are subject to only one further process: evaporation. Nature takes care of this in real life, and it is a simple step to reproduce in an algorithm: in the course of time, a global reduction of the pheromones by a certain factor is applied, simulating the evaporation process. Thus unsuccessful paths see their pheromone concentration fade, while good solutions stay rich in pheromone because the ants keep using them (a small sketch of this bookkeeping follows below).
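As a rough illustration of the two pheromone fields and the evaporation step described above, here is a minimal Python sketch. The grid size, deposit amount, and evaporation factor are made-up illustrative values, and the function names are mine; this is not code from any particular implementation.

import numpy as np

GRID = (20, 20)      # world size (assumed illustrative value)
EVAPORATION = 0.95   # fraction of pheromone kept each time step (assumed)
DEPOSIT = 1.0        # pheromone dropped per ant per step (assumed)

# Two separate pheromone fields, as in the description above:
# "away" is laid by ants leaving the nest, "back" by ants carrying food home.
pheromone = {"away": np.zeros(GRID), "back": np.zeros(GRID)}

def deposit(position, carrying_food):
    # An ant drops pheromone on the field that matches its current state.
    key = "back" if carrying_food else "away"
    pheromone[key][position] += DEPOSIT

def evaporate():
    # Global decay applied every time step, simulating evaporation.
    for field in pheromone.values():
        field *= EVAPORATION   # in-place scaling of the whole grid

# One simulated time step for a single ant at cell (5, 7) that is carrying food.
deposit((5, 7), carrying_food=True)
evaporate()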
Ants communicate information by leaving pheromone tracks. A moving ant leaves, in varying quantities, some pheromone on the ground to mark its way. While an isolated ant moves essentially at random, an ant encountering a previously laid trail is able to detect it and decide, with high probability, to follow it, thus reinforcing the track with its own pheromone. The collective behaviour that emerges is a positive feedback loop: the more ants follow a track, the more attractive that track becomes to follow, so the probability with which an ant chooses a path increases with the number of ants that previously chose the same path. This elementary ant behaviour inspired the development of ant colony optimization by Marco Dorigo in 1992, a stochastic combinatorial meta-heuristic belonging to the same family of methods as simulated annealing, tabu search and genetic algorithms (a minimal sketch of the core rule appears below). The recent literature covers state-of-the-art methods and applications of ant colony optimization algorithms: new methods and theory such as multi-colony ant algorithms based on a new pheromone arithmetic crossover and a repulsive operator, new findings on ant colony convergence, and a diversity of engineering and science applications in transportation, water resources, and electrical and computer science.
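The positive feedback described above, combined with evaporation, is the core of Dorigo's ant colony optimization. The sketch below applies it to a simple two-path choice between nest and food; the path lengths, number of ants, and the parameters alpha, beta and rho are illustrative assumptions of mine, not values from this post.

import random

# Two alternative paths between nest and food (lengths are assumed toy values).
paths = {"short": 1.0, "long": 2.0}
pheromone = {name: 1.0 for name in paths}   # equal pheromone to start

ALPHA, BETA = 1.0, 2.0   # influence of pheromone vs. heuristic (1/length)
RHO = 0.1                # evaporation rate
N_ANTS, N_STEPS = 20, 50

def choose_path():
    # A path's probability grows with its pheromone and shrinks with its length.
    weights = {p: pheromone[p] ** ALPHA * (1.0 / paths[p]) ** BETA for p in paths}
    r = random.uniform(0, sum(weights.values()))
    for p, w in weights.items():
        r -= w
        if r <= 0:
            return p
    return p

for _ in range(N_STEPS):
    choices = [choose_path() for _ in range(N_ANTS)]
    # Evaporation: every trail fades a little each step.
    for p in pheromone:
        pheromone[p] *= (1.0 - RHO)
    # Reinforcement: ants deposit pheromone inversely proportional to path length.
    for p in choices:
        pheromone[p] += 1.0 / paths[p]

print("final pheromone levels:", pheromone)   # the short path ends up dominant

Because shorter paths are both reinforced more strongly and chosen more often, the short path's pheromone level ends up dominating, which is exactly the positive feedback the text describes.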

Nanotechnology and its applications


Nanotechnology can be defined as the science and engineering involved in the design, synthesis, characterization, and application of materials and devices whose smallest functional organization in at least one dimension is on the nanometer scale or one billionth of a meter. At these scales, consideration of individual molecules and interacting groups of molecules in relation to the bulk macroscopic properties of the material or device becomes important, since it is control over the fundamental molecular structure that allows control over the macroscopic chemical and physical properties. Applications to medicine and physiology imply materials and devices designed to interact with the body at subcellular (i.e., molecular) scales with a high degree of specificity. This can potentially translate into targeted cellular and tissue-specific clinical applications designed to achieve maximal therapeutic effects with minimal side effects. Here the main scientific and technical aspects of nanotechnology are introduced, along with some of its potential clinical applications. Nanotechnology ("nanotech") is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Most benefits of nanotechnology depend on the fact that it is possible to tailor the essential structures of materials at the nanoscale to achieve specific properties, thus greatly extending the well-used toolkits of materials science. Using nanotechnology, materials can effectively be made to be stronger, lighter, more durable, more reactive, more sieve-like, or better electrical conductors, among many other traits. There already exist over 800 everyday commercial products that rely on nanoscale materials and processes: Nanoscale additives in polymer composite materials for baseball bats, tennis rackets, motorcycle helmets, automobile bumpers, luggage, and power tool housings can make them simultaneously lightweight, stiff, durable, and resilient. Nanoscale additives to or surface treatments of fabrics help them resist wrinkling, staining, and bacterial growth, and provide lightweight ballistic energy deflection in personal body armor. 
Nanoscale thin films on eyeglasses, computer and camera displays, windows, and other surfaces can make them water-repellent, antireflective, self-cleaning, resistant to ultraviolet or infrared light, antifog, antimicrobial, scratch-resistant, or electrically conductive. Nanoscale materials in cosmetic products provide greater clarity or coverage; cleansing; absorption; personalization; and antioxidant, anti-microbial, and other health properties in sunscreens, cleansers, complexion treatments, creams and lotions, shampoos, and specialized makeup. Nano-engineered materials in the food industry include nanocomposites in food containers to minimize carbon dioxide leakage out of carbonated beverages, or reduce oxygen inflow, moisture outflow, or the growth of bacteria in order to keep food fresher and safer, longer. Nanosensors built into plastic packaging can warn against spoiled food. Nanosensors are being developed to detect salmonella, pesticides, and other contaminants on food before packaging and distribution. Nanotechnology is already in use in many computing, communications, and other electronics applications to provide faster, smaller, and more portable systems that can manage and store larger and larger amounts of information. These continuously evolving applications include: Nanoscale transistors that are faster, more powerful, and increasingly energy-efficient; soon your computer's entire memory may be stored on a single tiny chip. Magnetic random access memory (MRAM) enabled by nanometer-scale magnetic tunnel junctions that can quickly and effectively save even encrypted data during a system shutdown or crash, enable resume-play features, and gather vehicle accident data. Displays for many new TVs, laptop computers, cell phones, digital cameras, and other devices incorporate nanostructured polymer films known as organic light-emitting diodes, or OLEDs. OLED screens offer brighter images in a flat format, as well as wider viewing angles, lighter weight, better picture density, lower power consumption, and longer lifetimes. Other computing and electronic products include Flash memory chips for iPod nanos; ultraresponsive hearing aids; antimicrobial/antibacterial coatings on mouse/keyboard/cell phone casings; conductive inks for printed electronics for RFID/smart cards/smart packaging; more life-like video games; and flexible displays for e-book readers. The difficulty of meeting the world's energy demand is compounded by the growing need to protect our environment. Many scientists are looking into ways to develop clean, affordable, and renewable energy sources, along with means to reduce energy consumption and lessen toxicity burdens on the environment. Prototype solar panels incorporating nanotechnology are more efficient than standard designs in converting sunlight to electricity, promising inexpensive solar power in the future. Nanostructured solar cells already are cheaper to manufacture and easier to install, since they can use print-like manufacturing processes and can be made in flexible rolls rather than discrete panels. Newer research suggests that future solar converters might even be "paintable." Nanotechnology is improving the efficiency of fuel production from normal and low-grade raw petroleum materials through better catalysis, as well as fuel consumption efficiency in vehicles and power plants through higher-efficiency combustion and decreased friction. 
Nano-bioengineering of enzymes is aiming to enable conversion of cellulose into ethanol for fuel, from wood chips, corn stalks (not just the kernels, as today), unfertilized perennial grasses, etc. Nanotechnology is already being used in numerous new kinds of batteries that are less flammable, quicker-charging, more efficient, lighter weight, and that have a higher power density and hold electrical charge longer. One new lithium-ion battery type uses a common, nontoxic virus in an environmentally benign production process. Nanostructured materials are being pursued to greatly improve hydrogen membrane and storage materials and the catalysts needed to realize fuel cells for alternative transportation technologies at reduced cost. Researchers are also working to develop a safe, lightweight hydrogen fuel tank. Various nanoscience-based options are being pursued to convert waste heat in computers, automobiles, homes, power plants, etc., to usable electrical power. An epoxy containing carbon nanotubes is being used to make windmill blades that are longer, stronger, and lighter-weight than other blades to increase the amount of electricity that windmills can generate. Researchers are developing wires containing carbon nanotubes to have much lower resistance than the high-tension wires currently used in the electric grid and thus reduce transmission power loss. To power mobile electronic devices, researchers are developing thin-film solar electric panels that can be fitted onto computer cases and flexible piezoelectric nanowires woven into clothing to generate usable energy on-the-go from light, friction, and/or body heat. Energy efficiency products are increasing in number and kinds of application. In addition to those noted above, they include more efficient lighting systems for vastly reduced energy consumption for illumination; lighter and stronger vehicle chassis materials for the transportation sector; lower energy consumption in advanced electronics; low-friction nano-engineered lubricants for all kinds of higher-efficiency machine gears, pumps, and fans; light-responsive smart coatings for glass to complement alternative heating/cooling schemes; and high-light-intensity, fast-recharging lanterns for emergency crews. Nanotechnology has the real potential to revolutionize a wide array of medical and biotechnology tools and procedures so that they are more personalized, portable, cheaper, safer, and easier to administer. Below are some examples of important advances in these areas. Quantum dots are semiconducting nanocrystals that can enhance biological imaging for medical diagnostics. When illuminated with ultraviolet light, they emit a wide spectrum of bright colors that can be used to locate and identify specific kinds of cells and biological activities. These crystals offer optical detection up to 1,000 times better than conventional dyes used in many biological tests, such as MRIs, and render significantly more information. Nanotechnology has been used in the early diagnosis of atherosclerosis, or the buildup of plaque in arteries. Researchers have developed an imaging technology to measure the amount of an antibody-nanoparticle complex that accumulates specifically in plaque. Clinical scientists are able to monitor the development of plaque as well as its disappearance. 
In addition to contributing to building and maintaining lighter, smarter, more efficient, and "greener" vehicles, aircraft, and ships, nanotechnology offers various means to improve the transportation infrastructure: Nano-engineering of steel, concrete, asphalt, and other cementitious materials, and their recycled forms, offers great promise in terms of improving the performance, resiliency, and longevity of highway and transportation infrastructure components while reducing their cost. New systems may incorporate innovative capabilities into traditional infrastructure materials, such as the ability to generate or transmit energy. Nanoscale sensors and devices may provide cost-effective continuous structural monitoring of the condition and performance of bridges, tunnels, rails, parking structures, and pavements over time. Nanoscale sensors and devices may also support an enhanced transportation infrastructure that can communicate with vehicle-based systems to help drivers maintain lane position, avoid collisions, adjust travel routes to circumnavigate congestion, and other such activities.

Tuesday, 5 May 2015

Modelling and Simulation: For Engineering Projects


Modelling and Simulation
Model: A system of postulates, data and interfaces presented as a mathematical description of an entity, process, or state of affairs (the development of equations, constraints and logic rules).
Simulation: Exercising the model and obtaining results (the implementation of the model).
Simulation has emerged as the third methodology for exploring the truth. It complements theory and experiment; it will never replace them. Simulations are applicable in the following situations:
1. When operating conditions change, e.g. temperature, pressure, etc.
2. When non-controllable factors change, e.g. weather, earthquake.
3. When behaviour depends on the variation of critical factors, e.g. fatigue or resonance may be destructive.
4. When we need to know how sensitive one factor is to changes in another.
5. Other benefits: a) useful in design, b) study of the effects of constraints, c) increased understanding.
6. Pitfalls: an assumption which the owner cannot model or verbalise, so that when two independent models clash, contradictory results arise.
Advantages of simulation:
• Saves manpower and material
• Useful even when study is not possible by other means
• Saves money through fast, consistent answers
• Can be used for education once established
• Increased flexibility, accuracy and range of operation
• New results not available before
• Improved results due to standardization
• Increased understanding
• Explicitly stated assumptions and constraints
Requirements / skills required: computer skill/expertise and time for implementation.
Drawbacks:
• Modelling errors at different levels – scientific model of reality – mathematical model – discrete numerical model – application program model – computational model
• Input errors: out-of-range inputs can give spurious results
• Precision errors: limits in numerical precision
Phases of development of a simulation model (a minimal sketch follows below):
1. Real system to mathematical model
2. Algorithm to solve the mathematical model
3. Implementation on a computer
4. Validation – user with I/O – model – evaluations
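As a tiny illustration of these phases, here is a Python sketch that goes from a mathematical model (Newton's law of cooling, dT/dt = -k(T - T_env)) to a discrete numerical model (explicit Euler stepping) to an implementation, with a crude validation against the known analytic solution. The choice of model and all parameter values are my own illustrative assumptions, not taken from this post.

import math

# 1. Mathematical model: Newton's law of cooling, dT/dt = -k * (T - T_env)
K, T_ENV, T0 = 0.3, 20.0, 90.0      # assumed illustrative parameters

# 2. Discrete numerical model: explicit Euler with time step dt
def simulate(dt=0.1, t_end=10.0):
    t, temp = 0.0, T0
    history = [(t, temp)]
    while t < t_end:
        temp += dt * (-K * (temp - T_ENV))   # Euler update
        t += dt
        history.append((t, temp))
    return history

# 3. Implementation, plus 4. Validation against the exact analytic solution
exact = lambda t: T_ENV + (T0 - T_ENV) * math.exp(-K * t)
t_last, temp_last = simulate()[-1]
print(f"simulated T({t_last:.1f}) = {temp_last:.2f}, exact = {exact(t_last):.2f}")

Shrinking the time step dt brings the numerical result closer to the exact curve, which is exactly the kind of precision-error check the drawbacks list above refers to.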

Tuesday, 28 April 2015

Cryptography: An Introduction


Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text. Encrypted messages can sometimes be broken by cryptanalysis, also called codebreaking, although modern cryptographic techniques are, in practice, extremely difficult to break. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. One of the most popular cryptography systems used on the Internet is Pretty Good Privacy, because it is effective and free. Cryptography systems can be broadly classified into symmetric-key systems, which use a single key that both the sender and recipient have, and public-key systems, which use two keys: a public key known to everyone and a private key that only the recipient of messages uses. Cryptography is the science of writing in secret code and is an ancient art; the first documented use of cryptography in writing dates back to circa 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts argue that cryptography appeared spontaneously sometime after writing was invented, with applications ranging from diplomatic missives to war-time battle plans. It is no surprise, then, that new forms of cryptography came soon after the widespread development of computer communications. In data and telecommunications, cryptography is necessary when communicating over any untrusted medium, which includes just about any network, particularly the Internet. Within the context of any application-to-application communication, there are some specific security requirements, including:
• Authentication: The process of proving one's identity. (The primary forms of host-to-host authentication on the Internet today are name-based or address-based, both of which are notoriously weak.)
• Privacy/confidentiality: Ensuring that no one can read the message except the intended receiver.
• Integrity: Assuring the receiver that the received message has not been altered in any way from the original.
• Non-repudiation: A mechanism to prove that the sender really sent this message.
Cryptography, then, not only protects data from theft or alteration, but can also be used for user authentication. There are, in general, three types of cryptographic schemes typically used to accomplish these goals: secret key (or symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions, each of which is described below. In all cases, the initial unencrypted data is referred to as plaintext. It is encrypted into ciphertext, which will in turn (usually) be decrypted into usable plaintext. In many of the descriptions below, two communicating parties will be referred to as Alice and Bob; this is the common nomenclature in the crypto field and literature to make it easier to identify the communicating parties. If there is a third or fourth party to the communication, they will be referred to as Carol and Dave. Mallory is a malicious party, Eve is an eavesdropper, and Trent is a trusted third party. There are several ways of classifying cryptographic algorithms. For present purposes, they will be categorized based on the number of keys that are employed for encryption and decryption, and further defined by their application and use. 
The three types of algorithms that will be discussed are:
• Secret Key Cryptography (SKC): Uses a single key for both encryption and decryption
• Public Key Cryptography (PKC): Uses one key for encryption and another for decryption
• Hash Functions: Use a mathematical transformation to irreversibly "encrypt" information
So, why are there so many different types of cryptographic schemes? Why can't we do everything we need with just one? The answer is that each scheme is optimized for some specific application(s). Hash functions, for example, are well suited to ensuring data integrity, because any change made to the contents of a message will result in the receiver calculating a different hash value than the one placed in the transmission by the sender. Since it is highly unlikely that two different messages will yield the same hash value, data integrity is ensured to a high degree of confidence. Secret key cryptography, on the other hand, is ideally suited to encrypting messages, thus providing privacy and confidentiality. The sender can generate a session key on a per-message basis to encrypt the message; the receiver, of course, needs the same session key to decrypt the message. Key exchange, of course, is a key application of public-key cryptography (no pun intended). Asymmetric schemes can also be used for non-repudiation and user authentication; if the receiver can obtain the session key encrypted with the sender's private key, then only this sender could have sent the message. Public-key cryptography could, theoretically, also be used to encrypt messages, although this is rarely done because secret-key cryptography operates about 1000 times faster than public-key cryptography. A small sketch of a hash-based integrity check and a toy secret-key cipher follows below.
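To make two of these schemes concrete, here is a small Python sketch showing a hash function used for an integrity check (via the standard-library hashlib module) and a toy XOR "secret-key" cipher standing in for a real symmetric algorithm. The XOR cipher is only a teaching device and is not secure; the messages and the random key are made-up examples, not anything from this post.

import hashlib
import os

# --- Hash function used for an integrity check ----------------------------
message = b"Alice owes Bob 10 dollars"            # made-up example message
digest = hashlib.sha256(message).hexdigest()      # value sent alongside the data
tampered = b"Alice owes Bob 1000 dollars"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False: any change alters the hash

# --- Toy secret-key (symmetric) cipher -------------------------------------
# XOR with a random key of the same length, a one-time-pad-style illustration
# only; real systems use vetted algorithms such as AES, not this.
key = os.urandom(len(message))                    # the shared secret key

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # The same operation both encrypts and decrypts, since XOR is its own inverse.
    return bytes(b ^ k for b, k in zip(data, key))

ciphertext = xor_cipher(message, key)             # what Alice would send to Bob
recovered = xor_cipher(ciphertext, key)           # Bob applies the same secret key
print(recovered == message)                       # True

The two print statements illustrate the division of labour described above: the hash detects any alteration of the message, while the shared secret key is what provides confidentiality.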