Optics

on Monday, 01 November 2010


Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.

Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation.
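The central rule of geometric optics for rays bending at a surface is Snell's law, n1·sin(θ1) = n2·sin(θ2). A quick Python sketch of it (the refractive indices and the 30° incidence angle below are typical textbook values chosen for illustration, not taken from the article):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Returns the refraction angle in degrees, or None for
    total internal reflection (no refracted ray exists)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# A ray passing from air (n = 1.00) into glass (n = 1.50) bends toward the normal:
print(f"{refraction_angle(1.00, 1.50, 30.0):.2f} degrees")  # ~19.47
```

Going the other way, from glass into air at a steep enough angle, the same function returns None: that is total internal reflection, the effect fiber optics rely on.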

Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modeled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems.
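The photon picture makes light's particle-like energy concrete: each photon carries E = h·f. A small sketch using the exact SI values of Planck's constant and the electronvolt (the 540 THz example frequency, roughly green light, is an illustrative choice):

```python
PLANCK_H = 6.62607015e-34  # Planck constant, J*s (exact by SI definition)
EV = 1.602176634e-19       # joules per electronvolt (exact by SI definition)

def photon_energy_ev(frequency_hz):
    """Energy carried by a single photon, E = h*f, expressed in electronvolts."""
    return PLANCK_H * frequency_hz / EV

# Green light at ~540 THz carries roughly 2.2 eV per photon:
print(f"{photon_energy_ev(540e12):.2f} eV")
```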

Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry). Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fiber optics.


Source: www.wikipedia.com

Electromagnetism

on Thursday, 28 October 2010

Electromagnetism is one of the four fundamental interactions of nature. The other three are the strong interaction, the weak interaction and gravitation. Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields.

Electromagnetism is responsible for practically all the phenomena encountered in daily life, with the exception of gravity. Ordinary matter takes its form as a result of intermolecular forces between individual molecules in matter. Electromagnetism is also the force which holds electrons and protons together inside atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons orbiting atoms.

Electromagnetism manifests as both electric fields and magnetic fields. Both fields are simply different aspects of electromagnetism, and hence are intrinsically related. Thus, a changing electric field generates a magnetic field; conversely, a changing magnetic field generates an electric field. This effect is called electromagnetic induction, and is the basis of operation for electrical generators, induction motors, and transformers. Mathematically speaking, electric and magnetic fields are components of a single electromagnetic field tensor, and they transform into one another under relative motion.

Electric fields are the cause of several common phenomena, such as electric potential (such as the voltage of a battery) and electric current (such as the flow of electricity through a flashlight). Magnetic fields are the cause of the force associated with magnets.

In quantum electrodynamics, electromagnetic interactions between charged particles can be calculated using the method of Feynman diagrams, in which we picture messenger particles called virtual photons being exchanged between charged particles. This method can be derived from the field picture through perturbation theory.

The theoretical implications of electromagnetism led to the development of special relativity by Albert Einstein in 1905.

History of Electromagnetic Theory

Originally electricity and magnetism were thought of as two separate forces. This view changed, however, with the publication of James Clerk Maxwell's 1873 Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be regulated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:

  1. Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel.
  2. Magnetic poles (or states of polarization at individual points) attract or repel one another in a similar way and always come in pairs: every north pole is yoked to a south pole.
  3. An electric current in a wire creates a circular magnetic field around the wire, its direction depending on that of the current.
  4. A current is induced in a loop of wire when it is moved towards or away from a magnetic field, or a magnet is moved towards or away from it, the direction of current depending on that of the movement.
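Effect 1, the inverse-square force between charges, is Coulomb's law, F = k·q1·q2/r². A quick Python sketch (the constants are standard SI values; the 1 nm separation is an illustrative choice):

```python
COULOMB_K = 8.9875517862e9   # Coulomb constant, N*m^2/C^2
E_CHARGE = -1.602176634e-19  # charge of the electron, C

def coulomb_force(q1, q2, r):
    """Inverse-square force between two point charges (effect 1 above).
    Positive result means repulsion (like charges), negative means attraction."""
    return COULOMB_K * q1 * q2 / r**2

# Two electrons 1 nm apart repel each other; doubling the distance quarters the force.
f1 = coulomb_force(E_CHARGE, E_CHARGE, 1e-9)
f2 = coulomb_force(E_CHARGE, E_CHARGE, 2e-9)
print(f"{f1:.3e} N, ratio = {f1 / f2:.1f}")
```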

While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflected from magnetic north when the electric current from the battery he was using was switched on and off. This deflection convinced him that magnetic fields radiate from all sides of a wire carrying an electric current, just as light and heat do, and that it confirmed a direct relationship between electricity and magnetism.

At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism.

His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.

This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th century mathematical physics. It had far-reaching consequences, one of which was the understanding of the nature of light. Light and other electromagnetic waves take the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
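The relation between those frequencies and the familiar wavelengths of each band is λ = c/f. A small sketch (the example frequencies for FM radio and green light are illustrative round numbers):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by SI definition)

def wavelength_m(frequency_hz):
    """Vacuum wavelength of an electromagnetic wave: lambda = c / f."""
    return C / frequency_hz

# FM radio (~100 MHz) has meter-scale waves; green light (~540 THz) is ~555 nm:
print(f"radio: {wavelength_m(100e6):.2f} m")
print(f"light: {wavelength_m(540e12) * 1e9:.0f} nm")
```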

Ørsted was not the only person to examine the relation between electricity and magnetism. In 1802 Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle by electrostatic charges. Actually, no galvanic current existed in the setup and hence no electromagnetism was present. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community.

Classical Electrodynamics

The scientist William Gilbert proposed, in his De Magnete (1600), that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle, but the link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment.[1] Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.

An accurate theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.

One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant, dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaces classical kinematics with a new theory of kinematics that is compatible with classical electromagnetism. (For more information, see History of special relativity.)

In addition, relativity theory shows that in moving frames of reference a magnetic field transforms to a field with a nonzero electric component and vice versa; thus firmly showing that they are two sides of the same coin, and thus the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity.)

The Photoelectric Effect

In another paper published in that same year, Albert Einstein undermined the very foundations of classical electromagnetism. His theory of the photoelectric effect (for which he won the Nobel Prize in Physics) posited that light could exist in discrete particle-like quantities, which later came to be known as photons. Einstein's theory of the photoelectric effect extended the insights that appeared in the solution of the ultraviolet catastrophe presented by Max Planck in 1900. In his work, Planck showed that hot objects emit electromagnetic radiation in discrete packets, which leads to a finite total energy emitted as black body radiation. Both of these results were in direct contradiction with the classical view of light as a continuous wave, although it is now known that the photoelectric effect does not, in fact, compel one to any conclusion about light being made of "photons", as discussed in the photoelectric effect article. Planck's and Einstein's theories were progenitors of quantum mechanics, which, when formulated in 1925, necessitated the invention of a quantum theory of electromagnetism. This theory, completed in the 1940s, is known as quantum electrodynamics (or "QED"), and, in situations where perturbation theory is applicable, is one of the most accurate theories known to physics.

Electromagnetic Phenomena

With the exception of gravitation, electromagnetic phenomena as described by quantum electrodynamics (which includes as a limiting case classical electrodynamics) account for almost all physical phenomena observable to the unaided human senses, including light and other electromagnetic radiation, all of chemistry, most of mechanics (excepting gravitation), and of course magnetism and electricity. Magnetic monopoles (and "Gilbert" dipoles) are not strictly electromagnetic phenomena, since in standard electromagnetism, magnetic fields are generated not by true "magnetic charge" but by currents. There are, however, condensed matter analogs of magnetic monopoles in exotic materials (spin ice) created in the laboratory.


Source: www.wikipedia.com

Laws of Thermodynamics

on Wednesday, 20 October 2010

The laws of thermodynamics describe the transport of heat and work in thermodynamic processes. These laws have become some of the most important fundamental laws in physics and other sciences associated with thermodynamics.

Classical thermodynamics, which is focused on systems in thermodynamic equilibrium, can be considered separately from non-equilibrium thermodynamics. This article focuses on classical or thermodynamic equilibrium thermodynamics.

The four principles:
  • The zeroth law of thermodynamics, which underlies the basic definition of temperature.
  • The first law of thermodynamics, which mandates conservation of energy, and states in particular that the flow of heat is a form of energy transfer.
  • The second law of thermodynamics, which states that the entropy of an isolated macroscopic system never decreases, or (equivalently) that perpetual motion machines are impossible.
  • The third law of thermodynamics, which concerns the entropy of a perfect crystal at absolute zero temperature, and which implies that it is impossible to cool a system all the way to exactly absolute zero.

There have been suggestions of additional laws, but none of them has anything like the generality of the accepted laws, and they are not mentioned in standard textbooks.

Zeroth Law

"If two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other".

When two systems, each in its own thermodynamic equilibrium, are put in purely thermal connection with each other, radiative or material, there will be a net exchange of heat between them until they reach thermal equilibrium, that is, the state of having equal temperature. Although this concept of thermodynamics is fundamental, the need to state it explicitly was not widely perceived until the first third of the 20th century, long after the first three principles were already widely in use. Hence it was numbered zero, before the subsequent three. The Zeroth Law implies that thermal equilibrium, viewed as a binary relation, is transitive. Since a system in thermodynamic equilibrium is defined to be in thermal equilibrium with itself, and since, if a system is in thermal equilibrium with another, the latter is in thermal equilibrium with the former, thermal equilibrium is furthermore an equivalence relation: if a system A is in thermal equilibrium with both systems B and C, then systems B and C are in thermal equilibrium with each other. In other words, if A is at the same temperature (in kelvins) as both B and C, then B and C must be at the same temperature as each other.
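The "net exchange of heat until temperatures equalize" can be illustrated with a simple energy balance (a sketch assuming constant heat capacities and no phase changes; the heat-capacity and temperature values are illustrative):

```python
def equilibrium_temperature(c1, t1, c2, t2):
    """Final common temperature of two bodies with heat capacities c1, c2 (J/K)
    and initial temperatures t1, t2 (K), from the energy balance
    c1*(t1 - tf) = c2*(tf - t2)."""
    return (c1 * t1 + c2 * t2) / (c1 + c2)

# Two equal bodies of water (~4184 J/K each) at 350 K and 290 K meet in the middle:
print(equilibrium_temperature(4184, 350, 4184, 290))  # 320.0
```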

First Law

"Energy can be neither created nor destroyed. It can only change forms.
In any process in an isolated system, the total energy remains the same.
For a thermodynamic cycle the net heat supplied to the system equals the net work done by the system".

The First Law states that energy cannot be created or destroyed; rather, in a steady-state process the amount of energy lost must equal the amount of energy gained. This is the statement of conservation of energy for a thermodynamic system. It refers to the two ways that a closed system transfers energy to and from its surroundings – by the process of heat transfer and the process of mechanical work. The rate of gain or loss in the stored energy of a system is determined by the rates of these two processes. In open systems, the flow of matter is another energy transfer mechanism, and extra terms must be included in the expression of the first law.

The First Law clarifies the nature of energy. It is a stored quantity which is independent of any particular process path, i.e., it is independent of the system history. If a system undergoes a thermodynamic cycle, whether it becomes warmer, cooler, larger, or smaller, then it will have the same amount of energy each time it returns to a particular state. Mathematically speaking, energy is a state function and infinitesimal changes in the energy are exact differentials.

All laws of thermodynamics but the First are statistical and simply describe the tendencies of macroscopic systems. For microscopic systems with few particles, the variations in the parameters become larger than the parameters themselves, and the assumptions of thermodynamics become meaningless.

Fundamental thermodynamic relation

The first law can be expressed as the fundamental thermodynamic relation:

Heat supplied to a system = increase in internal energy of the system + work done by the system

Increase in internal energy of a system = heat supplied to the system - work done by the system

dU = T dS - p dV

where, for a reversible process, T dS = δQ and p dV = δW.

Therefore we can say δQ = dU + δW, where:
U is internal energy
T is temperature
S is entropy
p is pressure
V is volume

This is a statement of conservation of energy: The net change in internal energy (dU) equals the heat energy that flows in (TdS), minus the energy that flows out via the system performing work (pdV).
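The bookkeeping in this statement can be checked with a small finite-difference sketch (the values of T, p, dS and dV below are arbitrary illustrative numbers, and reversible changes are assumed):

```python
# Finite-difference illustration of dU = T dS - p dV (reversible changes assumed).
T = 300.0       # temperature, K
p = 101325.0    # pressure, Pa (1 atm)
dS = 0.02       # small entropy increase, J/K
dV = 1e-5       # small volume increase, m^3

heat_in = T * dS         # T dS: heat energy flowing in
work_out = p * dV        # p dV: energy leaving as work done by the system
dU = heat_in - work_out  # net change in internal energy
print(f"dU = {dU:.5f} J")
```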


Second Law


Consider two isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium in itself (but not in equilibrium with each other). Then let some event break the isolation that separates the two systems, so that they become able to exchange matter or energy. Then wait until the exchanging systems reach mutual thermodynamic equilibrium. The sum of the entropies of the initial two isolated systems is less than or equal to the entropy of the final exchanging systems. In the process of reaching a new thermodynamic equilibrium, entropy has increased (or at least has not decreased). Both matter and energy exchanges can contribute to the entropy increase.

In a few words, the second law states that "spontaneous natural processes increase entropy overall." Another brief statement is "heat can spontaneously flow from a higher-temperature region to a lower-temperature region, but not the other way around." Nevertheless, energy can be transferred from cold to hot: for example, a refrigerator cools its contents while warming the surrounding air, though even there every individual transfer of heat is from hot to cold. Heat flows from the cold refrigerator air to the even-colder refrigerant; the refrigerant is then warmed by compression (which requires an external source of energy to do thermodynamic work); heat then flows from the hot refrigerant to the outside air; the refrigerant cools by expansion back to its initial volume (thus doing thermodynamic work on the environment), and the cycle repeats. Entropy is also increased by processes of mixing without any transfer of energy as heat.
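The basic entropy bookkeeping behind the refrigerator example is easy to check for heat flowing between two reservoirs: the hot side loses Q/T_hot, the cold side gains Q/T_cold, and the total is positive only when heat flows from hot to cold. A sketch with illustrative numbers:

```python
def total_entropy_change(q, t_hot, t_cold):
    """Entropy bookkeeping for heat q (J) flowing from a reservoir at t_hot (K)
    to one at t_cold (K): the hot side loses q/t_hot, the cold side gains q/t_cold."""
    return q / t_cold - q / t_hot

# 1000 J flowing from 400 K to 300 K: total entropy rises, as the second law requires.
print(f"{total_entropy_change(1000, 400, 300):.3f} J/K")
# The reverse flow would decrease total entropy, so it never happens spontaneously:
print(f"{total_entropy_change(1000, 300, 400):.3f} J/K")
```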

A way of thinking about the second law is to consider entropy as a measure of ignorance of the microscopic details of the motion and configuration of the system given only predictable reproducibility of bulk or macroscopic behaviour. So, for example, one has less knowledge about the separate fragments of a broken cup than about an intact one, because when the fragments are separated, one does not know exactly whether they will fit together again, or whether perhaps there is a missing shard. Solid crystals, the most regularly structured form of matter, with considerable predictability of microscopic configuration, as well as predictability of bulk behaviour, have low entropy values; and gases, which behave predictably in bulk even when their microscopic motions are unknown, have high entropy values. This is because the positions of the crystal atoms are more predictable than are those of the gas atoms, for a given degree of bulk predictability.

The entropy of an isolated macroscopic system never decreases. However, a microscopic system may exhibit fluctuations of entropy opposite to that stated by the Second Law (see Maxwell's demon and Fluctuation Theorem).

Third Law

"As temperature approaches absolute zero, the entropy of a system approaches a constant minimum".

Briefly, this postulates that the entropy of a system approaches a definite constant as its temperature approaches absolute zero, and it gives rise to the idea of absolute zero as a limiting, unattainable temperature.


Source: www.wikipedia.com

Mechanics

on Thursday, 14 October 2010

Mechanics (Greek Μηχανική) is the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The discipline has its roots in several ancient civilizations (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, and especially Newton, laid the foundation for what is now known as classical mechanics.

Classical versus Quantum

The major division of the mechanics discipline separates classical mechanics from quantum mechanics.

Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's Laws of motion in Principia Mathematica, while quantum mechanics didn't appear until 1900. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.

Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects, each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at molecular and (sub)atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult in quantum mechanics and hence remains useful and well used.

Einsteinian versus Newtonian

Analogous to the quantum versus classical reformulation, Einstein's general and special theories of relativity have expanded the scope of mechanics beyond that of Newton and Galileo, and made fundamental corrections to it, corrections which become significant and even dominant as the speeds of material objects approach the speed of light, which cannot be exceeded. Relativistic corrections are also needed in quantum mechanics; general relativity, however, has not been integrated with it, and the two theories remain incompatible, a hurdle which must be overcome in developing a theory of quantum gravity.

Antiquity

The main theory of mechanics in antiquity was Aristotelian mechanics. A later developer in this tradition was Hipparchus.

Medieval Age

In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. This led to the development of the theory of impetus by the 14th-century French philosopher Jean Buridan, which developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies.

On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Nathanel (Iraqi, of Baghdad) stated that constant force imparts constant acceleration, while the main properties of uniformly accelerated motion (as of falling bodies) were worked out by the 14th-century Oxford Calculators.

Early Modern Age

Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.

There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and the systematic mathematics therein did not and could not have been stated earlier because calculus had not been developed. However, many of the ideas, particularly as pertain to inertia (impetus) and falling bodies had been developed and stated by earlier researchers, both the then-recent Galileo and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead similar to modern statements and hypotheses is often debatable.

Modern Age

Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th century ideas.

Types of Mechanical Bodies

Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.

Thus the often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.

Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.

For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.

Sub-disciplines in Mechanics

The following are two lists of various subjects that are studied in mechanics.

Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.

Classical mechanics

The following are described as forming Classical mechanics:

  • Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
  • Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
  • Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
  • Celestial mechanics, the motion of heavenly bodies: planets, comets, stars, galaxies, etc.
  • Astrodynamics, spacecraft navigation, etc.
  • Solid mechanics, elasticity, the properties of deformable bodies.
  • Fracture mechanics
  • Acoustics, sound (the propagation of density variations) in solids, fluids and gases.
  • Statics, semi-rigid bodies in mechanical equilibrium
  • Fluid mechanics, the motion of fluids
  • Soil mechanics, mechanical behavior of soils
  • Continuum mechanics, mechanics of continua (both solid and fluid)
  • Hydraulics, mechanical properties of liquids
  • Fluid statics, liquids in equilibrium
  • Applied mechanics, or Engineering mechanics
  • Biomechanics, solids, fluids, etc. in biology
  • Biophysics, physical processes in living organisms
  • Statistical mechanics, assemblies of particles too large to be described in a deterministic way
  • Relativistic or Einsteinian mechanics, universal gravitation

Quantum Mechanics

The following are categorized as being part of Quantum mechanics:

  • Particle physics, the motion, structure, and reactions of particles
  • Nuclear physics, the motion, structure, and reactions of nuclei
  • Condensed matter physics, quantum gases, solids, liquids, etc.
  • Quantum statistical mechanics, large assemblies of particles


Source: www.wikipedia.com

How to Learn Classical Mechanics

on Monday, 11 October 2010

Classical Mechanics is often confusing territory, but it is also one of the most interesting topics in physics, because it is not at all formula-dependent. Believe it or not: all the classical mechanics you will ever learn is based on only FOUR laws and nothing else... absolutely nothing: the three laws of Newton and the law of conservation of energy.

Steps:

  1. Understand the motion (very, very important). Go on reading the problem step by step until you understand everything that is going on in the situation. Mechanics is all about the study of motion, so visualize the motion.
  2. NEVER blindly apply formulas. While learning any formula in physics you must keep in mind two things: where the formula applies and where it cannot be applied, AND how on earth we got the stupid formula.
  3. E.g.: the formulas for elastic and inelastic collisions are applicable only to free bodies. Momentum is conserved only when no external forces act (only INTERNAL ones). Potential energy is defined only for conservative forces. Work done = change in kinetic energy... to name a few.
  4. Everything you will study in classical mechanics HAS to follow Newton's laws, so if you are confused by a situation, try applying Newton's laws to work out what forces are being applied where, and their effects.
  5. Try simplifying the situation as much as you can. Try changing reference frames. But at the end of the problem, don't neglect the assumptions you have made. For example, it is easier to analyze the motion of objects in a car if you are sitting inside the car and observing from that frame of reference, because you don't have to consider the motion of the car; but at the end of the problem all the answers you get will be in the car's frame, so convert them to the required frame by finally considering the motion of the car.
  6. Remember: don't learn formulas by heart unless you know where they came from. Then, if you are ever stuck, you can easily derive the formula, or even a simpler version of it for the specific problem.
  7. Have fun with Mechanics!!!
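Step 5's advice about changing reference frames amounts, at classical speeds, to Galilean velocity addition. A tiny sketch (the car and ball speeds are made-up illustrative numbers):

```python
def to_ground_frame(v_in_car, v_car):
    """Galilean velocity addition: a velocity measured inside the car, converted
    back to the ground frame by adding the car's own velocity."""
    return v_in_car + v_car

# A ball thrown forward at 5 m/s inside a car doing 20 m/s:
print(to_ground_frame(5.0, 20.0))  # 25.0 m/s relative to the ground
```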

Tips:

  • You must know how to apply Newton's laws. They are a must. So ask you teacher to teach them properly.
  • Don't hesitate to ask doubts while learning. You will remember a concept more easily if you ever had a doubt with it and got it clarified personally.
  • Always bombard the teacher with questions like: "Why to apply only this formula?", "Why only this?", "Why not that?", "Is there any other way to do it?"
  • If you don't understand whether to apply a formula or not, fall back to a previous concept you used to derive that formula (That's why I always say you should know where the formula comes from) You can fall as back as to Newton's laws for help.
  • Find out other ways to do a problem, no matter how complicated they are. They will certainly help with other similar situations.
  • Don't hesitate to write things down; you will see the problem more clearly then.

Warnings:

  • Don't involve yourself too much in Mechanics or you will get addicted to it.
  • Classical Mechanics is only applicable at low velocities.


Source: www.wikihow.com

Electricity

on Rabu, 06 Oktober 2010

Electricity (from the New Latin ēlectricus, "amber-like") is a general term encompassing a variety of phenomena resulting from the presence and flow of electric charge. These include many easily recognizable phenomena, such as lightning and static electricity, but also less familiar concepts, such as the electromagnetic field and electromagnetic induction.

In general usage, the word "electricity" adequately refers to a number of physical effects. In scientific usage, however, the term is vague, and these related, but distinct, concepts are better identified by more precise terms:

Electric charge: a property of some subatomic particles, which determines their electromagnetic interactions. Electrically charged matter is influenced by, and produces, electromagnetic fields.

Electric current: a movement or flow of electrically charged particles, typically measured in amperes.

Electric field: an influence produced by an electric charge on other charges in its vicinity.
Electric potential: the capacity of an electric field to do work on an electric charge, typically measured in volts.

Electromagnetism: a fundamental interaction between the magnetic field and the presence and motion of an electric charge.

Electrical phenomena have been studied since antiquity, though advances in the science were not made until the seventeenth and eighteenth centuries. Practical applications for electricity however remained few, and it would not be until the late nineteenth century that engineers were able to put it to industrial and residential use. The rapid expansion in electrical technology at this time transformed industry and society. Electricity's extraordinary versatility as a source of energy means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is the backbone of modern industrial society, and is expected to remain so for the foreseeable future.

Source : www.wikipedia.com

How to Understand Classical Physics

on Minggu, 03 Oktober 2010

There are five main areas to understand in basic classical physics: mechanics, electromagnetism, waves, optics, and heat. Each of these areas is based on a number of laws developed by certain clever people in the 16th, 17th, and 18th centuries. This article will give you an extremely brief overview of these laws and some ways in which they have been applied to understanding our world.

Steps:

1.    Motion of projectiles in two dimensions: A projectile moving with a constant gravitational force, without air resistance, will have a constant acceleration in the direction of this force.

2.    Know that this object's horizontal velocity will be constant while the vertical component is constantly changing.

3.  Newton's first law of motion: An object in uniform motion tends to stay in motion unless acted upon by an external force. Take for example throwing a basketball when there is absolutely no air resistance.

4.   Resolve its velocity (not to be confused with acceleration) into horizontal and vertical components.

·         In the vertical component, the initial velocity is eventually reduced until it becomes negative (the ball falls towards the earth), because the force of gravity gives it an acceleration towards the earth.

·         In the horizontal component, since there is no force acting against it, its acceleration will be zero; therefore its velocity remains constant.
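The two components described above can be sketched in a few lines of Python. The initial speed, launch angle, and value of g below are illustrative assumptions, not from the article:

```python
# A minimal sketch of the projectile described above: constant gravity,
# no air resistance. The horizontal velocity never changes; the vertical
# component is steadily reduced by gravity until it turns negative.
import math

g = 9.8                      # m/s^2, acceleration due to gravity (downward)
v0, angle = 20.0, math.radians(45)   # illustrative launch speed and angle
vx = v0 * math.cos(angle)    # horizontal component: constant
vy0 = v0 * math.sin(angle)   # initial vertical component

def velocity(t):
    """Velocity components at time t (seconds after launch)."""
    return vx, vy0 - g * t

# Horizontal velocity is identical at t = 0 and t = 2 s...
assert velocity(0.0)[0] == velocity(2.0)[0]
# ...while the vertical component has decreased and eventually goes negative.
assert velocity(2.0)[1] < velocity(0.0)[1]
assert velocity(3.0)[1] < 0   # falling back towards the earth
```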

Source: www.wikihow.com


Thermodynamic System

on Jumat, 01 Oktober 2010

A thermodynamic system is a precisely defined macroscopic region of the universe, often called a physical system, that is studied using the principles of thermodynamics.

All space in the universe outside the thermodynamic system is known as the surroundings, the environment, or a reservoir. A system is separated from its surroundings by a boundary, which may be notional or real, but which by convention delimits a finite volume. Exchanges of work, heat, or matter between the system and the surroundings may take place across this boundary. Thermodynamic systems are often classified by specifying the nature of the exchanges that are allowed to occur across their boundaries.

A thermodynamic system is characterized and defined by a set of thermodynamic parameters associated with the system. The parameters are experimentally measurable macroscopic properties, such as volume, pressure, temperature, electric field, and others.

The set of thermodynamic parameters necessary to uniquely define a system is called the thermodynamic state of a system. The state of a system is expressed as a functional relationship, the equation of state, between its parameters. A system is in thermodynamic equilibrium when the state of the system does not change with time.

Originally, in 1824, Sadi Carnot described a thermodynamic system as the working substance under study.


Thermodynamics describes the physics of matter using the concept of the thermodynamic system, a region of the universe that is under study. All quantities, such as pressure or mechanical work, in an equation refer to the system unless labeled otherwise. As thermodynamics is fundamentally concerned with the flow and balance of energy and matter, systems are distinguished depending on the kinds of interaction they undergo and the types of energy they exchange with the surrounding environment.  

Isolated systems are completely isolated from their environment. They do not exchange heat, work or matter with their environment. An example of an isolated system is a completely insulated rigid container, such as a completely insulated gas cylinder. Closed systems are able to exchange energy (heat and work) but not matter with their environment. A greenhouse is an example of a closed system exchanging heat but not work with its environment. Whether a system exchanges heat, work or both is usually thought of as a property of its boundary. Open systems may exchange any form of energy as well as matter with their environment. A boundary allowing matter exchange is called permeable. The ocean would be an example of an open system.
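The three categories above boil down to which exchanges the boundary permits. A toy Python classifier makes the rule explicit (the function name and examples are my own illustration, not from the article):

```python
# Toy classifier for the three system types described above, based only
# on what the boundary lets through.

def classify(exchanges_matter, exchanges_energy):
    """Return the thermodynamic system type for the allowed exchanges."""
    if exchanges_matter:
        return "open"      # matter (and energy) may cross the boundary
    if exchanges_energy:
        return "closed"    # energy (heat and/or work) but not matter
    return "isolated"      # nothing crosses the boundary

assert classify(False, False) == "isolated"  # insulated rigid gas cylinder
assert classify(False, True) == "closed"     # greenhouse
assert classify(True, True) == "open"        # the ocean
```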

In practice, a system can never be absolutely isolated from its environment, because there is always at least some slight coupling, such as gravitational attraction. In analyzing a system in steady-state, the energy into the system is equal to the energy leaving the system [1].

An example system is the system of hot liquid water and solid table salt in a sealed, insulated test tube held in a vacuum (the surroundings). The test tube constantly loses heat in the form of black-body radiation, but the heat loss progresses very slowly. If there is another process going on in the test tube, for example the dissolution of the salt crystals, it will probably occur so quickly that any heat lost to the test tube during that time can be neglected. Thermodynamics in general does not measure time, but it does sometimes accept limitations on the time frame of a process.

History
The first to develop the concept of a thermodynamic system was the French physicist Sadi Carnot, whose 1824 Reflections on the Motive Power of Fire studied what he called the working substance (typically a body of water vapor) in steam engines, in regard to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a heat reservoir (a boiler), a cold reservoir (a stream of cold water), or a piston (on which the working body could do work by pushing it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings, and began referring to the system as a "working body." In his 1850 manuscript On the Motive Power of Heat, Clausius wrote:

"With every change of volume (to the working body) a certain amount of work must be done by the gas or upon it, since by its expansion it overcomes an external pressure, and since its compression can be brought about only by an exertion of external pressure. To this excess of work done by the gas or upon it there must correspond, by our principle, a proportional excess of heat consumed or produced, and the gas cannot give up to the 'surrounding medium' the same amount of heat as it receives."

The article Carnot heat engine shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine, together with the Carnot engine as it is typically modeled in current use.


Boundary
A system boundary is a real or imaginary volumetric demarcation region drawn around a thermodynamic system across which quantities such as heat, mass, or work can flow.[1] In short, a thermodynamic boundary is a division between a system and its surroundings.

Boundaries can also be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in an engine, a fixed boundary means the piston is locked at its position; as such, a constant volume process occurs. In that same engine, a moveable boundary allows the piston to move in and out. Boundaries may be real or imaginary. For closed systems, boundaries are real, while for open systems boundaries are often imaginary. A boundary may be adiabatic, isothermal, diathermal, insulating, permeable, or semipermeable.

In practice, the boundary is simply an imaginary dotted line drawn around a volume when there is going to be a change in the internal energy of that volume. Anything that passes across the boundary that effects a change in the internal energy needs to be accounted for in the energy balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

Surroundings
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, the surroundings may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in analysis of the system, except in regard to these interactions.

Open System
In open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: the increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system. The first law for open systems is given by:

dU = dU_in + dQ - dU_out - dW

where U_in is the average internal energy entering the system and U_out is the average internal energy leaving the system. 
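A minimal numeric check of this balance, with made-up illustrative values for each term:

```python
# Open-system first law: dU = dU_in + dQ - dU_out - dW.
# All values below are illustrative, in joules.

dU_in  = 50.0   # internal energy carried in by entering matter
dQ     = 20.0   # heat added to the system
dU_out = 30.0   # internal energy carried out by leaving matter
dW     = 15.0   # work done BY the system on its surroundings

dU = dU_in + dQ - dU_out - dW
assert dU == 25.0   # net increase in the system's internal energy
```

Note the sign convention: matter and heat flowing in increase U, while matter flowing out and work done by the system decrease it.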

The region of space enclosed by open system boundaries is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above which is performed on the fluid (this is also often called PV work) and shaft work which may be performed on some mechanical device.
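The flow (PV) work described above can be illustrated with a quick back-of-the-envelope computation. The pressure and parcel volume below are assumed illustrative values, not from the article:

```python
# Flow (PV) work: pushing a parcel of fluid of volume V across the
# control-volume boundary against pressure p requires work p * V.

p = 101_325.0   # Pa, pressure at the inlet (about 1 atm, illustrative)
V = 0.001       # m^3, volume of the parcel pushed in (one litre)

flow_work_in = p * V   # J, work done ON the system by the entering fluid
print(flow_work_in)    # roughly 101 J
```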