Thursday 26 January 2012
HERBARIUM
In botany, a herbarium (plural: herbaria), sometimes known by the Anglicized term herbar, is a collection of preserved plant specimens. These specimens may be whole plants or plant parts: they will usually be in a dried form, mounted on a sheet, but depending upon the material may also be kept in alcohol or another preservative. The same term is often used in mycology to describe an equivalent collection of preserved fungi, otherwise known as a fungarium. The term can also refer to the building where the specimens are stored, or to the scientific institute that not only stores but researches these specimens. The specimens in a herbarium are often used as reference material in describing plant taxa; some specimens may be types. A xylarium is a herbarium specialising in specimens of wood. A hortorium (as in the Liberty Hyde Bailey Hortorium) is one specialising in preserved specimens of cultivated plants.

Specimen preservation
To preserve their form and color, plants collected in the field are spread flat on sheets of newsprint and dried, usually in a plant press, between blotters or absorbent paper. The specimens, which are then mounted on sheets of stiff white paper, are labeled with all essential data, such as date and place found, description of the plant, altitude, and special habitat conditions. The sheet is then placed in a protective case. As a precaution against insect attack, the pressed plant is frozen or poisoned and the case disinfected.

Certain groups of plants are soft, bulky, or otherwise not amenable to drying and mounting on sheets. For these plants, other methods of preparation and storage may be used. For example, conifer cones and palm fronds may be stored in labeled boxes. Representative flowers or fruits may be pickled in formaldehyde to preserve their three-dimensional structure. Small specimens, such as mosses and lichens, are often air-dried and packaged in small paper envelopes.
No matter the method of preservation, detailed information on where and when the plant was collected, the habitat, the color (since it may fade over time), and the name of the collector is usually included.

Collections management
A large herbarium may have hundreds of cases filled with specimens. Most herbaria use a standard system for organizing their specimens into herbarium cases. Specimen sheets are stacked in groups by the species to which they belong and placed into a large lightweight folder that is labelled on the bottom edge. Groups of species folders are then placed together into larger, heavier folders by genus. The genus folders are then sorted by taxonomic family, according to the standard system selected by the herbarium, and placed into pigeonholes in herbarium cabinets.

Locating a specimen filed in the herbarium requires knowing the nomenclature and classification used by the herbarium. It also requires familiarity with possible name changes that have occurred since the specimen was collected, since the specimen may be filed under an older name. Modern herbaria often maintain electronic databases of their collections, and many have initiatives to digitize specimens to produce a virtual herbarium. These records and images are made publicly accessible via the Internet when possible.

Uses
Herbaria are essential for the study of plant taxonomy, the study of geographic distributions, and the stabilizing of nomenclature. It is therefore desirable to include in a specimen as much of the plant as possible (e.g., flowers, stems, leaves, seed, and fruit). Linnaeus' herbarium now belongs to the Linnean Society in England.

Specimens housed in herbaria may be used to catalogue or identify the flora of an area. A large collection from a single area is used in writing a field guide or manual to aid in the identification of plants that grow there.
With more specimens available, the author of the guide will better understand the variability of form in the plants and the natural distribution over which the plants grow. Herbaria also preserve a historical record of change in vegetation over time. In some cases, plants become extinct in one area, or may become extinct altogether. In such cases, specimens preserved in a herbarium can represent the only record of the plant's original distribution. Environmental scientists make use of such data to track changes in climate and human impact. Herbaria have also proven very useful as sources of plant DNA for use in taxonomy and molecular systematics.

Many kinds of scientists use herbaria to preserve voucher specimens: representative samples of plants used in a particular study, kept to demonstrate precisely the source of the data. Herbaria may also be a repository of viable seeds for rare species.[1]

Largest herbaria
Many universities, museums, and botanical gardens maintain herbaria. The largest herbaria in the world, in approximate order of decreasing size, are:

Muséum National d'Histoire Naturelle (P) (Paris, France)
New York Botanical Garden (NY) (Bronx, New York, USA)
Komarov Botanical Institute (LE) (St. Petersburg, Russia)
Royal Botanic Gardens (K) (Kew, England, UK)
Conservatoire et Jardin botaniques de la Ville de Genève (G) (Geneva, Switzerland)
Missouri Botanical Garden (MO) (St. Louis, Missouri, USA)
British Museum of Natural History (BM) (London, England, UK)
Harvard University (HUH) (Cambridge, Massachusetts, USA)
Swedish Museum of Natural History (S) (Stockholm, Sweden)
United States National Herbarium (Smithsonian Institution) (US) (Washington, DC, USA)
Nationaal Herbarium Nederland (L) (Leiden, the Netherlands)
Université Montpellier (MPU) (Montpellier, France)
Université Claude Bernard (LY) (Villeurbanne Cedex, France)
Herbarium Universitatis Florentinae (FI) (Florence, Italy)
National Botanic Garden of Belgium (BR) (Meise, Belgium)
University of Helsinki (H) (Helsinki, Finland)
Botanischer Garten und Botanisches Museum Berlin-Dahlem, Zentraleinrichtung der Freien Universität Berlin (B) (Berlin, Germany)
The Field Museum (F) (Chicago, Illinois, USA)
University of Copenhagen (C) (Copenhagen, Denmark)
Chinese National Herbarium (Chinese Academy of Sciences) (PE) (Beijing, People's Republic of China)
University and Jepson Herbaria (UC/JEPS) (Berkeley, California, USA)
Herbarium Bogoriense (BO) (Bogor, West Java, Indonesia)
Royal Botanic Garden, Edinburgh (E) (Edinburgh, Scotland, UK)
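As an aside, the species-genus-family filing scheme described under Collections management can be sketched as a tiny data model. The genus-to-family lookup and all names below are hypothetical illustrations, not any herbarium's actual catalogue.

```python
# Sketch of a herbarium filing hierarchy: specimen sheets are grouped into
# species folders, species folders into genus folders, and genus folders
# are filed by taxonomic family. All names here are illustrative only.

family_of_genus = {   # hypothetical lookup table for the filing system
    "Quercus": "Fagaceae",
    "Rosa": "Rosaceae",
}

def filing_path(genus, species):
    """Return the cabinet path (family/genus folder/species folder)."""
    family = family_of_genus[genus]
    return f"{family}/{genus}/{genus} {species}"

print(filing_path("Quercus", "robur"))  # Fagaceae/Quercus/Quercus robur
```

As the text notes, a real lookup would also have to track name changes, since a specimen may still be filed under an older name.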
NOMENCLATURE
Nomenclature is a term that applies either to a list of names or terms, or to the system of principles, procedures and terms related to naming, which is the assigning of a word or phrase to a particular object or property.[1] The principles of naming vary from the relatively informal conventions of everyday speech to the internationally agreed principles, rules and recommendations that govern the formation and use of the specialist terms used in scientific and other disciplines.

Naming "things" is a part of our general communication using words and language: it is an aspect of everyday taxonomy as we distinguish the objects of our experience, together with their similarities and differences, which we identify, name and classify. The use of names, as the many different kinds of nouns embedded in different languages, connects nomenclature to theoretical linguistics, while the way we mentally structure the world in relation to word meanings and experience relates to the philosophy of language. Onomastics, the study of proper names and their origins, includes: anthroponymy, concerned with human names, including personal names, surnames and nicknames; toponymy, the study of place names; and etymology, the derivation, history and use of names as revealed through comparative and descriptive linguistics.

The scientific need for simple, stable and internationally accepted systems for naming objects of the natural world has generated many formal nomenclatural systems. Probably the best known of these are the five codes of biological nomenclature that govern the Latinized scientific names of organisms.

Definition & criteria
Nomenclature is a system of words used in a particular discipline. It provides rules for naming all known living things systematically, and similar systems are used in chemistry, for example in naming carbon and hydrogen compounds.
Etymology
The word nomenclature is derived from the Latin nomen (name) and calare (to call); compare the Ancient Greek ονοματοκλήτωρ, from όνομα (onoma), meaning name, which is equivalent to the Old English nama and Old High German namo, derived from the Sanskrit nama. The Latin term nomenclatura refers to a list of names, as does the word nomenclator, which can also indicate a provider or announcer of names.

Onomastics and nomenclature
The study of proper names is known as onomastics,[2] which has a wide-ranging scope encompassing all names, all languages, and all geographical and cultural regions. The distinction between onomastics and nomenclature is not readily clear: onomastics is an unfamiliar discipline to most people, and the use of nomenclature in an academic sense is also not commonly known. Although the two fields overlap, nomenclature concerns itself more with the rules and conventions that are used for the formation of names.

Naming as a cultural activity
Names provide us with a way of structuring and mapping the world in our minds; in some way, they mirror or represent the objects of our experience.

Names, words, language and meaning
Elucidating the connections between language (especially names and nouns), meaning and the way we perceive the world has provided a rich field of study for philosophers and linguists. Relevant areas of study include: the distinction between proper names and proper nouns;[3] and the relationship between names,[4] their referents,[5] meanings (semantics), and the structure of language.

Folk taxonomy
Modern scientific taxonomy has been described as "basically a Renaissance codification of folk taxonomic principles."[6] Formal scientific nomenclatural and classification systems are exemplified by biological classification.
All classification systems are established for a purpose. The scientific classification system anchors each organism within the nested hierarchy of internationally accepted classification categories. Maintenance of this system involves formal rules of nomenclature and periodic international meetings of review. This modern system evolved from the folk taxonomy of prehistory.[7]

Folk taxonomy can be illustrated through the Western tradition of horticulture and gardening. Unlike scientific taxonomy, folk taxonomies serve many purposes. Examples in horticulture would be the grouping of plants, and the naming of these groups, according to their properties and uses: annuals, biennials and perennials (nature of life cycle); vegetables, fruits, culinary herbs and spices (culinary use); herbs, trees and shrubs (growth habit); wild and cultivated plants (whether they are managed or not); weeds (whether they are considered a nuisance or not); and so on.

Folk taxonomy is generally associated with the way rural or indigenous peoples use language to make sense of and organise the objects around them. Ethnobiology frames this interpretation through either "utilitarianists", like Bronislaw Malinowski, who maintain that names and classifications reflect mainly material concerns, or "intellectualists", like Claude Lévi-Strauss, who hold that they spring from innate mental processes.[8] The literature of ethnobiological classifications was reviewed in 2006.[9] Folk classification is defined by the way in which members of a language community name and categorize plants and animals, whereas ethnotaxonomy refers to the hierarchical structure, organic content, and cultural function of biological classification that ethnobiologists find in every society around the world.[10]

Ethnographic studies of the naming and classification of animals and plants in non-Western societies have revealed some general principles that indicate pre-scientific man's conceptual and linguistic method of organising the biological world hierarchically.[11][12][13][14] Such studies indicate that the urge to classify is a basic human instinct:[15][16]

- in all languages, natural groups of organisms are distinguished (present-day taxa)
- these groups are arranged into more inclusive groups, or ethnobiological categories
- in all languages there are about five or six ethnobiological categories of graded inclusiveness
- these ethnobiological categories are arranged hierarchically, generally into mutually exclusive ranks
- the ranks at which particular organisms are named and classified are often similar in different cultures

The levels, moving from the most to the least inclusive, are:

level 1, "unique beginner": e.g. plant or animal. A single all-inclusive name, rarely used in folk taxonomies, but loosely equivalent to an original living thing, a "common ancestor".
level 2, "life form": e.g. tree, bird, grass and fish. These are usually primary lexemes (basic linguistic units), loosely equivalent to a phylum or major biological division.
level 3, "generic name": e.g. oak, pine, robin, catfish. This is the most numerous and basic building block of all folk taxonomies: the most frequently referred to, the most important psychologically, and among the first learned by children. These names can usually be associated directly with a second-level group. Like life-form names, these are primary lexemes.
level 4, "specific name": e.g. white fir, post oak. More or less equivalent to a species. A secondary lexeme, generally less frequent than generic names.
level 5, "varietal name": e.g. baby lima bean, butter lima bean.
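The five graded ranks listed above lend themselves to a nested representation. The following sketch uses example taxa from the text; the helper function is purely illustrative.

```python
# Illustrative folk-taxonomy hierarchy using the ranks from the text:
# unique beginner > life form > generic > specific > varietal.

folk_taxonomy = {
    "plant": {                  # level 1: unique beginner
        "tree": {               # level 2: life form
            "oak": {            # level 3: generic name
                "post oak": {}  # level 4: specific name
            }
        },
        "grass": {},            # another life form
    }
}

def rank_of(tree, name, level=1):
    """Return the rank (1-5) at which a name appears, or None if absent."""
    for key, subtree in tree.items():
        if key == name:
            return level
        found = rank_of(subtree, name, level + 1)
        if found:
            return found
    return None

print(rank_of(folk_taxonomy, "oak"))  # 3
```

Note how the generic rank ("oak") sits directly under its life form ("tree"), matching the observation that generic names can usually be associated with a second-level group.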
In almost all cultures, objects are named using one or two words equivalent to "kind" (genus) and "particular kind" (species).[6] When made up of two words (a binomial), the name usually consists of a noun (like salt, dog or star) and an adjectival second word that helps describe the first and therefore makes the name, as a whole, more "specific": for example, lap dog, sea salt, or film star. The meaning of the noun used for a common name may have been lost or forgotten (whelk, elm, lion, shark, pig), but when the common name is extended to two or more words, much more is conveyed about the organism's use, appearance or other special properties (sting ray, poison apple, giant stinking hogweed, hammerhead shark). These noun-adjective binomials are just like our own names: a family name or surname, like Simpson, plus an adjectival Christian name or forename that specifies which Simpson, say Homer Simpson. It seems reasonable to assume that the form of scientific names we call binomial nomenclature is derived from this simple and practical way of constructing common names, but with the use of Latin as a universal language. In keeping with the "utilitarianist" view, other authors maintain that ethnotaxonomies resemble a "complex web of resemblances" more than a neat hierarchy.
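The noun-plus-modifier construction of common-name binomials described above can be illustrated in a few lines of code. This is only a sketch, under the assumption that the final word is the generic head noun and any preceding words are the specifying modifiers.

```python
def split_common_binomial(name):
    """Split a common name like 'hammerhead shark' into its folk
    'specific' part (modifier) and 'generic' part (head noun)."""
    *modifiers, head = name.split()
    return (" ".join(modifiers), head)

print(split_common_binomial("hammerhead shark"))  # ('hammerhead', 'shark')
print(split_common_binomial("elm"))               # ('', 'elm')
```

A one-word name like "elm" has no modifier, mirroring the text's point that a bare generic name conveys less about the organism than an extended binomial does.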
METABOLISM
Metabolism (from Greek: μεταβολή "metabolē", "change", or Greek: μεταβολισμός metabolismos, "outthrow") is the set of chemical reactions that happen in the cells of living organisms to sustain life. These processes allow organisms to grow and reproduce, maintain their structures, and respond to their environments.

Metabolism is usually divided into two categories. Catabolism breaks down organic matter, for example to harvest energy in cellular respiration. Anabolism uses energy to construct components of cells such as proteins and nucleic acids. The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical by a sequence of enzymes. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. As enzymes act as catalysts, they allow these reactions to proceed quickly and efficiently. Enzymes also allow the regulation of metabolic pathways in response to changes in the cell's environment or signals from other cells.

The metabolism of an organism determines which substances it will find nutritious and which it will find poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals.[1] The speed of metabolism, the metabolic rate, influences how much food an organism will require, and also affects how it is able to obtain that food.

A striking feature of metabolism is the similarity of the basic metabolic pathways and components between even vastly different species.[2] For example, the set of carboxylic acids best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants.[3] These striking similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and to their retention because of their efficacy.[4][5]

Key biochemicals
Most of the structures that make up animals, plants and microbes are made from three basic classes of molecule: amino acids, carbohydrates and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them as a source of energy in the digestion and use of food. Many important biochemicals can be joined together to make polymers such as DNA and proteins. These macromolecules are essential.

Type of molecule | Monomer forms    | Polymer forms                        | Examples of polymers
Amino acids      | Amino acids      | Proteins (also called polypeptides)  | Fibrous and globular proteins
Carbohydrates    | Monosaccharides  | Polysaccharides                      | Starch, glycogen and cellulose
Nucleic acids    | Nucleotides      | Polynucleotides                      | DNA and RNA

Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain and joined together by peptide bonds. Many proteins are the enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as the proteins that form the cytoskeleton, a system of scaffolding that maintains the cell shape.[6] Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle.[7]

Lipids
Lipids are the most diverse group of biochemicals. Their main structural uses are as part of biological membranes such as the cell membrane, or as a source of energy.[7] Lipids are usually defined as hydrophobic or amphipathic biological molecules that will dissolve in organic solvents such as benzene or chloroform.[8] The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acid esters is a triacylglyceride.[9] Several variations on this basic structure exist, including alternate backbones such as sphingosine in the sphingolipids, and hydrophilic groups such as phosphate in phospholipids. Steroids such as cholesterol are another major class of lipids made in cells.[10]

Carbohydrates
Carbohydrates are aldehydes or ketones with many hydroxyl groups that can exist as straight chains or rings; glucose, for example, exists in both forms. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals).[7] The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
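The monomer-to-polymer pairings given above can be captured as a small lookup structure, purely as an illustration:

```python
# Monomer/polymer pairs from the table in the text.
macromolecules = {
    "amino acids":     {"polymer": "proteins",
                        "examples": ["fibrous proteins", "globular proteins"]},
    "monosaccharides": {"polymer": "polysaccharides",
                        "examples": ["starch", "glycogen", "cellulose"]},
    "nucleotides":     {"polymer": "polynucleotides",
                        "examples": ["DNA", "RNA"]},
}

for monomer, info in macromolecules.items():
    print(f"{monomer} -> {info['polymer']} (e.g. {', '.join(info['examples'])})")
```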
VERNALISATION
Vernalization (from Latin vernus, "of the spring") is the acquisition of a plant's ability to flower or germinate in the spring by exposure to the prolonged cold of winter. After vernalization, plants have acquired the ability to flower, but they may require additional seasonal cues or weeks of growth before they will actually flower.

Many plants grown in temperate climates require vernalization and must experience a period of low winter temperature to initiate or accelerate the flowering process. This ensures that reproductive development and seed production occur in spring and summer, rather than in autumn.[1] The needed cold is often expressed in chill hours. Typical vernalization temperatures are between 5 and 10 degrees Celsius (40 and 50 degrees Fahrenheit). For many perennial plants, such as fruit tree species, a period of cold is needed to break dormancy prior to flowering. Many monocarpic annuals and biennials, including some ecotypes of Arabidopsis thaliana[2] and winter cereals such as wheat, must go through a prolonged period of cold before flowering occurs.

History of vernalization research
In the history of agriculture, farmers observed a traditional distinction between "winter cereals", whose seeds require chilling, and "spring cereals", whose seeds can be sown in spring and flower soon thereafter.[3] The word "vernalization" is a translation of "jarovization", a word coined by Trofim Lysenko to describe a chilling process he used to make the seeds of winter cereals behave like spring cereals ("jarovoe" in Russian).[3] Scientists had discussed how some plants need cold temperatures to flower as early as the 18th century, with the German plant physiologist Gustav Gassner often mentioned for his 1918 paper.[3][4] Lysenko's 1928 paper on vernalization and plant physiology drew wide attention because of its practical consequences for Russian agriculture: severe cold and lack of winter snow had destroyed many early winter wheat seedlings.
By treating wheat seeds with moisture as well as cold, Lysenko induced them to bear a crop when planted in spring.[4] Later, however, Lysenko inaccurately asserted that the vernalized state could be inherited, i.e. that the offspring of a vernalized plant would behave as if they themselves had also been vernalized and would not require vernalization in order to flower quickly.[5]

Early research on vernalization focused on plant physiology; the increasing availability of molecular biology has made it possible to unravel its underlying mechanisms.[5] For example, both longer days and cold temperatures are required for winter wheat plants to go from the vegetative to the reproductive state; the three interacting genes are called VRN1, VRN2, and FT (VRN3).[6] Because flowering requires the successful cooperation of several metabolic pathways, computer models that incorporate vernalization have also been built.[7]

Vernalization in Arabidopsis thaliana
Arabidopsis thaliana, also known as thale cress, is a much-studied model species. In 2000, the entire genome of its five chromosomes was completely sequenced.[8] Some variants, called "winter annuals", require vernalization to flower; others ("summer annuals") do not.[9] The genes that underlie this difference in plant physiology have been intensively studied.[5]

The reproductive phase change of A. thaliana occurs as a sequence of two related events: first the bolting transition (the flower stalk elongates), then the floral transition (the first flower appears).[10] Bolting is a robust predictor of flower formation, and hence a good indicator for vernalization research.[10] In Arabidopsis winter annuals, the apical meristem is the part of the plant that needs to be chilled. Vernalization of the meristem appears to confer on it the competence to respond to floral inductive signals.
A vernalized meristem retains this competence for as long as 300 days in the absence of an inductive signal.[9] Before vernalization, flowering is repressed by the action of a gene called FLOWERING LOCUS C (FLC).[1] In winter-annual ecotypes the gene FRIGIDA (FRI) keeps FLC expression high; vernalization progressively turns off FLC expression over a period of about six weeks, releasing the repression. Since vernalization also occurs in flc mutants (lacking FLC), vernalization must also activate a non-FLC pathway.[11] A day-length mechanism is also important.
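The chill-hour bookkeeping described above (time spent in the roughly 5 to 10 degrees Celsius vernalizing range) can be sketched as follows. The 500-hour requirement is a made-up illustration; real requirements vary widely by species and cultivar.

```python
def chill_hours(hourly_temps_c, low=5.0, high=10.0):
    """Count the hours whose temperature falls in the vernalizing range."""
    return sum(1 for t in hourly_temps_c if low <= t <= high)

# Hypothetical winter: 600 hours at 7 C, then 200 milder hours at 15 C.
temps = [7.0] * 600 + [15.0] * 200
hours = chill_hours(temps)
required = 500  # illustrative requirement, not a real cultivar's value
print(hours, hours >= required)  # 600 True
```

Real chill-hour models are more elaborate (some weight temperatures or subtract warm spells), but the accumulation idea is the same.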
Monday 23 January 2012
PHOTOPERIODISM
Photoperiodism is the physiological reaction of organisms to the length of day or night. It occurs in plants and animals. Photoperiodism can also be defined as the developmental responses of plants to the relative lengths of the light and dark periods; photoperiodic effects relate directly to the timing of both the light and the dark periods.

In plants
Many flowering plants use a photoreceptor protein, such as phytochrome or cryptochrome, to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants are more likely to flower under the appropriate light conditions but will eventually flower regardless of night length.

Photoperiodic flowering plants are classified as long-day plants or short-day plants, though the regulatory mechanism is actually governed by hours of darkness, not the length of the day. Modern biologists believe that it is the coincidence of the active forms of phytochrome or cryptochrome, created by light during the daytime, with the rhythms of the circadian clock that allows plants to measure the length of the night. Besides flowering, photoperiodism in plants includes the growth of stems or roots during certain seasons, and the loss of leaves.

Long-day plants
A long-day plant requires fewer than a certain number of hours of darkness in each 24-hour period to induce flowering. These plants typically flower in the northern hemisphere during late spring or early summer, as days are getting longer. In the northern hemisphere, the longest day of the year is on or about 21 June (the solstice). After that date, days grow shorter (i.e. nights grow longer) until 21 December (the other solstice). This situation is reversed in the southern hemisphere (i.e. the longest day is 21 December and the shortest is 21 June). In some parts of the world, however, "winter" or "summer" might refer to rainy versus dry seasons rather than to the coolest or warmest time of year.

Some long-day obligate plants are:
Carnation (Dianthus)
Henbane (Hyoscyamus)
Oat (Avena)
Ryegrass (Lolium)
Clover (Trifolium)
Bellflower (Campanula carpatica)

Some long-day facultative plants are:
Pea (Pisum sativum)
Barley (Hordeum vulgare)
Lettuce (Lactuca sativa)
Wheat (Triticum aestivum, spring wheat cultivars)
Turnip (Brassica rapa)
Arabidopsis thaliana (model organism)

Short-day plants
Short-day plants flower when the night is longer than a critical length. They cannot flower under long days or if a pulse of artificial light is shone on the plant for several minutes during the middle of the night; they require a consolidated period of darkness before floral development can begin. Natural nighttime light, such as moonlight or lightning, is not of sufficient brightness or duration to interrupt flowering.

In general, short-day (i.e. long-night) plants flower as days grow shorter (and nights grow longer) after 21 June in the northern hemisphere, which is during summer or fall. The length of the dark period required to induce flowering differs among species, and among varieties of a species.

Photoperiod affects flowering when the shoot is induced to produce floral buds instead of leaves and lateral buds. Note that some species must pass through a "juvenile" period during which they cannot be induced to flower. Common cocklebur is an example of a plant species with a remarkably short period of juvenility: plants can be induced to flower when quite small.

Some short-day obligate plants are:
Chrysanthemum
Coffee
Poinsettia
Strawberry
Tobacco (var. Maryland Mammoth)
Common duckweed (Lemna minor)
Cocklebur (Xanthium)
Maize (tropical cultivars only)

Some short-day facultative plants are:
Hemp (Cannabis)
Cotton (Gossypium)
Rice
Sugar cane

Day-neutral plants
Day-neutral plants, such as cucumbers, roses and tomatoes, do not initiate flowering based on photoperiodism at all; they flower regardless of night length. They may initiate flowering after attaining a certain overall developmental stage or age, or in response to alternative environmental stimuli, such as vernalisation (a period of low temperature), rather than in response to photoperiod.
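Since induction is governed by hours of darkness rather than day length, the three response types above can be sketched as a simple decision rule. The critical night length used here is invented for illustration; real thresholds differ by species and cultivar.

```python
def will_flower(plant_type, night_hours, critical_night=10.0):
    """Decide floral induction from hours of darkness in a 24-hour cycle.
    critical_night is an illustrative threshold, not a real species value."""
    if plant_type == "short-day":   # really a long-night plant
        return night_hours > critical_night
    if plant_type == "long-day":    # really a short-night plant
        return night_hours < critical_night
    if plant_type == "day-neutral": # flowers regardless of photoperiod
        return True
    raise ValueError(f"unknown plant type: {plant_type}")

print(will_flower("short-day", 13))  # True
print(will_flower("long-day", 13))   # False
```

A night-interruption experiment fits this model too: a light pulse in the middle of a 13-hour night effectively splits it into two short nights, which is why it blocks flowering in short-day plants.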
ETHYLENE
Ethylene (IUPAC name: ethene ) is a gaseous organic compound with the formula C2H4. It is the simplest alkene (older name: olefin from its
oil-forming property). Because
it contains a carbon-carbon double bond, ethylene is classified as an unsaturated hydrocarbon . Ethylene is widely used in industry and is also a plant hormone.[3] Ethylene is the most produced organic compound in the world; global production of
ethylene exceeded 107 million tonnes in 2005.[4] To meet the ever increasing demand for
ethylene, sharp increases in
production facilities are added
globally, particularly in the Persian Gulf countries and in China.[5] Structure and
properties Orbital description of bonding between ethylene and a transition metal. This hydrocarbon has four hydrogen atoms bound to a pair of carbon atoms that are connected by a double bond. All six atoms that comprise
ethylene are coplanar. The H-C- H angle is 119°, close to the 120° for ideal sp² hybridized carbon. The molecule is also
relatively rigid: rotation about
the C-C bond is a high energy
process that requires breaking
the π-bond. The π-bond in the ethylene
molecule is responsible for its
useful reactivity. The double
bond is a region of high electron density , thus it is susceptible to attack by
electrophiles. Many reactions
of ethylene are catalyzed by
transition metals, which bind
transiently to the ethylene
using both the π and π* orbitals. Being a simple molecule,
ethylene is spectroscopically
simple. Its UV-vis spectrum is still used as a test of theoretical methods. [6] Uses Major industrial reactions of
ethylene include in order of
scale: 1) polymerization , 2) oxidation , 3) halogenation and hydrohalogenation , 4) alkylation , 5) hydration , 6) oligomerization, and 7) hydroformylation . In the United States and Europe, approximately 90% of
ethylene is used to produce
three chemical compounds— ethylene oxide , ethylene dichloride, and ethylbenzene —and a variety of kinds of polyethylene .[7] Main industrial uses of ethylene. Clockwise from the upper right: its conversions to ethylene oxide , precursor to ethylene glycol, to ethylbenzene , precursor to styrene , to various kinds of polyethylene , to ethylene dichloride, precursor to vinyl chloride. Polymerization See also: Ziegler-Natta catalyst and Polyethylene Polyethylenes of various
types consume more than half
of world ethylene supply.
Polyethylene, also called
polythene, is the world's most
widely used plastic, primarily used to make films for packaging, carrier bags, and trash liners. Linear alpha-olefins, produced by oligomerization (the formation of short polymers), are used as precursors to detergents, plasticisers, synthetic lubricants, and additives, and also as co-monomers in the production of polyethylenes.[7] Oxidation Ethylene is oxidized to produce ethylene oxide, a key raw material in the production of surfactants and detergents by ethoxylation. Ethylene oxide is also hydrolyzed to produce ethylene glycol, widely used as an automotive antifreeze, as
well as higher molecular
weight glycols and glycol ethers. Main article: Wacker process Ethylene undergoes oxidation
by palladium to give acetaldehyde . This conversion remains a major industrial process (10M kg/y). [8] The process proceeds via the initial
complexation of ethylene to a
Pd(II) center. Halogenation and
hydrohalogenation Major intermediates from the halogenation and hydrohalogenation of ethylene include ethylene dichloride, ethyl chloride, and ethylene dibromide. The addition of chlorine often entails "oxychlorination," i.e. hydrogen chloride and oxygen are used rather than chlorine itself. Some
products derived from this
group are polyvinyl chloride, trichloroethylene, perchloroethylene, methyl chloroform, polyvinylidene chloride and copolymers, and ethyl bromide.[9] Alkylation The major chemical intermediate from alkylation with ethylene is ethylbenzene, a precursor to styrene. Styrene is used principally in polystyrene for packaging and insulation, as well as in styrene-butadiene rubber for tires and footwear. On a smaller scale, ethyltoluene, ethylanilines, 1,4-hexadiene, and aluminium alkyls are also produced. Products of these
intermediates include polystyrene , unsaturated polyesters and ethylene- propylene terpolymers .[9] Oxo reaction The hydroformylation (oxo reaction) of ethylene results in propionaldehyde , a precursor to propionic acid and n-propyl alcohol.[9] Hydration Ethylene can be hydrated to
give ethanol, but this method is rarely used industrially. Niche uses An example of a niche use is
as an anesthetic agent (in an 85% ethylene/15% oxygen ratio).[10] It can also be used to hasten fruit ripening and as a welding gas.[7][11] Production In 2006, global ethylene
production was 109 million tonnes.[12] By 2010 ethylene was produced by at least 117 companies in 55 countries.[5] Ethylene is produced in the petrochemical industry by steam cracking . In this process, gaseous or light liquid
hydrocarbons are heated to
750–950 °C, inducing numerous free radical reactions followed by immediate quench to stop these reactions. This process
converts large hydrocarbons
into smaller ones and
introduces unsaturation.
Ethylene is separated from the
resulting complex mixture by repeated compression and distillation. In a related process used in oil refineries,
high molecular weight
hydrocarbons are cracked
over zeolite catalysts. Heavier feedstocks, such as naphtha and gas oils, require at least
two "quench towers"
downstream of the cracking
furnaces to recirculate
pyrolysis-derived gasoline and
process water. When cracking a mixture of ethane and
propane, only one water quench tower is required. [9] The areas of an ethylene plant
are: 1. steam cracking furnaces; 2. primary and secondary
heat recovery with quench; 3. a dilution steam recycle
system between the
furnaces and the quench
system; 4. primary compression of the
cracked gas (3 stages of
compression); 5. hydrogen sulfide and
carbon dioxide removal
(acid gas removal); 6. secondary compression (1
or 2 stages); 7. drying of the cracked gas; 8. cryogenic treatment; 9. all of the cold cracked gas
stream goes to the
demethanizer tower. The
overhead stream from the
demethanizer tower
consists of all the hydrogen and methane that was in
the cracked gas stream.
Cryogenically (−250 °F
(−157 °C)) treating this
overhead stream separates
hydrogen from methane. Methane recovery is critical
to the economical operation
of an ethylene plant; 10. the bottom stream from
the demethanizer tower
goes to the deethanizer
tower. The overhead
stream from the
deethanizer tower consists of all the C2's that were in the cracked gas stream. The C2 stream contains acetylene, which is explosive above 200 kPa (29 psi).[13] If the partial pressure of acetylene is expected to exceed this value, the C2 stream is partially hydrogenated. The
C2's then proceed to a C2 splitter. The product
ethylene is taken from the
overhead of the tower and
the ethane coming from
the bottom of the splitter is
recycled to the furnaces to be cracked again; 11. the bottom stream from
the de-ethanizer tower
goes to the depropanizer
tower. The overhead
stream from the
depropanizer tower consists of all the C3's that were in the cracked gas stream. Before feeding the C3's to the C3 splitter, the stream is hydrogenated to convert the methylacetylene and propadiene (allene) mixture to propylene. This stream is then sent to
the C3 splitter. The overhead stream from the
C3 splitter is product propylene and the bottom
stream is propane which is
sent back to the furnaces
for cracking or used as fuel; 12. the bottom stream from
the depropanizer tower is
fed to the debutanizer
tower. The overhead
stream from the
debutanizer is all of the C4's that were in the cracked
gas stream. The bottom
stream from the
debutanizer (light pyrolysis
gasoline) consists of
everything in the cracked gas stream that is C5 or heavier.[9] Since ethylene production is
energy intensive, much effort
has been dedicated to
recovering heat from the gas
leaving the furnaces. Most of
the energy recovered from the cracked gas is used to
make high pressure (1200
psig) steam. This steam is in
turn used to drive the
turbines for compressing
cracked gas, the propylene refrigeration compressor, and
the ethylene refrigeration
compressor. An ethylene
plant, once running, does not
need to import steam to drive
its steam turbines. A typical world scale ethylene plant
(about 1.5 billion pounds of
ethylene per year) uses a
45,000 horsepower (34,000
kW) cracked gas compressor,
a 30,000 hp (22,000 kW) propylene compressor, and a
15,000 hp (11,000 kW)
ethylene compressor.
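The horsepower and kilowatt figures quoted for these compressors can be cross-checked with simple arithmetic. The sketch below assumes the standard mechanical-horsepower conversion factor (1 hp = 0.7457 kW), which the text itself does not state:

```python
# Rough cross-check of the compressor ratings quoted above.
# Assumes mechanical horsepower: 1 hp = 0.7457 kW (a standard
# conversion factor, not stated in the text).
HP_TO_KW = 0.7457

compressors = {
    "cracked gas": 45_000,  # hp
    "propylene": 30_000,
    "ethylene": 15_000,
}

for name, hp in compressors.items():
    print(f"{name} compressor: {hp:,} hp ≈ {hp * HP_TO_KW:,.0f} kW")
```

The exact products round to the 34,000, 22,000, and 11,000 kW figures quoted in the text.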
CYTOKININ
Cytokinins (CK) are a class of plant growth substances (phytohormones ) that promote cell division , or cytokinesis , in plant roots and shoots. They are involved
primarily in cell growth and differentiation , but also affect apical dominance, axillary bud growth, and leaf senescence. Folke Skoog discovered their effects using coconut milk in the 1940s at the University of Wisconsin–Madison. [1] There are two types of
cytokinins: adenine-type
cytokinins represented by kinetin , zeatin, and 6- benzylaminopurine , and phenylurea-type cytokinins
like diphenylurea and
thidiazuron (TDZ). Most
adenine-type cytokinins are
synthesized in roots. [2]Cambium and other actively dividing tissues also synthesize cytokinins. [3] No phenylurea cytokinins have been found in plants.[4] Cytokinins participate in local
and long-distance signalling,
with the same transport
mechanism as purines and nucleosides.[5] Typically, cytokinins are transported in the xylem .[2] Cytokinins act in concert with auxin , another plant growth hormone.[2] Mode of Action The ratio of auxin to
cytokinin plays an important
role in the effect of cytokinin
on plant growth. Cytokinin
alone has no effect on parenchyma cells. When cultured with auxin but no
cytokinin, they grow large
but do not divide. When
cytokinin is added, the cells
expand and differentiate.
When cytokinin and auxin are present in equal levels, the
parenchyma cells form an
undifferentiated callus. More cytokinin induces growth of shoot buds, while more auxin induces root formation.[2] Cytokinins are involved in
many plant processes,
including cell division and
shoot and root
morphogenesis. They are
known to regulate axillary bud growth and apical
dominance. The "direct
inhibition hypothesis" posits
that these effects result from
the cytokinin to auxin ratio.
This theory states that auxin from apical buds travels down
shoots to inhibit axillary bud
growth. This promotes shoot
growth, and restricts lateral
branching. Cytokinin moves
from the roots into the shoots, eventually signaling
lateral bud growth. Simple
experiments support this
theory. When the apical bud is
removed, the axillary buds
are uninhibited, lateral growth increases, and plants
become bushier. Applying
auxin to the cut stem again inhibits lateral bud growth, restoring apical dominance.[2] While cytokinin action in vascular plants is described as pleiotropic, this class of plant hormones specifically induces
the transition from apical
growth to growth via a
three-faced apical cell in moss protonema. This bud induction can be pinpointed to differentiation of a specific single cell, and thus is a very specific effect of cytokinin. [6] Cytokinins have been shown
to slow aging of plant organs
by preventing protein breakdown, activating
protein synthesis, and
assembling nutrients from nearby tissues. [2] A study that regulated leaf senescence
in tobacco leaves found that
wild-type leaves yellowed
while transgenic leaves remained mostly green. It
was hypothesized that
cytokinin may affect
enzymes that regulate protein synthesis and degradation. [7] Biosynthesis Adenosine phosphate-
isopentenyltransferase (IPT) catalyses the first reaction in the biosynthesis of isoprene cytokinins. It may use ATP, ADP, or AMP as substrates and may use dimethylallyl diphosphate (DMAPP) or hydroxymethylbutenyl
diphosphate (HMBDP) as prenyl donors .[8] This reaction is the rate-limiting step in cytokinin biosynthesis.
DMAPP and HMBDP used in
cytokinin biosynthesis are
produced by the
methylerythritol phosphate pathway (MEP). [8] Cytokinins can also be
produced by recycled tRNAs in plants and bacteria.[8][9] tRNAs whose anticodons start with a uridine and which carry an already-prenylated adenosine adjacent to the anticodon release that adenosine as a cytokinin upon degradation.[8] The prenylation of these adenines is carried out
by tRNA- isopentenyltransferase .[9] Auxin is known to regulate
the biosynthesis of cytokinin.[10] Uses Because cytokinin promotes plant cell division and growth, produce farmers use it to increase crop yields. One study
found that applying cytokinin
to cotton seedlings led to a 5– 10% yield increase under drought conditions. [11] Cytokinins have recently been
found to play a role in plant
pathogenesis.
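The auxin-to-cytokinin ratio rule described in this section can be sketched as a toy decision function. The thresholds and the equality tolerance below are illustrative assumptions, not values from the text:

```python
# Toy sketch of the auxin:cytokinin ratio rule described above
# (illustrative only; the tolerance is an assumption, not a
# measured value).
def tissue_response(auxin: float, cytokinin: float, tol: float = 0.1) -> str:
    """Predict the in-vitro response of cultured parenchyma tissue."""
    if cytokinin == 0:
        # Auxin alone: cells enlarge but do not divide.
        return "cell enlargement without division"
    ratio = auxin / cytokinin
    if abs(ratio - 1.0) <= tol:
        # Roughly equal levels: undifferentiated callus forms.
        return "undifferentiated callus"
    # More auxin favors roots; more cytokinin favors shoot buds.
    return "root formation" if ratio > 1.0 else "shoot bud growth"

print(tissue_response(1.0, 1.0))  # undifferentiated callus
print(tissue_response(5.0, 1.0))  # root formation
print(tissue_response(1.0, 5.0))  # shoot bud growth
```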
MERISTEM
A meristem is the tissue in most plants consisting of undifferentiated cells
(meristematic cells ), found in zones of the plant where
growth can take place. The meristematic cells give rise
to various organs of the plant,
and keep the plant growing.
The Shoot Apical Meristem
(SAM) gives rise to organs like
the leaves and flowers. The cells of the apical meristems -
SAM and RAM (Root Apical
Meristem) - divide rapidly and
are considered to be
indeterminate, in that they do
not possess any defined end fate. In that sense, the
meristematic cells are
frequently compared to the stem cells in animals, which have an analogous behavior
and function. The term meristem was first
used in 1858 by Karl Wilhelm von Nägeli (1817–1891) in his book Beiträge zur Wissenschaftlichen Botanik. [1] It is derived from the Greek
word merizein (μερίζειν),
meaning to divide, in
recognition of its inherent
function. In general, differentiated
plant cells cannot divide or
produce cells of a different
type. Therefore, cell division in the meristem is required to
provide new cells for
expansion and differentiation
of tissues and initiation of
new organs, providing the
basic structure of the plant body. Meristematic cells are
incompletely or not at all differentiated , and are capable of continued cellular division
(youthful). Furthermore, the
cells are small and protoplasm fills the cell completely. The vacuoles are extremely small. The cytoplasm does not contain differentiated plastids (chloroplasts or chromoplasts), although they are present in rudimentary
form (proplastids). Meristematic cells are packed
closely together without
intercellular cavities. The cell
wall is a very thin primary cell
wall. Maintenance of the cells
requires a balance between
two antagonistic processes:
organ initiation and stem cell
population renewal. Meristematic zones Apical meristems are the
completely undifferentiated
(indeterminate) meristems in
a plant. These differentiate
into three kinds of primary
meristems. The primary meristems in turn produce the
two secondary meristem
types. These secondary
meristems are also known as
lateral meristems because
they are involved in lateral growth. At the meristem summit,
there is a small group of
slowly dividing cells, which is
commonly called the central
zone. Cells of this zone have a
stem cell function and are essential for meristem
maintenance. The proliferation
and growth rates at the
meristem summit usually
differ considerably from those
at the periphery. Meristems also are induced in
the roots of legumes such as soybean , Lotus japonicus, pea, and Medicago truncatula after infection with soil bacteria
commonly called Rhizobium. Cells of the inner or outer
cortex in the so-called
"window of nodulation" just
behind the developing root tip
are induced to divide. The
critical signal substance is the lipo-oligosaccharide Nod- factor, decorated with side
groups to allow specificity of
interaction. The Nod factor
receptor proteins NFR1 and
NFR5 were cloned from
several legumes including Lotus japonicus, Medicago
truncatula and soybean
(Glycine max). Regulation of
nodule meristems utilizes long
distance regulation commonly
called "Autoregulation of Nodulation" (AON). This
process involves a leaf-
vascular tissue located LRR receptor kinases (LjHAR1, GmNARK and MtSUNN), CLE peptide signalling , and KAPP interaction, similar to that
seen in the CLV1,2,3 system.
LjKLAVIER also exhibits a
nodule regulation phenotype though it is not yet known
how this relates to the other AON receptor kinases. Apical meristems Organisation of an apical meristem (growing tip): 1 - Central zone; 2 - Peripheral zone; 3 - Medullary (i.e. central) meristem; 4 - Medullary tissue. The apical meristem, or growing tip, is a completely undifferentiated meristematic tissue found in the buds and growing tips of roots in plants. Its main function is to begin growth of new cells in
young seedlings at the tips of
roots and shoots (forming
buds, among other things).
Specifically, an active apical
meristem lays down a growing root or shoot behind itself, pushing itself forward.
Apical meristems are very
small, compared to the
cylinder-shaped lateral
meristems (see 'Secondary
Meristems' below). Apical meristems are
composed of several layers.
The number of layers varies
according to plant type. In
general the outermost layer is
called the tunica, while the innermost layers are the corpus. In monocots, the tunica determines the physical characteristics of the leaf edge and margin. In dicots, layer two of the corpus determines the characteristics of the edge of the leaf. The corpus and tunica play a critical part in the plant's physical appearance,
as all plant cells are formed from the meristems. Apical
meristems are found in two
locations: the root and the
stem. Some Arctic plants have
an apical meristem in the
lower/middle parts of the plant. It is thought that this
kind of meristem evolved
because it is advantageous in
Arctic conditions[citation needed]. Shoot apical meristems The source of all above-
ground organs. Cells at the
shoot apical meristem summit
serve as stem cells to the
surrounding peripheral region,
where they proliferate rapidly and are incorporated
into differentiating leaf or
flower primordia. The shoot apical meristem is
the site of most of the
embryogenesis in flowering
plants. Primordia of leaves, sepals, petals, stamens and
ovaries are initiated here at
the rate of one per time interval, called a plastochron. It is where the first
indications that flower
development has been
evoked are manifested. One of
these indications might be the
loss of apical dominance and the release of otherwise
dormant cells to develop as
axillary shoot meristems, in
some species in axils of
primordia as close as two or
three away from the apical dome. The shoot apical
meristem consists of four distinct cell groups: 1. stem cells; 2. the immediate daughter cells of the stem cells; 3. a subjacent organising centre; and 4. founder cells for organ initiation in surrounding regions. The four distinct zones
mentioned above are
maintained by a complex
signalling pathway. In Arabidopsis thaliana , 3 interacting CLAVATA genes
are required to regulate the
size of the stem cell reservoir in the shoot apical meristem
by controlling the rate of cell division .[2] CLV1 and CLV2 are predicted to form a receptor
complex (of the LRR receptor
like kinase family) to which CLV3 is a ligand.[3][4][5] CLV3 shares some homology with the ESR proteins of maize, with a short 14 amino acid region being conserved between the proteins. [6][7] Proteins that contain these
conserved regions have been
grouped into the CLE family of proteins.[6][7] CLV1 has been shown to
interact with several cytoplasmic proteins that are most likely involved in downstream signalling . For example, the CLV complex has
been found to be associated
with Rho/Rac small GTPase- related proteins.[2] These proteins may act as an
intermediate between the CLV
complex and a mitogen- activated protein kinase (MAPK), which is often
involved in signalling cascades. [8] KAPP is a kinase-associated protein phosphatase that has
been shown to interact with CLV1.[9] KAPP is thought to act as a negative regulator of
CLV1 by dephosphorylating it. [9] Another important gene in
plant meristem maintenance is
WUSCHEL (shortened to WUS),
which is a target of CLV signalling.[10]WUS is expressed in the cells below
the stem cells of the meristem
and its presence prevents the differentiation of the stem cells.[10] CLV1 acts to promote cellular differentiation by
repressing WUS activity
outside of the central zone
containing the stem cells. [10]STM also acts to prevent the differentiation of stem
cells by repressing the
expression of Myb genes that
are involved in cellular differentiation.
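The CLV–WUS interaction described above is a negative feedback loop: WUS maintains stem cell identity, the stem cells produce CLV3, and CLV signalling represses WUS. A minimal discrete-time sketch (with invented rate constants, purely for illustration) shows how such a loop settles to a steady state:

```python
# Minimal discrete-time sketch of the CLV-WUS negative feedback
# loop described above. All rate constants are invented for
# illustration; this is not a quantitative model of the pathway.
def simulate(steps: int = 50):
    wus, clv3 = 1.0, 0.0
    for _ in range(steps):
        wus = 1.0 / (1.0 + clv3)  # CLV signalling represses WUS
        clv3 = 0.5 * wus          # stem cells (sustained by WUS) secrete CLV3
    return wus, clv3

wus, clv3 = simulate()
# With these toy constants the loop settles near WUS ~0.73.
print(f"steady state: WUS ≈ {wus:.3f}, CLV3 ≈ {clv3:.3f}")
```

The fixed point here solves 0.5·w² + w − 1 = 0, i.e. w = √3 − 1 ≈ 0.732; the point of the sketch is only that mutual repression and production balance out, mirroring how the meristem maintains a stable stem cell pool.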
Sunday 22 January 2012
STROKE
A stroke, also known as a cerebrovascular accident (CVA), is the rapid loss of brain function(s) due to disturbance in the blood supply to the brain. This can be due to ischemia (lack of blood flow) caused by
blockage (thrombosis, arterial embolism), or a hemorrhage (leakage of blood). [1] As a result, the affected area of the
brain cannot function, which
might result in an inability to move one or more limbs on
one side of the body , inability to understand or formulate speech, or an inability to see one side of the visual field .[2] A stroke is a medical emergency and can cause permanent neurological damage, complications, and
death. It is the leading cause
of adult disability in the United States and Europe and the second leading cause of death worldwide. [3]Risk factors for stroke include old age, hypertension (high blood pressure), previous stroke or transient ischemic attack (TIA), diabetes, high cholesterol, cigarette smoking and atrial fibrillation .[2] High blood pressure is the most
important modifiable risk factor of stroke. [2] A silent stroke is a stroke that does not have any outward
symptoms, and the patients
are typically unaware they
have suffered a stroke.
Despite not causing
identifiable symptoms, a silent stroke still causes damage to
the brain, and places the
patient at increased risk for
both transient ischemic attack and major stroke in the
future. Conversely, those who
have suffered a major stroke
are at risk of having silent strokes. [4] In a broad study in 1998, more than 11 million
people were estimated to
have experienced a stroke in
the United States.
Approximately 770,000 of
these strokes were symptomatic and 11 million
were first-ever silent MRI
infarcts or hemorrhages. Silent strokes typically cause lesions which are detected via the use
of neuroimaging such as MRI. Silent strokes are estimated to
occur at five times the rate of symptomatic strokes. [5][6] The risk of silent stroke
increases with age, but may
also affect younger adults and
children, especially those with acute anemia.[5][7] An ischemic stroke is
occasionally treated in a
hospital with thrombolysis (also known as a "clot
buster"), and some
hemorrhagic strokes benefit
from neurosurgery. Treatment to recover any lost
function is termed stroke rehabilitation, ideally in a stroke unit and involving health professions such as speech and language therapy, physical therapy and occupational therapy . Prevention of recurrence may
involve the administration of antiplatelet drugs such as aspirin and dipyridamole , control and reduction of hypertension , and the use of statins. Selected patients may benefit from carotid endarterectomy and the use of anticoagulants.[2] Classification A slice of brain from the autopsy of a person who suffered an acute middle cerebral artery (MCA) stroke Strokes can be classified into
two major categories: ischemic and hemorrhagic.[8] Ischemic strokes are those
that are caused by
interruption of the blood
supply, while hemorrhagic
strokes are the ones which
result from rupture of a blood vessel or an abnormal vascular structure. About 87% of
strokes are caused by
ischemia, and the remainder
by hemorrhage. Some
hemorrhages develop inside
areas of ischemia ("hemorrhagic
transformation"). It is
unknown how many
hemorrhages actually start as ischemic stroke. [2] Ischemic Main articles: Cerebral infarction and Brain ischemia In an ischemic stroke, blood
supply to part of the brain is
decreased, leading to
dysfunction of the brain tissue
in that area. There are four
reasons why this might happen: 1. Thrombosis (obstruction of
a blood vessel by a blood
clot forming locally) 2. Embolism (obstruction due
to an embolus from
elsewhere in the body, see below), [2] 3. Systemic hypoperfusion
(general decrease in blood supply, e.g., in shock)[9] 4. Venous thrombosis.[10] Stroke without an obvious
explanation is termed
"cryptogenic" (of unknown
origin); this constitutes 30–40% of all ischemic strokes.[2][11] There are various classification
systems for acute ischemic
stroke. The Oxford
Community Stroke Project
classification (OCSP, also
known as the Bamford or Oxford classification) relies
primarily on the initial
symptoms; based on the
extent of the symptoms, the
stroke episode is classified as total anterior circulation
infarct (TACI), partial anterior circulation infarct (PACI), lacunar infarct (LACI) or posterior circulation infarct (POCI). These four entities
predict the extent of the
stroke, the area of the brain
affected, the underlying cause, and the prognosis.[12][13] The TOAST (Trial of Org 10172 in Acute Stroke Treatment)
classification is based on
clinical symptoms as well as
results of further
investigations; on this basis, a
stroke is classified as being due to (1) thrombosis or
embolism due to atherosclerosis of a large artery, (2) embolism of cardiac origin, (3) occlusion of a small
blood vessel, (4) other
determined cause, (5)
undetermined cause (two
possible causes, no cause
identified, or incomplete investigation). [2][14] Hemorrhagic Main articles: Intracranial hemorrhage and intracerebral hemorrhage An intraparenchymal bleed (bottom arrow) with surrounding edema (top arrow) Intracranial hemorrhage is the
accumulation of blood
anywhere within the skull
vault. A distinction is made
between intra-axial hemorrhage (blood inside the brain) and extra-axial hemorrhage (blood inside the skull but outside the brain).
Intra-axial hemorrhage is due
to intraparenchymal hemorrhage or intraventricular hemorrhage (blood in the ventricular
system). The main types of
extra-axial hemorrhage are epidural hematoma (bleeding between the dura mater and the skull), subdural hematoma (in the subdural space), and subarachnoid hemorrhage (between the arachnoid mater and pia mater). Most of the hemorrhagic stroke
syndromes have specific
symptoms (e.g., headache, previous head injury). Signs and symptoms Stroke symptoms typically
start suddenly, over seconds
to minutes, and in most cases
do not progress further. The
symptoms depend on the area
of the brain affected. The more extensive the area of
brain affected, the more
functions that are likely to be
lost. Some forms of stroke can
cause additional symptoms.
For example, in intracranial hemorrhage, the affected area
may compress other
structures. Most forms of
stroke are not associated with headache, apart from subarachnoid hemorrhage and
cerebral venous thrombosis
and occasionally intracerebral
hemorrhage. Early recognition Various systems have been
proposed to increase
recognition of stroke by
patients, relatives and
emergency first responders. A systematic review , updating a previous systematic review
from 1994, looked at a
number of trials to evaluate
how well different physical examination findings are able to predict the presence or
absence of stroke. It was
found that sudden-onset face
weakness, arm drift (i.e., if a
person, when asked to raise
both arms, involuntarily lets one arm drift downward) and
abnormal speech are the
findings most likely to lead to
the correct identification of a
case of stroke (+ likelihood ratio of 5.5 when at least one of these is present). Similarly,
when all three of these are
absent, the likelihood of
stroke is significantly
decreased (– likelihood ratio of 0.39).[15] While these findings are not perfect for
diagnosing stroke, the fact
that they can be evaluated
relatively rapidly and easily
makes them very valuable in
the acute setting. Proposed systems include FAST (stroke) (face, arm, speech, and time),[16] as advocated by the Department of Health (United Kingdom) and The Stroke Association , the American Stroke
Association
(www.strokeassociation.org) ,
National Stroke Association
(US www.stroke.org), the Los
Angeles Prehospital Stroke Screen (LAPSS) [17] and the Cincinnati Prehospital Stroke Scale (CPSS).[18] Use of these scales is recommended by professional guidelines.[19] For people referred to the emergency room , early recognition of stroke is
deemed important as this can
expedite diagnostic tests and
treatments. A scoring system
called ROSIER (recognition of
stroke in the emergency room) is recommended for
this purpose; it is based on
features from the medical
history and physical examination. [19][20] Subtypes If the area of the brain
affected contains one of the
three prominent central nervous system pathways — the spinothalamic tract, corticospinal tract, and dorsal column (medial lemniscus), symptoms may include: hemiplegia and muscle weakness of the face numbness reduction in sensory or
vibratory sensation initial flaccidity
(hypotonicity), replaced by
spasticity (hypertonicity),
hyperreflexia, and obligatory synergies. [21] In most cases, the symptoms
affect only one side of the
body ( unilateral). Depending on the part of the brain
affected, the defect in the
brain is usually on the
opposite side of the body.
However, since these
pathways also travel in the spinal cord and any lesion there can also produce these
symptoms, the presence of
any one of these symptoms
does not necessarily indicate a
stroke. In addition to the above CNS
pathways, the brainstem gives rise to most of the twelve cranial nerves. A stroke affecting the brain stem and
brain therefore can produce
symptoms relating to deficits
in these cranial nerves: altered smell, taste, hearing,
or vision (total or partial) drooping of eyelid ( ptosis) and weakness of ocular muscles decreased reflexes: gag,
swallow, pupil reactivity
to light decreased sensation and
muscle weakness of the
face balance problems and nystagmus altered breathing and heart
rate weakness in sternocleidomastoid muscle with inability to turn head
to one side weakness in tongue
(inability to protrude and/
or move from side to side) If the cerebral cortex is involved, the CNS pathways
can again be affected, but also
can produce the following
symptoms: aphasia (difficulty with verbal expression, auditory comprehension, reading, and/or writing; Broca's or Wernicke's area typically involved) dysarthria (motor speech disorder resulting from
neurological injury) apraxia (altered voluntary movements) visual field defect memory deficits
(involvement of temporal lobe) hemineglect (involvement of parietal lobe) disorganized thinking,
confusion, hypersexual gestures (with
involvement of frontal
lobe) anosognosia (persistent
denial of the existence of a,
usually stroke-related,
deficit) If the cerebellum is involved, the patient may have the
following: trouble walking altered movement
coordination vertigo and/or disequilibrium Associated symptoms Loss of consciousness, headache, and vomiting usually occur more often in
hemorrhagic stroke than in
thrombosis because of the
increased intracranial pressure
from the leaking blood compressing the brain. If symptoms are maximal at
onset, the cause is more likely
to be a subarachnoid
hemorrhage or an embolic
stroke.
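The likelihood ratios quoted earlier for the physical examination findings (+LR 5.5 when at least one of face weakness, arm drift, or abnormal speech is present; −LR 0.39 when all three are absent) can be applied to a pretest probability via Bayes' rule on odds. The 10% pretest probability below is an assumed example value, not a figure from the text:

```python
# Applying the likelihood ratios quoted above via Bayes' rule on
# odds. The 10% pretest probability is an assumed example value.
def posttest_probability(pretest: float, lr: float) -> float:
    odds = pretest / (1 - pretest)   # convert probability to odds
    post_odds = odds * lr            # multiply by the likelihood ratio
    return post_odds / (1 + post_odds)  # convert back to probability

pre = 0.10
print(f"finding present: {posttest_probability(pre, 5.5):.0%}")   # ~38%
print(f"all absent:      {posttest_probability(pre, 0.39):.0%}")  # ~4%
```

This illustrates why the findings are useful bedside screens: a single positive finding roughly quadruples the probability of stroke at this pretest level, while three negative findings more than halve it.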
CEREBRAL HEMORRHAGE
A cerebral hemorrhage or haemorrhage (or intracerebral hemorrhage , ICH ) is a subtype of intracranial hemorrhage that occurs within the brain tissue itself. Intracerebral
hemorrhage can be caused by brain trauma, or it can occur spontaneously in hemorrhagic stroke . Non-traumatic intracerebral hemorrhage is a
spontaneous bleeding into the brain tissue.[1] A cerebral hemorrhage is an intra-axial hemorrhage ; that is, it occurs within the
brain tissue rather than
outside of it. The other
category of intracranial
hemorrhage is extra-axial
hemorrhage, such as epidural, subdural, and subarachnoid hematomas, which all occur within the skull but outside of
the brain tissue. There are two
main kinds of intra-axial
hemorrhages: intraparenchymal hemorrhage and intraventricular hemorrhages. As with other types of hemorrhages within
the skull, intraparenchymal
bleeds are a serious medical emergency because they can increase intracranial pressure, which if left untreated can
lead to coma and death. The mortality rate for intraparenchymal bleeds is over 40%. [2] Signs and symptoms Patients with
intraparenchymal bleeds have
symptoms that correspond to
the functions controlled by
the area of the brain that is damaged by the bleed. [3] Other symptoms include those
that indicate a rise in intracranial pressure due to a large mass putting pressure on the brain.[3] Intracerebral hemorrhages are often
misdiagnosed as subarachnoid hemorrhages due to the similarity in symptoms and
signs. A severe headache
followed by vomiting is one
of the more common
symptoms of intracerebral
hemorrhage. Some patients may also go into a coma
before the bleed is noticed. Causes CT scan showing hemorrhage in the posterior fossa[1] Intracerebral bleeds are the
second most common cause of stroke , accounting for 30–60% of hospital admissions for stroke. [1]High blood pressure raises the risks of spontaneous
intracerebral hemorrhage by two to six times. [1] More common in adults than in
children, intraparenchymal
bleeds due to trauma are
usually due to penetrating head trauma, but can also be due to depressed skull fractures. Acceleration- deceleration trauma,[4][5][6] rupture of an aneurysm or arteriovenous malformation (AVM), and bleeding within a tumor are additional causes. Amyloid angiopathy is a not
uncommon cause of
intracerebral hemorrhage in
patients over the age of 55. A
very small proportion is due
to cerebral venous sinus thrombosis. Infection with the k serotype of Streptococcus mutans may also be a risk factor, due to its
prevalence in stroke patients
and production of collagen-binding protein.[7] Risk factors for ICH include:[8] hypertension, diabetes, menopause, current cigarette smoking, and alcoholic drinks (≥2/day). Traumatic intracerebral hematomas are divided into acute and delayed. Acute intracerebral hematomas occur at the time of the injury, while delayed intracerebral hematomas have been reported from as early as 6 hours post-injury to as long as several weeks. It is important to keep in mind that intracerebral hematomas can be delayed, because if
symptoms begin to appear
several weeks after the
injury, concussion is no longer
considered and the symptoms
may not be connected to the injury. Diagnosis Spontaneous ICH with hydrocephalus on CT scan[1] Intraparenchymal
hemorrhage can be recognized
on CT scans because blood appears brighter than other
tissue and is separated from
the inner table of the skull by
brain tissue. The tissue
surrounding a bleed is often
less dense than the rest of the brain due to edema, and therefore shows up darker on
the CT scan. Treatment Treatment depends substantially on the type of ICH. Rapid CT scanning and other diagnostic measures are used
to determine proper
treatment, which may include
both medication and surgery. Medication Antihypertensive therapy is used in acute phases. The AHA/
ASA and EUSI guidelines
(American Heart
Association/American
Stroke Association guidelines and the European
Stroke Initiative guidelines)
have recommended
antihypertensive therapy
to stabilize the mean arterial pressure at 110 mmHg. One paper showed the efficacy of this antihypertensive therapy, without worsening of outcome, in patients with hypertensive intracerebral hemorrhage treated within 3 hours of onset.[9] Giving Factor VIIa within 4 hours limits the bleeding
and formation of a hematoma. However, it also increases the risk of thromboembolism.[10] Mannitol is effective in acutely reducing raised
intracranial pressure. Acetaminophen may be needed to avoid hyperthermia and to relieve headache.[10] Frozen plasma, vitamin K, protamine, or platelet transfusions are given in case of a coagulopathy.[10] Fosphenytoin or another anticonvulsant is given in case of seizures or lobar hemorrhage.[10] H2 antagonists or proton
pump inhibitors are
commonly given for stress ulcer prophylaxis, a condition associated with ICH.[10] Corticosteroids, in concert with antihypertensives, reduce swelling.[11] Surgery Surgery is required if the hematoma is greater than 3 cm (1 in), or if there is a
structural vascular lesion or lobar hemorrhage in a young patient.[10] A catheter may be passed into the brain vasculature to close off or dilate blood vessels, avoiding invasive surgical procedures.[12] Aspiration by stereotactic surgery or endoscopic drainage may be used in basal ganglia hemorrhages, although successful reports are limited.[10] Other treatment Tracheal intubation is indicated in patients with
decreased level of
consciousness or other risk of airway obstruction. [10] IV fluids are given to maintain fluid balance, using normotonic rather than hypotonic fluids. [10] Prognosis The risk of death from an
intraparenchymal bleed in
traumatic brain injury is
especially high when the
injury occurs in the brain stem.[2] Intraparenchymal bleeds within the medulla oblongata are almost always fatal, because they cause
damage to cranial nerve X, the vagus nerve , which plays an important role in blood circulation and breathing.[4] This kind of hemorrhage can
also occur in the cortex or subcortical areas, usually in
the frontal or temporal lobes when due to head injury, and
sometimes in the cerebellum. [4][13] For spontaneous ICH seen on
CT scan, the death rate
(mortality) is 34–50% by 30 days after the insult,[1] and half of the deaths occur in the first 2 days.[14] The inflammatory response triggered by stroke has been viewed as harmful, with attention focused on the influx and migration of blood-borne leukocytes, neutrophils, and macrophages. A newer area of interest is mast cells.[15] Epidemiology It accounts for 20% of all cases
of cerebrovascular disease in the US, behind cerebral thrombosis (40%) and cerebral embolism (30%).[16] It is two or more times more prevalent in black American patients than in white patients.
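The prognosis figures above lend themselves to a quick back-of-the-envelope calculation. The sketch below (in Python, using a hypothetical cohort of 1,000 patients; the cohort size is my assumption, not from the source) simply combines the quoted 30-day mortality range with the observation that half of deaths occur in the first 2 days:

```python
# Rough arithmetic on the mortality figures quoted above (illustrative
# only; the 1,000-patient cohort is hypothetical).
cohort = 1000
mortality_30d_low, mortality_30d_high = 0.34, 0.50  # 34-50% dead by 30 days [1]

deaths_low = int(cohort * mortality_30d_low)
deaths_high = int(cohort * mortality_30d_high)
# About half of those deaths occur within the first 2 days [14]:
early_low, early_high = deaths_low // 2, deaths_high // 2

print(f"30-day deaths per 1,000 patients: {deaths_low}-{deaths_high}")
print(f"of which within the first 2 days: {early_low}-{early_high}")
```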
Saturday 21 January 2012
AUXIN
Auxins are a class of plant hormones (or plant growth substances) with some morphogen-like characteristics. Auxins have a
cardinal role in coordination of
many growth and behavioral
processes in the plant's life
cycle and are essential for
plant body development. Auxins and their role in plant
growth were first described
by the Dutch scientist Frits Went.[1]Kenneth V. Thimann isolated this phytohormone
and determined its chemical
structure as indole-3-acetic acid. Went and Thimann then co-authored a book on plant hormones, Phytohormones, in 1937. Native auxins Indole-3-acetic acid (IAA) is the most abundant auxin occurring and functioning natively in plants. It generates the majority of auxin effects in intact plants, and is the most potent native auxin. There are three more native (endogenous) auxins.[2] All auxins are compounds with an aromatic ring and a carboxylic acid group:[3] 4-chloroindole-3-acetic acid (4-Cl-IAA), 2-phenylacetic acid (PAA), and indole-3-butyric acid (IBA). For representatives of synthetic auxins, see the section Synthetic auxins. Overview Auxins derive their name
from the Greek word αυξειν (auxein - "to grow/increase").
They were the first of the
major plant hormones to be discovered. The dynamic, environment-responsive pattern of auxin distribution within the plant is a key
factor for plant growth, its
reaction to its environment,
and specifically for
development of plant organs [4][5] (such as leaves or flowers ). It is achieved through very complex and
well coordinated active transport of auxin molecules
from cell to cell throughout the plant body, by the so-called polar auxin transport.[4] Thus, a plant can (as a whole)
react to external conditions
and adjust to them, without
requiring a nervous system . Auxins typically act in concert
with, or in opposition to,
other plant hormones. For
example, the ratio of auxin to cytokinin in certain plant tissues determines initiation of
root versus shoot buds. On the molecular level, all
auxins are compounds with
an aromatic ring and a carboxylic acid group. [3] The most important member of
the auxin family is indole-3- acetic acid (IAA). [2] IAA generates the majority of
auxin effects in intact plants,
and is the most potent native
auxin. As the native auxin, its stability is controlled in many ways in plants, from synthesis through possible conjugation to degradation of its molecules, always according to the requirements of the situation. However,
molecules of IAA are
chemically labile in aqueous solution, so it is not used
commercially as a plant
growth regulator. The four naturally
occurring (endogenous)
auxins are IAA, 4- chloroindole-3-acetic acid, phenylacetic acid and indole-3-butyric acid ; only these four were found to be synthesized by plants. [2] However, most of the
knowledge described so far
in auxin biology and as
described in the article
below, apply basically to
IAA; the other three endogenous auxins seems
to have rather marginal
importance for intact plants
in natural environments.
Alongside endogenous
auxins, scientists and manufacturers have
developed many synthetic
compounds with auxinic
activity. Synthetic auxin analogs include 1-naphthaleneacetic acid, 2,4- dichlorophenoxyacetic acid (2,4-D),[2] and many others. Some synthetic auxins, such as
2,4-D and 2,4,5- trichlorophenoxyacetic acid (2,4,5-T), are used also as herbicides. Broad-leaf plants (dicots), such as dandelions, are much more susceptible to
auxins than narrow-leaf
plants (monocots) such as grasses and cereal crops, so these synthetic auxins are valuable as selective herbicides. Auxins are also often used to
promote initiation of
adventitious roots, and are the active ingredient of the
commercial preparations used
in horticulture to root stem cuttings. They can also be used to promote uniform flowering and fruit set, and to prevent premature fruit drop. Hormonal activity Auxins coordinate
development at all levels in
plants, from the cellular level, through organs, and
ultimately to the whole plant. The plant cell wall is made up of cellulose, proteins, and, in many cases, lignin. It is very firm and prevents any sudden expansion of cell volume (and, without the contribution of auxins, any expansion at all). Molecular mechanisms Auxin molecules present in
cells may trigger responses
directly through stimulation
or inhibition of the expression of sets of certain genes,[6] or by means independent of gene expression. One of the pathways leading
to the changes of gene
expression involves the
reception of auxin by TIR1
protein. In 2005, the F-box protein TIR1, which is part of the ubiquitin ligase complex SCFTIR1, was demonstrated to be an auxin receptor. [7] Upon binding of auxin, TIR1 recruits
specific transcriptional repressors (the Aux/IAA
repressors) for ubiquitination by the SCF complex. This marking process leads to
the degradation of the Aux/IAA repressors by the proteasome. The degradation of the repressors leads, in turn, to potentiation of auxin response factor-mediated transcription of specific genes in response to auxins.[8] Another protein, auxin-
binding protein 1 (ABP1), is a putative receptor for a different signaling pathway, but its role is as yet unclear.
Electrophysiological
experiments with protoplasts and anti-ABP1 antibodies
suggest ABP1 may have a
function at the plasma membrane, and cells can possibly use ABP1 proteins to respond to auxin through means that are faster than, and independent of, gene expression. On a cellular level On the cellular level, auxin is
essential for cell growth , affecting both cell division and cellular expansion. Auxin
concentration level, together
with other local factors,
contributes to cell differentiation and specification of the cell fate. Depending on the specific
tissue, auxin may promote
axial elongation (as in shoots),
lateral expansion (as in root
swelling), or isodiametric
expansion (as in fruit growth). In some cases
(coleoptile growth), auxin-
promoted cellular expansion
occurs in the absence of cell
division. In other cases, auxin-
promoted cell division and cell expansion may be closely
sequenced within the same
tissue (root initiation, fruit
growth). In a living plant,
auxins and other plant
hormones nearly always appear to interact to
determine patterns of plant
development. Organ patterns Growth and division of plant
cells together result in growth
of tissue, and specific tissue growth contributes to the
development of plant organs. Growth of cells contributes to the plant's size; unevenly localized growth produces bending, turning and directionalization of organs: for example, stems turning toward light sources
(phototropism), roots growing in response to
gravity (gravitropism), and other tropisms that originate because cells on one side grow faster than the cells on the other side of the organ. So, precise control of auxin distribution between different cells is of paramount importance to the resulting form of plant growth and organization. Uneven distribution of
auxin To cause growth in the
required domains, auxins
must of necessity be active
preferentially in them. Auxins
are not synthesized in all cells
(even if cells retain the potential ability to do so, only
under specific conditions will
auxin synthesis be activated
in them). For that purpose, auxins have to be not only translocated toward the sites where they are needed, but there must also be an established mechanism to detect those sites. Translocation is driven throughout the plant body,
primarily from peaks of shoots to peaks of roots (from up to down). For long distances, relocation
occurs via the stream of fluid
in phloem vessels, but, for short-distance transport, a
unique system of coordinated
polar transport directly from
cell to cell is exploited. This
short-distance, active
transport exhibits some morphogenetic properties. This process, the polar auxin transport , is directional, very strictly regulated, and
based in uneven distribution
of auxin efflux carriers on the
plasma membrane, which
send auxins in the proper
direction. Pin-formed (PIN) proteins are vital in transporting auxin.[5][9] The regulation of PIN protein localisation in a cell determines the direction of auxin transport out of that cell, and the concerted effort of many cells creates peaks of auxin, or auxin maxima (regions of cells with higher auxin).[5] Proper and timely auxin maxima within
developing roots and shoots
are necessary to organise the development of the organ.[4][10][11] Surrounding the auxin maxima are cells with low auxin: troughs, or auxin minima. For example, in the
Arabidopsis fruit, auxin
minima have been shown to
be important for its tissue development.
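The polar-transport mechanism described above can be illustrated with a deliberately simplified one-dimensional sketch. In the toy Python model below, a file of cells passes auxin "downward" through PIN-like efflux carriers on one face; the cell count, export fraction, and turnover rate are arbitrary assumptions, not measured values, but the qualitative outcome (an auxin maximum at the sink end of the file) matches the maxima-and-minima picture described in the text:

```python
# Toy 1-D model of polar auxin transport (illustrative sketch only;
# all parameters are arbitrary assumptions, not measured values).
# A file of N cells: auxin is synthesized in cell 0 (the "shoot" end),
# and PIN-like efflux carriers export a fixed fraction of each cell's
# auxin to the next cell "down". A maximum builds up in the last cell.
N = 10          # cells in the file
EXPORT = 0.3    # fraction exported downward per step
DECAY = 0.05    # turnover (degradation) per step

auxin = [0.0] * N
for _ in range(200):                 # iterate toward steady state
    flux = [EXPORT * a for a in auxin]
    flux[-1] = 0.0                   # last cell has no downstream neighbour
    for i in range(N):
        auxin[i] -= flux[i]          # efflux out of cell i
        if i > 0:
            auxin[i] += flux[i - 1]  # influx from the cell above
        auxin[i] *= (1 - DECAY)      # turnover
    auxin[0] += 1.0                  # constant synthesis at the "shoot" end

print([round(a, 2) for a in auxin])  # profile ends in the terminal maximum
```

Raising the export fraction sharpens the terminal maximum, while making the transport non-directional spreads auxin out instead, which is one way to see why the polar localisation of PIN carriers matters for forming maxima.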
DANDRUFF
Dandruff [1] (Latin: Pityriasis simplex capillitii [1]) is the shedding of dead skin cells from the scalp (not to be confused with a dry scalp).
Dandruff is sometimes caused
by frequent exposure to
extreme heat and cold. As it is
normal for skin cells to die and
flake off, a small amount of flaking is normal and
common; about 487,000 cells/cm2 are released normally after detergent treatment.[2] Some people, however, either chronically or as a result of certain triggers, experience an unusually large amount of flaking, up to 800,000 cells/cm2, which can also be accompanied by redness and irritation. Most cases of dandruff can be easily treated
with specialized shampoos. Zoomed version of microscopic picture of human dandruff Dandruff is a common scalp
disorder affecting almost half of the population at the post-pubertal age and of any sex
and ethnicity. In some
cultures dandruff is considered aesthetically displeasing. It
often causes itching. It has
been well established that keratinocytes play a key role in the expression and
generation of immunological
reactions during dandruff
formation. The severity of
dandruff may fluctuate with
season as it often worsens in winter. [2] Those affected by dandruff
find that it can cause social or
self-esteem problems.
Treatment may be important
for both physiological and psychological reasons. [3] Causes As the epidermal layer continually replaces itself, cells
are pushed outward where
they eventually die and flake
off. In most people, these
flakes of skin are too small to
be visible. However, certain conditions cause cell turnover
to be unusually rapid,
especially in the scalp. For
people with dandruff, skin
cells may mature and be shed
in 2–7 days, as opposed to around a month in people
without dandruff. The result
is that dead skin cells are shed
in large, oily clumps, which
appear as white or grayish
patches on the scalp, skin and clothes. Malassezia furfur species causes dandruff Dandruff has been shown to
be the result of three required factors:[4] 1. Skin oil commonly referred
to as sebum or sebaceous secretions[5] 2. The metabolic by-products
of skin micro-organisms
(most specifically Malassezia yeasts )[6][7][8] [9][10] 3. Individual susceptibility Older literature cites the fungus Malassezia furfur (previously known as
Pityrosporum ovale) as the
cause of dandruff. While this
species does occur naturally on
the skin surface of both
healthy people and those with dandruff, in 2007 it was
discovered that the
responsible agent is a scalp-specific fungus, Malassezia globosa,[11] that metabolizes triglycerides present in sebum by the expression of lipase,
resulting in a lipid byproduct, oleic acid (OA). During dandruff, levels of Malassezia increase to 1.5 to 2 times the normal level.[2] Penetration by OA of the top
layer of the epidermis, the stratum corneum, results in an inflammatory response in
susceptible persons which
disturbs homeostasis and results in erratic cleavage of stratum corneum cells.[8] Rarely, dandruff can be a
manifestation of an allergic
reaction to chemicals in hair
gels, sprays, and shampoos,
hair oils, or sometimes even
dandruff medications like ketoconazole .[citation needed] There is some evidence that
excessive perspiration and climate have significant roles
in the pathogenesis of dandruff.[citation needed] Dandruff composition Dandruff scale is a cluster of
corneocytes, which have
retained a large degree of
cohesion with one another
and detach as such from the
surface of the stratum corneum. The size and
abundance of scales are
heterogeneous from one site
to another and over time.
Parakeratotic cells often make
up part of dandruff. Their numbers are related to the
severity of the clinical
manifestations, which may
also be influenced by seborrhea.[2] Seborrhoeic dermatitis Main article: Seborrhoeic dermatitis Flaking is a symptom of seborrhoeic dermatitis. Joseph Bark notes that "Redness and
itching is actually seborrheic
dermatitis, and it frequently
occurs around the folds of the
nose and the eyebrow areas,
not just the scalp." Dry, thick, well-defined lesions consisting
of large, silvery scales may be
traced to the less common psoriasis of the scalp. The spectrum of dandruff is
difficult to define because it
blurs with seborrhoeic
dermatitis and some other
scaly conditions. Inflammation and extension of scaling outside the scalp distinguish seborrhoeic dermatitis from dandruff.[5] However, many reports suggest a clear link between the two clinical entities, with dandruff as the mildest clinical presentation of seborrhoeic dermatitis, in which the inflammation is minimal and remains subclinical. Histological
examination reveals the
scattered presence of
lymphoid cells and squirting
capillaries in the papillary dermis with hints of
spongiosis and focal parakeratosis. [12][13] Seasonal changes, stress, and
immuno-suppression seem to affect seborrheic dermatitis. [2] Treatment Shampoos use a combination
of ingredients to control
dandruff. The pathogenesis of
dandruff involves
hyperproliferation of
keratinocytes, resulting in deregulation of keratinization.
The corneocytes clump
together, manifesting as large
flakes of skin. Essentially,
keratolytic agents such as
salicylic acid and sulphur loosen the attachments
between the corneocytes and allow them to be washed away. Regulators of
keratinization Zinc pyrithione (ZPT) heals the scalp by normalizing epithelial keratinization, sebum production, or both.
Some studies have shown a
significant reduction in the
number of yeasts after use of ZPT, which is an antifungal and antibacterial agent. [14] A study by Warner et al. [15] demonstrates a dramatic
reduction of structural
abnormalities found in
dandruff with the use of ZPT;
the population abundance of Malassezia decreases, parakeratosis is eliminated, and corneocyte lipid inclusions are diminished.[2] Steroids The parakeratotic properties of topical corticosteroids depend on the structure of the agent, the vehicle, and the skin to which they are applied. Corticosteroids work via their anti-inflammatory and antiproliferative effects.[16] Selenium sulfide It is believed that selenium sulfide controls dandruff via its anti-Malassezia effect rather than by its antiproliferative
effect, although it has an
effect in reducing cell
turnover. It has anti- seborrheic properties as well as cytostatic effect on cells of
the epidermal and follicular epithelium. Excessive oiliness after use of this agent has been reported by many patients as an adverse drug effect. Imidazole antifungal
agents Imidazole topical antifungals
such as ketoconazole act by
blocking the biosynthesis of
ergosterol, the primary sterol
derivative of the fungal cell
membrane. Changes in membrane permeability
caused by ergosterol depletion
are incompatible with fungal growth and survival. [17] Ketoconazole is a broad
spectrum, antimycotic agent
that is active against both
Candida and M. furfur . Of all
the imidazoles, ketoconazole
has become the leading contender among treatment
options because of its
effectiveness in treating
seborrheic dermatitis as well. [2]
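The flaking and turnover figures quoted in this article can be combined into a quick comparison (illustrative arithmetic only; the fold-change framing is an editorial addition, not from the source):

```python
# Quick arithmetic on the flaking and turnover figures quoted above
# (illustration only; the fold-change framing is an editorial addition).
normal_flaking = 487_000         # cells/cm2 released normally [2]
dandruff_flaking = 800_000       # cells/cm2 in heavy flaking
normal_turnover_days = 30        # "around a month" without dandruff
dandruff_turnover_days = (2, 7)  # maturation-and-shedding time with dandruff

flaking_ratio = dandruff_flaking / normal_flaking
speedup_max = normal_turnover_days / dandruff_turnover_days[0]
speedup_min = normal_turnover_days / dandruff_turnover_days[1]

print(f"flaking increase: {flaking_ratio:.2f}x")
print(f"turnover speed-up: {speedup_min:.1f}x to {speedup_max:.0f}x faster")
```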
MALARIA
Malaria is a mosquito-borne infectious disease of humans and other animals caused by eukaryotic protists of the genus Plasmodium. The disease results from the multiplication
of Plasmodium parasites
within red blood cells, causing symptoms that typically
include fever and headache, in severe cases progressing to coma or death. It is widespread in tropical and subtropical regions, including
much of Sub-Saharan Africa , Asia, and the Americas. Five species of Plasmodium
can infect and be transmitted
by humans. Severe disease is
largely caused by Plasmodium falciparum while the disease caused by Plasmodium vivax , Plasmodium ovale ,[1] and Plasmodium malariae is generally a milder disease that
is rarely fatal. Plasmodium knowlesi is a zoonosis that causes malaria in macaques but can also infect humans. [2] [3] Malaria transmission can be
reduced by preventing
mosquito bites by distribution
of mosquito nets and insect repellents, or by mosquito- control measures such as
spraying insecticides and draining standing water
(where mosquitoes breed).
Despite a clear need, no vaccine offering a high level of protection currently exists.
Efforts to develop one are ongoing.[4] A number of medications are also available
to prevent malaria in travelers
to malaria-endemic countries (prophylaxis ). A variety of antimalarial medications are available. Severe malaria is treated with
intravenous or intramuscular
quinine or, since the
mid-2000s, the artemisinin derivative artesunate,[5] which is superior to quinine in both children and adults.[6] Resistance has developed to
several antimalarial drugs, most notably chloroquine.[7] There were an estimated 225
million cases of malaria worldwide in 2009. [8] An estimated 655,000 people died from malaria in 2010,[9] a 5% decrease from the 781,000
who died in 2009 according to
the World Health Organization's 2011 World Malaria Report, accounting for 2.23% of deaths worldwide. [8] Ninety percent of malaria-
related deaths occur in sub-
Saharan Africa, with the
majority of deaths being
young children. Plasmodium falciparum, the cause of the most severe form of malaria, is responsible for the vast majority of deaths associated with the disease.[10] Malaria is commonly associated with
poverty, and can indeed be a cause of poverty [11] and a major hindrance to economic development . Signs and symptoms Main symptoms of malaria. [12] Typical fever patterns of malaria Symptoms of malaria include fever , shivering , arthralgia (joint pain), vomiting , anemia (caused by hemolysis ), jaundice, hemoglobinuria, retinal damage,[13] and convulsions . The classic symptom of malaria is cyclical
occurrence of sudden coldness
followed by rigor and then fever and sweating lasting
four to six hours, occurring
every two days in P. vivax
and P. ovale infections, and
every three days for P. malariae.[14]P. falciparum can have recurrent fever every
36–48 hours or a less
pronounced and almost
continuous fever. For reasons
that are poorly understood,
but that may be related to high intracranial pressure, children with malaria
frequently exhibit abnormal posturing, a sign indicating severe brain damage. [15] Malaria has been found to
cause cognitive impairments,
especially in children. It causes
widespread anemia during a period of rapid brain
development and also direct
brain damage. This neurologic
damage results from cerebral
malaria to which children are more vulnerable. [16][17] Cerebral malaria is associated with retinal whitening, [18] which may be a useful clinical
sign in distinguishing malaria
from other causes of fever. [19] Severe malaria is almost
exclusively caused by
Plasmodium falciparum
infection, and usually arises 6– 14 days after infection. [20] Consequences of severe
malaria include coma and death if untreated—young
children and pregnant women
are especially vulnerable. Splenomegaly (enlarged spleen), severe headache, cerebral ischemia, hepatomegaly (enlarged liver), hypoglycemia , and hemoglobinuria with renal failure may occur. Renal failure is a feature of blackwater fever , where hemoglobin from lysed red
blood cells leaks into the urine.
Severe malaria can progress
extremely rapidly and cause
death within hours or days. [20] In the most severe cases of the disease, fatality rates
can exceed 20%, even with
intensive care and treatment. [21] In endemic areas, treatment is often less
satisfactory and the overall
fatality rate for all cases of
malaria can be as high as one in ten.[22] Over the longer term, developmental
impairments have been
documented in children who
have suffered episodes of severe malaria. [23] Cause A Plasmodium sporozoite traverses the cytoplasm of a mosquito midgut epithelial cell in this false-color electron micrograph. Malaria parasites are members of the genus Plasmodium (phylum Apicomplexa ). In humans malaria is caused by P. falciparum, P. malariae, P. ovale , P. vivax and P. knowlesi .[24][25] While P. vivax is responsible for the
largest number of malaria
infections worldwide,
infections by P. falciparum
account for about 90% of the deaths from malaria.[26] Parasitic Plasmodium species
also infect birds, reptiles,
monkeys, chimpanzees and rodents.[27] There have been documented human infections
with several simian species of malaria; however, with the
exception of P. knowlesi,
these are mostly of limited public health importance.[28] Malaria parasites contain apicoplasts, an organelle usually found in plants,
complete with their own
functioning genomes. These apicoplasts are thought to have originated through the endosymbiosis of algae[29] and play a crucial role in
various aspects of parasite
metabolism, e.g. fatty acid biosynthesis.[30] To date, 466 proteins have been found to be produced by apicoplasts,[31] and these are now being
looked at as possible targets
for novel anti-malarial drugs. Life cycle The parasite's secondary hosts are humans and other
vertebrates. Female mosquitoes of the Anopheles genus are the primary, i.e. definitive hosts and act as transmission vectors . Young mosquitoes first ingest the
malaria parasite by feeding on
an infected human carrier and
the infected Anopheles
mosquitoes carry Plasmodium sporozoites in their salivary glands. A mosquito becomes infected when it takes a blood
meal from an infected human.
Once ingested, the parasite gametocytes taken up in the blood will further
differentiate into male or
female gametes and then fuse in the mosquito's gut. This
produces an ookinete that penetrates the gut lining and
produces an oocyst in the gut wall. When the oocyst
ruptures, it releases
sporozoites that migrate
through the mosquito's body
to the salivary glands, where
they are then ready to infect a new human host. This type of
transmission is occasionally
referred to as anterior station transfer.[32] The sporozoites are injected into the skin,
alongside saliva, when the
mosquito takes a subsequent
blood meal. Only female mosquitoes feed
on blood while male
mosquitoes feed on plant nectar,[33] thus males do not transmit the disease. The
females of the Anopheles
genus of mosquito prefer to
feed at night. They usually
start searching for a meal at
dusk, and will continue throughout the night until
taking a meal. Malaria
parasites can also be
transmitted by blood transfusions, although this is rare.[34] Recurrent malaria Malaria recurs after treatment
for three reasons.
Recrudescence occurs when
parasites are not cleared by
treatment, whereas
reinfection indicates complete clearance with new infection
established from a separate
infective mosquito bite; both
can occur with any malaria
parasite species. Relapse is
specific to P. vivax and P. ovale and involves re-
emergence of blood-stage
parasites from latent parasites
(hypnozoites) in the liver.
Describing a case of malaria as
cured by observing the disappearance of parasites
from the bloodstream can,
therefore, be deceptive. The
longest incubation period
reported for a P. vivax infection is 30 years. [20] Approximately one in five of
P. vivax malaria cases in temperate areas involve overwintering by hypnozoites (i.e., relapses
begin the year after the mosquito bite).[35] Pathogenesis Further information: Plasmodium falciparum
biology The life cycle of malaria parasites in the human body. A mosquito infects a person by taking a blood meal. First, sporozoites enter the bloodstream, and migrate to the liver. They infect liver cells (hepatocytes), where they multiply into merozoites, rupture the liver cells, and escape back into the bloodstream. Then, the merozoites infect red blood cells, where they develop into ring forms, trophozoites and schizonts which in turn produce further merozoites.
Sexual forms (gametocytes) are also produced, which, if taken up by a mosquito, will
infect the insect and continue the life cycle. Malaria develops via two
phases: an exoerythrocytic
and an erythrocytic phase. The
exoerythrocytic phase
involves infection of the
hepatic system, or liver, whereas the erythrocytic
phase involves infection of
the erythrocytes, or red blood
cells. When an infected
mosquito pierces a person's
skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream
and migrate to the liver . Within minutes of being
introduced into the human
host, the sporozoites infect hepatocytes , multiplying asexually and
asymptomatically for a period of 8–30 days. [36] Once in the liver, these organisms
differentiate to yield
thousands of merozoites, which, following rupture of
their host cells, escape into the
blood and infect red blood cells, thus beginning the erythrocytic stage of the life cycle. [36] The parasite escapes from the liver undetected by
wrapping itself in the cell
membrane of the infected host liver cell. [37] Within the red blood cells, the
parasites multiply further,
again asexually, periodically
breaking out of their hosts to
invade fresh red blood cells.
Several such amplification cycles occur. Thus, classical
descriptions of waves of
fever arise from simultaneous
waves of merozoites escaping
and infecting red blood cells. Some P. vivax and P. ovale
sporozoites do not
immediately develop into
MALARIA
Malaria is a mosquito-borne infectious disease of humans and other animals caused by eukaryotic protists of the genus Plasmodium. The disease results from the multiplication
of Plasmodium parasites
within red blood cells, causing symptoms that typically
include fever and headache, in severe cases progressing to coma or death. It is widespread in tropical and subtropical regions, including
much of Sub-Saharan Africa , Asia, and the Americas. Five species of Plasmodium
can infect and be transmitted
by humans. Severe disease is
largely caused by Plasmodium falciparum, while the disease caused by Plasmodium vivax, Plasmodium ovale,[1] and Plasmodium malariae is generally milder and rarely fatal. Plasmodium knowlesi is a zoonotic species that causes malaria in macaques and can also infect humans. [2] [3] Malaria transmission can be reduced by preventing mosquito bites through the distribution of mosquito nets and insect repellents, or by mosquito-control measures such as spraying insecticides and draining the standing water where mosquitoes breed.
Despite a clear need, no vaccine offering a high level of protection currently exists.
Efforts to develop one are ongoing.[4] A number of medications are also available
to prevent malaria in travelers
to malaria-endemic countries (prophylaxis ). A variety of antimalarial medications are available. Severe malaria is treated with
intravenous or intramuscular
quinine or, since the
mid-2000s, the artemisinin derivative artesunate,[5] which is superior to quinine in both children and adults.[6] Resistance has developed to
several antimalarial drugs, most notably chloroquine.[7] There were an estimated 225
million cases of malaria worldwide in 2009. [8] An estimated 655,000 people died from malaria in 2010,[9] a 5% decrease from the 781,000
who died in 2009 according to
the World Health Organization's 2011 World Malaria Report, accounting for 2.23% of deaths worldwide. [8] Ninety percent of malaria-related deaths occur in sub-Saharan Africa, the majority of them in young children. Plasmodium falciparum, which causes the most severe form of malaria, is responsible for the vast majority of deaths associated with the disease.[10] Malaria is commonly associated with
poverty, and can indeed be a cause of poverty [11] and a major hindrance to economic development.
Signs and symptoms
[Figures: main symptoms of malaria; [12] typical fever patterns of malaria.]
Symptoms of malaria include fever, shivering, arthralgia (joint pain), vomiting, anemia (caused by hemolysis), jaundice, hemoglobinuria, retinal damage,[13] and convulsions. The classic symptom of malaria is the cyclical
occurrence of sudden coldness
followed by rigor and then fever and sweating lasting
four to six hours, occurring
every two days in P. vivax
and P. ovale infections, and
every three days for P. malariae. [14] P. falciparum can have recurrent fever every
36–48 hours or a less
pronounced and almost
continuous fever. For reasons
that are poorly understood,
but that may be related to high intracranial pressure, children with malaria
frequently exhibit abnormal posturing, a sign indicating severe brain damage. [15] Malaria has been found to
cause cognitive impairments,
especially in children. It causes
widespread anemia during a period of rapid brain
development and also direct
brain damage. This neurologic
damage results from cerebral
malaria to which children are more vulnerable. [16][17] Cerebral malaria is associated with retinal whitening, [18] which may be a useful clinical
sign in distinguishing malaria
from other causes of fever. [19] Severe malaria is almost
exclusively caused by
Plasmodium falciparum
infection, and usually arises 6–14 days after infection. [20] Consequences of severe
malaria include coma and death if untreated—young
children and pregnant women
are especially vulnerable. Splenomegaly (enlarged spleen), severe headache, cerebral ischemia, hepatomegaly (enlarged liver), hypoglycemia , and hemoglobinuria with renal failure may occur. Renal failure is a feature of blackwater fever , where hemoglobin from lysed red
blood cells leaks into the urine.
Severe malaria can progress
extremely rapidly and cause
death within hours or days. [20] In the most severe cases of the disease, fatality rates
can exceed 20%, even with
intensive care and treatment. [21] In endemic areas, treatment is often less
satisfactory and the overall
fatality rate for all cases of
malaria can be as high as one in ten.[22] Over the longer term, developmental
impairments have been
documented in children who
have suffered episodes of severe malaria. [23]
Cause
[Figure: a Plasmodium sporozoite traverses the cytoplasm of a mosquito midgut epithelial cell in this false-color electron micrograph.]
Malaria parasites are members of the genus Plasmodium (phylum Apicomplexa). In humans malaria is caused by P. falciparum, P. malariae, P. ovale, P. vivax and P. knowlesi.[24][25] While P. vivax is responsible for the
largest number of malaria
infections worldwide,
infections by P. falciparum
account for about 90% of the deaths from malaria.[26] Parasitic Plasmodium species
also infect birds, reptiles,
monkeys, chimpanzees and rodents.[27] There have been documented human infections
with several simian species of malaria; however, with the
exception of P. knowlesi,
these are mostly of limited public health importance.[28] Malaria parasites contain apicoplasts, organelles usually found in plants, complete with their own functioning genomes. These apicoplasts are thought to have originated through the endosymbiosis of algae[29] and play a crucial role in various aspects of parasite metabolism, e.g. fatty acid biosynthesis. [30] To date, 466 proteins have been found to be produced by apicoplasts [31] and these are now being
looked at as possible targets
for novel anti-malarial drugs.
Life cycle
The parasite's secondary hosts are humans and other vertebrates. Female mosquitoes of the Anopheles genus are the primary, i.e. definitive, hosts and act as transmission vectors. A mosquito becomes infected when it takes a blood meal from an infected human carrier; infected Anopheles mosquitoes then carry Plasmodium sporozoites in their salivary glands.
Once ingested, the parasite gametocytes taken up in the blood will further
differentiate into male or
female gametes and then fuse in the mosquito's gut. This
produces an ookinete that penetrates the gut lining and
produces an oocyst in the gut wall. When the oocyst
ruptures, it releases
sporozoites that migrate
through the mosquito's body
to the salivary glands, where
they are then ready to infect a new human host. This type of
transmission is occasionally
referred to as anterior station transfer.[32] The sporozoites are injected into the skin,
alongside saliva, when the
mosquito takes a subsequent
blood meal. Only female mosquitoes feed
on blood while male
mosquitoes feed on plant nectar,[33] thus males do not transmit the disease. The
females of the Anopheles
genus of mosquito prefer to
feed at night. They usually
start searching for a meal at
dusk, and will continue throughout the night until
taking a meal. Malaria
parasites can also be
transmitted by blood transfusions, although this is rare.[34]
Recurrent malaria
Malaria recurs after treatment
for three reasons.
Recrudescence occurs when
parasites are not cleared by
treatment, whereas
reinfection indicates complete clearance with new infection
established from a separate
infective mosquito bite; both
can occur with any malaria
parasite species. Relapse is
specific to P. vivax and P. ovale and involves re-
emergence of blood-stage
parasites from latent parasites
(hypnozoites) in the liver.
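The three-way distinction above can be summarized in a minimal illustrative sketch; the function and its inputs are a didactic simplification of the definitions in this section, not a clinical algorithm:

```python
# Illustrative classifier for why a malaria episode recurs after treatment.
# A didactic simplification of the text above, not a diagnostic tool.

def classify_recurrence(parasites_cleared_by_treatment: bool,
                        new_mosquito_bite: bool,
                        species: str) -> str:
    """Return the term used for a recurrent malaria episode."""
    if not parasites_cleared_by_treatment:
        return "recrudescence"   # the original blood-stage infection persisted
    if new_mosquito_bite:
        return "reinfection"     # a fresh infection from a separate bite
    if species in ("P. vivax", "P. ovale"):
        return "relapse"         # dormant liver hypnozoites reactivated
    return "unexplained"         # other species have no hypnozoite stage

print(classify_recurrence(True, False, "P. vivax"))   # relapse
```

Note that recrudescence and reinfection can occur with any species, whereas relapse is specific to P. vivax and P. ovale, the only species that form hypnozoites.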
Describing a case of malaria as
cured by observing the disappearance of parasites
from the bloodstream can,
therefore, be deceptive. The
longest incubation period
reported for a P. vivax infection is 30 years. [20] Approximately one in five of
P. vivax malaria cases in temperate areas involve overwintering by hypnozoites (i.e., relapses
begin the year after the mosquito bite).[35] Pathogenesis Further information: Plasmodium falciparum
biology The life cycle of malaria parasites in the human body. A mosquito infects a person by taking a blood meal. First, sporozoites enter the bloodstream, and migrate to the liver. They infect liver cells (hepatocytes), where they multiply into merozoites, rupture the liver cells, and escape back into the bloodstream. Then, the merozoites infect red blood cells, where they develop into ring forms, trophozoites and schizonts which in turn produce further merozoites.
Sexual forms (gametocytes) are also produced, which, if taken up by a mosquito, will
infect the insect and continue the life cycle. Malaria develops via two
phases: an exoerythrocytic
and an erythrocytic phase. The
exoerythrocytic phase
involves infection of the
hepatic system, or liver, whereas the erythrocytic
phase involves infection of
the erythrocytes, or red blood
cells. When an infected
mosquito pierces a person's
skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream
and migrate to the liver . Within minutes of being
introduced into the human
host, the sporozoites infect hepatocytes , multiplying asexually and
asymptomatically for a period of 8–30 days. [36] Once in the liver, these organisms
differentiate to yield
thousands of merozoites, which, following rupture of
their host cells, escape into the
blood and infect red blood cells, thus beginning the erythrocytic stage of the life cycle. [36] The parasite escapes from the liver undetected by
wrapping itself in the cell
membrane of the infected host liver cell. [37] Within the red blood cells, the
parasites multiply further,
again asexually, periodically
breaking out of their hosts to
invade fresh red blood cells.
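This repeated rupture-and-reinvasion can be illustrated with a back-of-the-envelope sketch; the starting population, multiplication factor, and 48-hour cycle length below are illustrative round numbers, not measured values:

```python
# Rough sketch of synchronous blood-stage amplification in malaria.
# All figures are illustrative round numbers: real merozoite yields and
# cycle lengths vary by Plasmodium species.

parasites = 1_000            # hypothetical starting blood-stage population
merozoites_per_rupture = 10  # assumed viable merozoites per burst red cell

for cycle in range(1, 5):
    parasites *= merozoites_per_rupture  # each wave seeds fresh red cells
    print(f"after cycle {cycle} (~{cycle * 48} h): {parasites:,} parasites")
```

Because the bursts are roughly synchronized, each jump in the count corresponds to one simultaneous wave of merozoite release, which is why the accompanying fevers arrive in waves rather than continuously.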
Several such amplification cycles occur. Thus, classical
descriptions of waves of
fever arise from simultaneous
waves of merozoites escaping
and infecting red blood cells. Some P. vivax and P. ovale
sporozoites do not
immediately develop into
exoerythrocytic-phase
merozoites, but instead
produce hypnozoites that remain dormant for periods
ranging from several months
(6–12 months is typical) to as
long as three years. After a
period of dormancy, they
reactivate and produce merozoites. Hypnozoites are
responsible for long
incubation and late relapses in
these two species of malaria. [38] The parasite is relatively
protected from attack by the
body's immune system because for most of its human
life cycle it resides within the
liver and blood cells and is
relatively invisible to immune
surveillance. However,
circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P.
falciparum parasite displays
adhesive proteins on the surface of the infected blood
cells, causing the blood cells to
stick to the walls of small
blood vessels, thereby
sequestering the parasite from
passage through the general circulation and the spleen.[39] This "stickiness" is the main
factor giving rise to hemorrhagic complications of malaria. High endothelial venules (the smallest branches of the circulatory system) can
be blocked by the attachment
of masses of these infected
red blood cells. The blockage of these vessels causes symptoms such as those seen in placental and cerebral malaria. In cerebral malaria the sequestered red blood cells can breach the blood-brain barrier, possibly leading to coma.[40]
[Figure: micrograph of a placenta from a stillbirth due to maternal malaria. H&E stain. Red blood cells are anuclear; blue/black staining within bright red structures (red blood cells) indicates foreign nuclei from the parasites.]
Although the red blood cell
surface adhesive proteins
(called PfEMP1, for
Plasmodium falciparum
erythrocyte membrane
protein 1) are exposed to the immune system, they do not
serve as good immune
targets, because of their
extreme diversity; there are
at least 60 variations of the
protein within a single parasite and even more
variants within whole parasite populations.[39] The parasite switches between a
broad repertoire of PfEMP1
surface proteins, thus staying
one step ahead of the
pursuing immune system. Some merozoites turn into
male and female gametocytes. If a mosquito pierces the skin
of an infected person, it
potentially picks up
gametocytes within the
blood. Fertilization and sexual
recombination of the parasite occurs in the mosquito's gut.
(Because sexual reproduction
of the parasite defines the definitive host , the mosquito is the definitive host, whereas
humans are the intermediate host.) New sporozoites develop and travel to the
mosquito's salivary gland,
completing the cycle. Pregnant
women are especially
attractive to the mosquitoes, [41] and malaria in pregnant women is an important cause of stillbirths, infant mortality and low birth weight, [42] particularly in P. falciparum
infection, but also in other
species infection, such as P. vivax.