| id (int32, 0–100k) | text (stringlengths 21–3.54k) | source (stringlengths 1–124) | similarity (float32, 0.78–0.88) |
|---|---|---|---|
100
|
Improvements in the nutritional value of forage crops through analytical chemistry and rumen fermentation technology have been recorded since 1960; these techniques gave breeders the ability to screen thousands of samples in a short time, so high-performing hybrids could be identified more quickly. The genetic improvement was mainly in in vitro dry matter digestibility (IVDMD), with increases of 0.7–2.5%; a 1% increase in IVDMD alone was associated with a 3.2% increase in daily gains in beef cattle (Bos taurus). This improvement indicates that plant breeding is an essential tool in gearing future agriculture to perform at a more advanced level.
|
Plant breeding
| 0.857804
|
101
|
Cereal Genomics. Methods in Molecular Biology. Vol.
|
Plant breeding
| 0.857804
|
102
|
(ISBN 9781439802427), CRC Press, Boca Raton, FL, USA, pp 584 Schlegel, Rolf (2007) Concise Encyclopedia of Crop Improvement: Institutions, Persons, Theories, Methods, and Histories (ISBN 9781560221463), CRC Press, Boca Raton, FL, USA, pp 423 Schlegel, Rolf (2014) Dictionary of Plant Breeding, 2nd ed., (ISBN 978-1439802427), CRC Press, Boca Raton, Taylor & Francis Group, Inc., New York, USA, pp 584 Schouten, Henk J.; Krens, Frans A.; Jacobsen, Evert (2006). "Do cisgenic plants warrant less stringent oversight?". Nature Biotechnology.
|
Plant breeding
| 0.857804
|
103
|
Plant breeding is the science of changing the traits of plants in order to produce desired characteristics. It has been used to improve the quality of nutrition in products for humans and animals. The goals of plant breeding are to produce crop varieties that boast unique and superior traits for a variety of applications. The most frequently addressed agricultural traits are those related to biotic and abiotic stress tolerance, grain or biomass yield, end-use quality characteristics such as taste or the concentrations of specific biological molecules (proteins, sugars, lipids, vitamins, fibers) and ease of processing (harvesting, milling, baking, malting, blending, etc.). Plant breeding can be performed through many different techniques ranging from simply selecting plants with desirable characteristics for propagation, to methods that make use of knowledge of genetics and chromosomes, to more complex molecular techniques.
|
Plant breeding
| 0.857804
|
104
|
In molecular biology, an actomyosin contractile ring is a prominent structure during cytokinesis. It forms perpendicular to the axis of the spindle apparatus towards the end of telophase, when the sister chromatids have been separated to opposite sides of the spindle and the daughter nuclei are forming (Figure 1). The actomyosin ring follows an orderly sequence of events: identification of the active division site, formation of the ring, constriction of the ring, and disassembly of the ring. It is composed of actin and myosin II bundles, hence the term actomyosin.
|
Actomyosin ring
| 0.857732
|
105
|
In mathematics, a quadratic-linear algebra is an algebra over a field with a presentation such that all relations are sums of monomials of degrees 1 or 2 in the generators. They were introduced by Polishchuk and Positselski (2005, p. 101). An example is the universal enveloping algebra of a Lie algebra, with generators a basis of the Lie algebra and relations of the form XY − YX − [X, Y] = 0.
|
Quadratic-linear algebra
| 0.857728
|
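To make the universal enveloping algebra example in the row above concrete, its standard presentation can be written out explicitly (standard textbook notation, added here for illustration):

```latex
% U(g) as a quadratic-linear algebra: generators X_1, ..., X_n form a basis
% of the Lie algebra g; each relation mixes a degree-2 and a degree-1 part.
U(\mathfrak{g}) \;=\; T(\mathfrak{g}) \,\big/\, \bigl( X_i X_j - X_j X_i - [X_i, X_j] \;:\; 1 \le i < j \le n \bigr)
```

Here $X_i X_j - X_j X_i$ has degree 2, while the bracket $[X_i, X_j]$, being a linear combination of the generators, has degree 1, so every relation is a sum of monomials of degrees 1 or 2, as required.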
106
|
In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values. Although search engines use search algorithms, they belong to the study of information retrieval, not algorithmics.
|
Search algorithms
| 0.857714
|
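As a concrete illustration of retrieving information stored within a data structure, here is a minimal binary-search sketch over a sorted list (the function name and sample data are illustrative, not part of the source row):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if it is absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -> -1
```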
107
|
Specific applications of search algorithms include:
- Problems in combinatorial optimization, such as:
  - The vehicle routing problem, a form of shortest path problem
  - The knapsack problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible
  - The nurse scheduling problem
- Problems in constraint satisfaction, such as:
  - The map coloring problem
  - Filling in a sudoku or crossword puzzle
- In game theory and especially combinatorial game theory, choosing the best move to make next (such as with the minmax algorithm)
- Finding a combination or password from the whole set of possibilities
- Factoring an integer (an important problem in cryptography)
- Optimizing an industrial process, such as a chemical reaction, by changing the parameters of the process (like temperature, pressure, and pH)
- Retrieving a record from a database
- Finding the maximum or minimum value in a list or array
- Checking to see if a given value is present in a set of values
|
Search algorithms
| 0.857714
|
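The knapsack problem described in the list above is a standard optimization-search example; the following is a minimal dynamic-programming sketch of the 0/1 variant (each item used at most once), with illustrative data:

```python
def knapsack(items, capacity):
    """items: list of (weight, value) pairs. Returns the maximum total value
    achievable with total weight <= capacity, taking each item at most once."""
    best = [0] * (capacity + 1)                       # best[w] = max value within weight budget w
    for weight, value in items:
        for w in range(capacity, weight - 1, -1):     # descend so each item is counted at most once
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], capacity=5))   # -> 8
```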
108
|
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, X-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance.
|
Protein chemistry
| 0.85758
|
109
|
Molecular biophysics is a rapidly evolving interdisciplinary area of research that combines concepts in physics, chemistry, engineering, mathematics and biology. It seeks to understand biomolecular systems and explain biological function in terms of molecular structure, structural organization, and dynamic behaviour at various levels of complexity (from single molecules to supramolecular structures, viruses and small living systems). This discipline covers topics such as the measurement of molecular forces, molecular associations, allosteric interactions, Brownian motion, and cable theory. Additional areas of study can be found on Outline of Biophysics. The discipline has required development of specialized equipment and procedures capable of imaging and manipulating minute living structures, as well as novel experimental approaches.
|
Protein chemistry
| 0.85758
|
110
|
Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, ecological, behavioral, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science and evolution. Computational biology has become an important part of developing emerging technologies for the field of biology. Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies.
|
Protein chemistry
| 0.85758
|
111
|
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its folding and its secondary and tertiary structure from its primary structure. Structure prediction is fundamentally different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine, in drug design, biotechnology and in the design of novel enzymes. Every two years, the performance of current methods is assessed in the CASP experiment (Critical Assessment of Techniques for Protein Structure Prediction). A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D.
|
Protein chemistry
| 0.85758
|
112
|
For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities.
|
Protein chemistry
| 0.85758
|
113
|
The table below summarizes how algebraic expressions compare with several other types of mathematical expressions by the type of elements they may contain, according to common but not universal conventions. A rational algebraic expression (or rational expression) is an algebraic expression that can be written as a quotient of polynomials, such as x² + 4x + 4. An irrational algebraic expression is one that is not rational, such as √x + 4.
|
Algebraic expression
| 0.857402
|
114
|
Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations. A rational expression is an expression that may be rewritten to a rational fraction by using the properties of the arithmetic operations (commutative properties and associative properties of addition and multiplication, distributive property and rules for the operations on the fractions). In other words, a rational expression is an expression which may be constructed from the variables and the constants by using only the four operations of arithmetic.
|
Algebraic expression
| 0.857401
|
115
|
In mathematics, an algebraic expression is an expression built up from constant algebraic numbers, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by an exponent that is a rational number). For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: $\sqrt{\frac{1-x^{2}}{1+x^{2}}}$. An algebraic equation is an equation involving only algebraic expressions. By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations.
|
Algebraic expression
| 0.857401
|
116
|
In 1658, in the first edition of The New World of English Words, it says: Algorithme, (a word compounded of Arabick and Spanish,) the art of reckoning by Cyphers. In 1706, in the sixth edition of The New World of English Words, it says: Algorithm, the Art of computing or reckoning by numbers, which contains the five principle Rules of Arithmetick, viz. Numeration, Addition, Subtraction, Multiplication and Division; to which may be added Extraction of Roots: It is also call'd Logistica Numeralis. Algorism, the practical Operation in the several Parts of Specious Arithmetick or Algebra; sometimes it is taken for the Practice of Common Arithmetick by the ten Numeral Figures. In 1751, in the Young Algebraist's Companion, Daniel Fenning contrasts the terms algorism and algorithm as follows: Algorithm signifies the first Principles, and Algorism the practical Part, or knowing how to put the Algorithm in Practice. Since at least 1811, the term algorithm is attested to mean a "step-by-step procedure" in English. In 1842, in the Dictionary of Science, Literature and Art, it says: ALGORITHM, signifies the art of computing in reference to some particular subject, or in some particular way; as the algorithm of numbers; the algorithm of the differential calculus.
|
Algorithmic problem
| 0.857278
|
117
|
Total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of the items as well as their value. Quantum algorithm: quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computing such as quantum superposition or quantum entanglement.
|
Algorithmic problem
| 0.857278
|
118
|
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), achieving automation eventually.
|
Algorithmic problem
| 0.857278
|
119
|
The most common type of equation is a polynomial equation (commonly also called an algebraic equation), in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms.
|
Mathematical equations
| 0.85657
|
120
|
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
|
Mathematical equations
| 0.85657
|
121
|
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example, $x = \cos t$, $y = \sin t$ are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve. The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
|
Mathematical equations
| 0.85657
|
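For the unit-circle example in the row above, eliminating the parameter recovers the implicit equation of the curve (a standard one-line check, included for illustration):

```latex
x^2 + y^2 = \cos^2 t + \sin^2 t = 1
```

so every point $(\cos t, \sin t)$ lies on the unit circle, and as $t$ runs over $[0, 2\pi)$ the circle is traced exactly once.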
122
|
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
|
Mathematical equations
| 0.85657
|
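As a minimal sketch of the numerical approximation mentioned in the row above, the forward Euler method replaces the derivative with a finite difference; the test equation $y' = -y$, the step count, and the function names are illustrative choices, not from the source text:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for the initial value problem y'(t) = f(t, y), y(t0) = y0."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # y_{n+1} = y_n + h * f(t_n, y_n)
        t += h
    return y

# y' = -y with y(0) = 1 has the exact solution y(t) = exp(-t).
approx = euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, steps=1000)
print(round(approx, 5), round(math.exp(-1.0), 5))   # 0.3677 vs 0.36788
```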
123
|
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
|
Mathematical equations
| 0.85657
|
124
|
In the United Kingdom, the original boxes (prior to the introduction of the Happy Meal-sized nugget boxes) were of 6, 9, and 20 nuggets. According to Schur's theorem, since 6, 9, and 20 are (setwise) relatively prime, any sufficiently large integer can be expressed as a (non-negative, integer) linear combination of these three. Therefore, there exists a largest non-McNugget number, and all integers larger than it are McNugget numbers.
|
Coin problem
| 0.856384
|
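A brute-force sketch of the claim in the row above: with box sizes 6, 9, and 20, the largest integer that cannot be written as a non-negative combination can be found by direct search (the search bound of 200 is an arbitrary safe cutoff chosen for illustration):

```python
def is_mcnugget(n, boxes=(6, 9, 20)):
    """True if n is a non-negative integer combination of the box sizes."""
    a, b, c = boxes
    return any((n - a * i - b * j) % c == 0
               for i in range(n // a + 1)
               for j in range((n - a * i) // b + 1))

non_mcnugget = [n for n in range(1, 200) if not is_mcnugget(n)]
print(non_mcnugget[-1])                              # 43, the largest non-McNugget number
print(all(is_mcnugget(n) for n in range(44, 200)))   # True: everything above 43 is reachable
```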
125
|
One special case of the coin problem is sometimes also referred to as the McNugget numbers. The McNuggets version of the coin problem was introduced by Henri Picciotto, who placed it as a puzzle in Games Magazine in 1987, and included it in his algebra textbook co-authored with Anita Wah. Picciotto thought of the application in the 1980s while dining with his son at McDonald's, working out the problem on a napkin. A McNugget number is the total number of McDonald's Chicken McNuggets in any number of boxes.
|
Coin problem
| 0.856384
|
126
|
Reasons for this may be that silicon is less versatile than carbon in forming compounds, that the compounds formed by silicon are unstable, and that it blocks the flow of heat. Even so, biogenic silica is used by some Earth life, such as the silicate skeletal structure of diatoms. According to the clay hypothesis of A. G. Cairns-Smith, silicate minerals in water played a crucial role in abiogenesis: they replicated their crystal structures, interacted with carbon compounds, and were the precursors of carbon-based life. Although not observed in nature, carbon–silicon bonds have been added to biochemistry by using directed evolution (artificial selection). A heme-containing cytochrome c protein from Rhodothermus marinus has been engineered using directed evolution to catalyze the formation of new carbon–silicon bonds between hydrosilanes and diazo compounds. Silicon compounds may possibly be biologically useful under temperatures or pressures different from the surface of a terrestrial planet, either in conjunction with or in a role less directly analogous to carbon. Polysilanols, the silicon compounds corresponding to sugars, are soluble in liquid nitrogen, suggesting that they could play a role in very-low-temperature biochemistry.
|
Hypothetical types of biochemistry
| 0.856269
|
127
|
This may suggest a greater variety of complex carbon compounds throughout the cosmos, providing less of a foundation on which to build silicon-based biologies, at least under the conditions prevalent on the surface of planets. Also, even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (the relative abundance of silicon to carbon in Earth's crust is roughly 925:1), terrestrial life is carbon-based. The fact that carbon is used instead of silicon may be evidence that silicon is poorly suited for biochemistry on Earth-like planets.
|
Hypothetical types of biochemistry
| 0.856269
|
128
|
Silicon, on the other hand, interacts with very few other types of atoms. Moreover, where it does interact with other atoms, silicon creates molecules that have been described as "monotonous compared with the combinatorial universe of organic macromolecules". This is because silicon atoms are much bigger, having a larger mass and atomic radius, and so have difficulty forming double bonds (the double-bonded carbon is part of the carbonyl group, a fundamental motif of carbon-based bio-organic chemistry).
|
Hypothetical types of biochemistry
| 0.856269
|
129
|
Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range over which it is liquid. However, its melting point is 1,600 to 1,725 °C (2,912 to 3,137 °F), so it would be impossible to make organic compounds at that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium.
|
Hypothetical types of biochemistry
| 0.856269
|
130
|
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants. Closely related fields include plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology. Fundamental processes such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy and stomata function and transpiration, both parts of plant water relations, are studied by plant physiologists.
|
Plant Physiology
| 0.856078
|
131
|
The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant. Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
|
Plant Physiology
| 0.856078
|
132
|
Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research. First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology.
|
Plant Physiology
| 0.856078
|
133
|
Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology. Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind.
|
Plant Physiology
| 0.856078
|
134
|
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research.
|
Plant Physiology
| 0.856078
|
135
|
There may be uncertainty about the shape of a probability distribution because the sample size of the empirical data characterizing it is small. Several methods in traditional statistics have been proposed to account for this sampling uncertainty about the distribution shape, including Kolmogorov–Smirnov and similar confidence bands, which are distribution-free in the sense that they make no assumption about the shape of the underlying distribution. There are related confidence-band methods that do make assumptions about the shape or family of the underlying distribution, which can often result in tighter confidence bands. Constructing confidence bands requires one to select the probability defining the confidence level, which usually must be less than 100% for the result to be non-vacuous.
|
Probability box
| 0.855952
|
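One common distribution-free construction of the kind described above uses the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality to put a band around the empirical CDF; the sketch below is illustrative (sample data, confidence level, and helper names are our choices, and cited works may use different constructions):

```python
import math
import random

def dkw_band(sample, alpha=0.05):
    """Return (xs, lower, upper): a (1 - alpha) confidence band for the CDF,
    centered on the empirical CDF, with half-width from the DKW inequality."""
    n = len(sample)
    eps = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))    # DKW half-width
    xs = sorted(sample)
    ecdf = [(i + 1) / n for i in range(n)]                # empirical CDF at the sorted points
    lower = [max(0.0, p - eps) for p in ecdf]
    upper = [min(1.0, p + eps) for p in ecdf]
    return xs, lower, upper

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
xs, lo, hi = dkw_band(data)
print(round(math.sqrt(math.log(2.0 / 0.05) / (2.0 * len(data))), 3))   # half-width ~0.096
```

Tighter bands are possible when the distribution family is assumed, which is the trade-off the row above describes.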
136
|
Baudrit, C., and D. Dubois (2006). Practical representations of incomplete probabilistic knowledge. Computational Statistics & Data Analysis 51: 86–108. Baudrit, C., D. Dubois, D. Guyonnet (2006).
|
Probability box
| 0.855952
|
137
|
P-boxes and probability bounds analysis have been used in many applications spanning many disciplines in engineering and environmental science, including:
- Engineering design
- Expert elicitation
- Analysis of species sensitivity distributions
- Sensitivity analysis in aerospace engineering of the buckling load of the frontskirt of the Ariane 5 launcher
- ODE models of chemical reactor dynamics
- Pharmacokinetic variability of inhaled VOCs
- Groundwater modeling
- Bounding failure probability for series systems
- Heavy metal contamination in soil at an ironworks brownfield
- Uncertainty propagation for salinity risk models
- Power supply system safety assessment
- Contaminated land risk assessment
- Engineered systems for drinking water treatment
- Computing soil screening levels
- Human health and ecological risk analysis by the U.S. EPA of PCB contamination at the Housatonic River Superfund site
- Environmental assessment for the Calcasieu Estuary Superfund site
- Aerospace engineering for supersonic nozzle thrust
- Verification and validation in scientific computation for engineering problems
- Toxicity to small mammals of environmental mercury contamination
- Modeling travel time of pollution in groundwater
- Reliability analysis
- Endangered species assessment for reintroduction of Leadbeater's possum
- Exposure of insectivorous birds to an agricultural pesticide
- Climate change projections
- Waiting time in queuing systems
- Extinction risk analysis for spotted owl on the Olympic Peninsula
- Biosecurity against introduction of invasive species or agricultural pests
- Finite-element structural analysis
- Cost estimates
- Nuclear stockpile certification
- Fracking risks to water pollution
|
Probability box
| 0.855952
|
138
|
The hidden subgroup problem (HSP) is a topic of research in mathematics and theoretical computer science. The framework captures problems such as factoring, discrete logarithm, graph isomorphism, and the shortest vector problem. This makes it especially important in the theory of quantum computing because Shor's quantum algorithm for factoring is an instance of the hidden subgroup problem for finite Abelian groups, while the other problems correspond to finite groups that are not Abelian.
|
Hidden subgroup problem
| 0.855947
|
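For reference, the standard formulation of the hidden subgroup problem that these rows discuss can be stated as follows (textbook phrasing, added for clarity):

```latex
% Hidden subgroup problem (HSP): given a group G, a finite set X, and oracle
% access to f : G -> X that is constant on the cosets of an unknown subgroup
% H <= G and distinct on different cosets, i.e.
f(g_1) = f(g_2) \iff g_1 H = g_2 H ,
% the task is to find a generating set for H using as few evaluations of f as possible.
```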
139
|
The hidden subgroup problem is especially important in the theory of quantum computing for the following reasons. Shor's quantum algorithm for factoring and discrete logarithm (as well as several of its extensions) relies on the ability of quantum computers to solve the HSP for finite Abelian groups. The existence of efficient quantum algorithms for HSPs for certain non-Abelian groups would imply efficient quantum algorithms for two major problems: the graph isomorphism problem and certain shortest vector problems (SVPs) in lattices. More precisely, an efficient quantum algorithm for the HSP for the symmetric group would give a quantum algorithm for graph isomorphism. An efficient quantum algorithm for the HSP for the dihedral group would give a quantum algorithm for the $\operatorname{poly}(n)$-unique SVP.
|
Hidden subgroup problem
| 0.855947
|
140
|
Many algorithms where quantum speedups occur in quantum computing are instances of the hidden subgroup problem. The following list outlines important instances of the HSP, and whether or not they are solvable.
|
Hidden subgroup problem
| 0.855947
|
141
|
Discrete probability theory deals with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walk, and tossing coins. Classical definition: initially the probability of an event to occur was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by $\tfrac{3}{6} = \tfrac{1}{2}$, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing.
|
Mathematical probability
| 0.855788
|
142
|
Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem.
|
Mathematical probability
| 0.855788
|
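A minimal simulation sketch of the law of large numbers described in the row above, using a fair-coin model (the seed and sample sizes are arbitrary illustrative choices):

```python
import random

random.seed(1)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)   # the proportion of heads settles toward 0.5 as n grows
```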
143
|
$\sigma^{2} > 0$. Then the sequence of random variables $Z_{n} = \frac{\sum_{i=1}^{n}(X_{i}-\mu)}{\sigma\sqrt{n}}$ converges in distribution to a standard normal random variable. For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem. For example, the distributions with finite first, second, and third moment from the exponential family; on the other hand, for some random variables of the heavy tail and fat tail variety, it works very slowly or may not work at all: in such cases one may use the Generalized Central Limit Theorem (GCLT).
|
Mathematical probability
| 0.855788
|
144
|
The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics." The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let $X_{1}, X_{2}, \dots$ be independent random variables with mean $\mu$ and variance $\sigma^{2} > 0$.
|
Mathematical probability
| 0.855788
|
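A small sketch of the standardization $Z_n$ defined in the two rows above, using exponential random variables (a deliberately skewed distribution with mean 1 and variance 1) to show the drift toward a standard normal; sample sizes and counts are illustrative:

```python
import math
import random

random.seed(2)

def z_n(n, mu=1.0, sigma=1.0):
    """One draw of Z_n = (sum_i X_i - n*mu) / (sigma * sqrt(n)) for Exp(1) variables."""
    xs = [random.expovariate(1.0) for _ in range(n)]
    return (sum(xs) - n * mu) / (sigma * math.sqrt(n))

for n in (5, 50, 500):
    draws = [z_n(n) for _ in range(10_000)]
    within_one = sum(abs(z) <= 1.0 for z in draws) / len(draws)
    print(n, round(within_one, 3))   # tends toward ~0.683, the standard-normal P(|Z| <= 1)
```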
145
|
In probability theory, there are several notions of convergence for random variables. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions.
- Weak convergence: a sequence of random variables $X_{1}, X_{2}, \dots$ converges weakly to the random variable $X$ if their respective CDFs $F_{1}, F_{2}, \dots$ converge to the CDF $F$ of $X$ wherever $F$ is continuous. Weak convergence is also called convergence in distribution. Most common shorthand notation: $X_{n} \xrightarrow{\mathcal{D}} X$.
- Convergence in probability: the sequence of random variables $X_{1}, X_{2}, \dots$ is said to converge towards the random variable $X$ in probability if $\lim_{n\to\infty} P\left(\left|X_{n}-X\right|\geq \varepsilon\right) = 0$ for every $\varepsilon > 0$. Most common shorthand notation: $X_{n} \xrightarrow{P} X$.
- Strong convergence: the sequence of random variables $X_{1}, X_{2}, \dots$ is said to converge towards the random variable $X$ strongly if $P\left(\lim_{n\to\infty} X_{n} = X\right) = 1$.
|
Mathematical probability
| 0.855788
|
146
|
Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more.
|
Mathematical probability
| 0.855788
|
147
|
Beckmann's version of this story has been widely copied in several books and internet sites, usually without his reservations and sometimes with fanciful embellishments. Several attempts to find corroborating evidence for this story, or even for the existence of Valmes, have failed. The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher-order polynomials would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
|
Fourth-degree equation
| 0.855622
|
148
|
Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna. The Soviet historian I. Y. Depman claimed that even earlier, in 1486, Spanish mathematician Valmes was burned at the stake for claiming to have solved the quartic equation. Inquisitor General Tomás de Torquemada allegedly told Valmes that it was the will of God that such a solution be inaccessible to human understanding. However, Petr Beckmann, who popularized this story of Depman in the West, said that it was unreliable and hinted that it may have been invented as Soviet antireligious propaganda.
|
Fourth-degree equation
| 0.855622
|
149
|
Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought. Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers, and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. In modern mathematics, number systems are considered important special examples of more general algebraic structures such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance.
|
Numerical value
| 0.855394
|
150
|
Their study or usage is called arithmetic, a term which may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world. For example, in Western society, the number 13 is often regarded as unlucky, and "a million" may signify "a lot" rather than an exact quantity.
|
Numerical value
| 0.855394
|
151
|
The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered. That is, there is no consistent meaning assignable to saying that i is greater than 1, nor is there any meaning in saying that i is less than 1. In technical terms, the complex numbers lack a total order that is compatible with field operations.
|
Numerical value
| 0.855394
|
152
|
These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling. Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, -60°) values.
|
Protein folding problem
| 0.855157
|
153
|
Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and the self-consistent mean field methods. The side chain conformations with low energy are usually determined on the rigid polypeptide backbone and using a set of discrete side chain conformations known as "rotamers." The methods attempt to identify the set of rotamers that minimize the model's overall energy.
|
Protein folding problem
| 0.855157
|
154
|
Ab initio or de novo protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project and Rosetta@Home).
|
Protein folding problem
| 0.855157
|
155
|
Secondary structure prediction is a set of techniques in bioinformatics that aim to predict the local secondary structures of proteins based only on knowledge of their amino acid sequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likely alpha helices, beta strands (often noted as "extended" conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm (or similar, e.g. STRIDE) applied to the crystal structure of the protein. Specialized algorithms have been developed for the detection of specific well-defined patterns such as transmembrane helices and coiled coils in proteins. The best modern methods of secondary structure prediction in proteins were claimed to reach 80% accuracy after using machine learning and sequence alignments; this high accuracy allows the use of the predictions as a feature to improve fold recognition and ab initio protein structure prediction, classification of structural motifs, and refinement of sequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weekly benchmarks such as LiveBench and EVA.
|
Protein folding problem
| 0.855157
|
156
|
These groups can therefore interact in the protein structure. Proteins consist mostly of 20 different types of L-α-amino acids (the proteinogenic amino acids). These can be classified according to the chemistry of the side chain, which also plays an important structural role.
|
Protein folding problem
| 0.855157
|
157
|
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.
|
Boolean algebra (structure)
| 0.855125
|
158
|
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and"), ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold. Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties). A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.)
|
Boolean algebra (structure)
| 0.855125
|
159
|
As a counter-example, for the non-square-free n = 60, the greatest common divisor of 30 and its complement 2 would be 2, while it should be the bottom element 1. Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X which are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection). If R is an arbitrary ring, then its set of central idempotents, which is the set $\{e \in R : e^{2} = e \text{ and } ex = xe \text{ for all } x \in R\}$, becomes a Boolean algebra when its operations are defined by $e \vee f := e + f - ef$ and $e \wedge f := ef$.
|
Boolean algebra (structure)
| 0.855125
|
160
|
This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top element of this Boolean algebra are the natural number 1 and n, respectively. The complement of a is given by n/a.
|
Boolean algebra (structure)
| 0.855125
|
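A quick check of the divisor-lattice example above for the square-free case n = 30, with meet = gcd, join = lcm, and complement a ↦ n/a (helper names are ours):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

n = 30   # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

for a in divisors:
    comp = n // a                 # the complement of a is n/a
    assert gcd(a, comp) == 1      # a ∧ ¬a equals the bottom element 1
    assert lcm(a, comp) == n      # a ∨ ¬a equals the top element n
print("complement laws hold for every divisor of", n)
# For a non-square-free n such as 60, gcd(30, 60 // 30) == 2, so the check fails,
# matching the n = 60 counter-example quoted in the nearby row above.
```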
161
|
A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Given any linearly ordered set L with a least element, the interval algebra is the smallest algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice.
|
Boolean algebra (structure)
| 0.855125
|
162
|
Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators.
|
Boolean algebra (structure)
| 0.855125
|
163
|
This can for example be used to show that the following laws (consensus theorems) are generally valid in all Boolean algebras: (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c), and (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c). The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection); the smallest element 0 is the empty set and the largest element 1 is the set S itself. After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms. The set A of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on A.
|
Boolean algebra (structure)
| 0.855125
|
164
|
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the usual rules for ∧, ∨ and ¬ on those two elements. It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent. The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input-output behavior. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables).
|
Boolean algebra (structure)
| 0.855125
|
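The row above notes that an identity holds in every Boolean algebra if and only if it holds in the two-element one, and that this can be checked by brute force over all variable assignments; a minimal sketch doing exactly that for the consensus law quoted a few rows earlier:

```python
from itertools import product

def consensus_holds(a, b, c):
    """Check (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) == (a ∧ b) ∨ (¬a ∧ c) for one assignment."""
    lhs = (a and b) or ((not a) and c) or (b and c)
    rhs = (a and b) or ((not a) and c)
    return lhs == rhs

# Exhaustive check over the two-element Boolean algebra {False, True}.
print(all(consensus_holds(a, b, c) for a, b, c in product([False, True], repeat=3)))   # True
```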
165
|
A filter of the Boolean algebra A is a subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement that every filter in a Boolean algebra can be extended to an ultrafilter is called the Ultrafilter Theorem; it cannot be proven in ZF, if ZF is consistent. Within ZF, it is strictly weaker than the axiom of choice. The Ultrafilter Theorem has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
|
Boolean algebra (structure)
| 0.855125
|
166
|
An ideal of the Boolean algebra A is a subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have that a ∧ −a = 0 ∈ I, and then a ∈ I or −a ∈ I for every a ∈ A if I is prime. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and −a ∉ I, then I ∪ {a} or I ∪ {−a} is properly contained in another ideal J. Hence such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal and maximal ideal in the Boolean ring A. The dual of an ideal is a filter.
|
Boolean algebra (structure)
| 0.855125
|
167
|
As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces. The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
|
Discrete math
| 0.854955
|
168
|
Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals.
|
Discrete math
| 0.854955
|
169
|
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right.
|
Discrete math
| 0.854955
|
170
|
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures.
|
Discrete math
| 0.854955
|
171
|
They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
|
Discrete math
| 0.854955
|
172
|
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, not unlike precalculus in this respect. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
|
Discrete math
| 0.854955
|
173
|
Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
|
Discrete math
| 0.854955
|
174
|
However, there is no exact definition of the term "discrete mathematics". The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business. Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits.
|
Discrete math
| 0.854955
|
175
|
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers).
|
Discrete math
| 0.854955
|
176
|
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
|
Discrete math
| 0.854955
|
177
|
Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
|
Plant biology
| 0.854866
|
178
|
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed.
|
Plant biology
| 0.854866
|
179
|
Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants. Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research.
|
Plant biology
| 0.854866
|
180
|
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology.
|
Plant biology
| 0.854866
|
181
|
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
|
Plant biology
| 0.854866
|
182
|
The single-celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga, Cyanidioschyzon merolae, has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss, Physcomitrella patens, are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
|
Plant biology
| 0.854866
|
183
|
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants.
|
Plant biology
| 0.854866
|
184
|
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
|
Plant biology
| 0.854866
|
185
|
The finding in 1939 that plant callus could be maintained in culture containing IAA, and the subsequent observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine.
|
Plant biology
| 0.854866
|
186
|
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
|
Plant biology
| 0.854866
|
187
|
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology and for the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet-B radiation (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
|
Plant biology
| 0.854866
|
188
|
These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information.
|
Biochemistry
| 0.854551
|
189
|
In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations.
|
Biochemistry
| 0.854551
|
190
|
In 1877, Felix Hoppe-Seyler used the term biochemistry (Biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry), where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg, however, is often cited as having coined the word in 1903, while some credit it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life.
|
Biochemistry
| 0.854551
|
191
|
Some might also point, as its beginning, to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th-century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry.
|
Biochemistry
| 0.854551
|
192
|
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry.
|
Biochemistry
| 0.854551
|
193
|
The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble into larger complexes, often needed for biological activity.
|
Biochemistry
| 0.854551
|
194
|
However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem (a minimal illustration of this idea follows this entry). Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique for determining the internal β-sheet / α-helical composition of proteins.
|
Structural proteins
| 0.854386
|
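The distance geometry problem mentioned in the entry above can be sketched minimally as follows, under the simplifying assumption that a complete, noise-free matrix of pairwise distances is available: classical multidimensional scaling then recovers a set of 3-D coordinates consistent with those distances. This is only an illustration of the idea, not an NMR structure-determination method; all function and variable names are invented for the example.

```python
# Minimal sketch: recover 3-D coordinates from a complete pairwise-distance
# matrix via classical multidimensional scaling. Illustrative only; real NMR
# data give sparse, noisy distance bounds and need far more elaborate solvers.
import numpy as np

def embed_from_distances(D, dim=3):
    """Return n x dim coordinates whose pairwise distances reproduce D
    (up to an arbitrary rotation, reflection and translation)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)       # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:dim]      # keep the largest components
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale

# Toy check: four "atoms" at known positions, rebuilt from their distances alone.
X_true = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 2.5]])
D = np.linalg.norm(X_true[:, None, :] - X_true[None, :, :], axis=-1)
X_rec = embed_from_distances(D)
D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
assert np.allclose(D_rec, D)                   # pairwise distances are reproduced
```

The embedding works exactly here only because the toy distance matrix is complete and error-free; with the sparse bounds obtained from NMR measurements, iterative embedding and refinement procedures are used instead.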
195
|
A key question in molecular biology is how proteins evolve, i.e. how mutations (or rather, changes in amino acid sequence) can lead to new structures and functions. Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). Dramatic consequences of mutations can be avoided when a gene is duplicated first, leaving one copy to preserve the original function while the other is free to mutate.
|
Structural proteins
| 0.854386
|
196
|
Circuit theory deals with electrical networks where the fields are largely confined around current-carrying conductors. In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode.
|
Introduction to electromagnetism
| 0.854357
|
197
|
Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale and a simpler classical treatment can be applied. Further simplifications of treatment are possible in limited situations.
|
Introduction to electromagnetism
| 0.854357
|
198
|
Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics. In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results.
|
Introduction to electromagnetism
| 0.854357
|
199
|
The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula $F = k_{\text{e}} \frac{q_1 q_2}{r^2}$, where $F$ is the force, $k_{\text{e}}$ is the Coulomb constant, $q_1$ and $q_2$ are the magnitudes of the two charges, and $r$ is the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another, and that the stronger the charges of the particles, the stronger the force they exert on one another. The law is also an inverse square law, which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four (a worked numerical check follows this entry).
|
Introduction to electromagnetism
| 0.854357
|
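As a worked check of the inverse-square behaviour described in the entry above, take two point charges of 1 µC each, first 1 m apart and then 2 m apart; the numbers are illustrative and not part of the source text.

```latex
% Illustrative numbers only: two 1 µC point charges, separation doubled from 1 m to 2 m.
\[
  F = k_{\text{e}}\,\frac{q_1 q_2}{r^2},
  \qquad k_{\text{e}} \approx 8.99 \times 10^{9}\ \mathrm{N\,m^2\,C^{-2}}
\]
\[
  F_{r=1\,\mathrm{m}} = \frac{(8.99\times 10^{9})(10^{-6})(10^{-6})}{1^{2}} \approx 9.0\times 10^{-3}\ \mathrm{N},
  \qquad
  F_{r=2\,\mathrm{m}} = \frac{(8.99\times 10^{9})(10^{-6})(10^{-6})}{2^{2}} \approx 2.2\times 10^{-3}\ \mathrm{N}
\]
% Doubling the separation divides the force by 2^2 = 4, exactly as the inverse square law states.
```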