id (int32, 0–100k) | text (string, 21–3.54k chars) | source (string, 1–124 chars) | similarity (float32, 0.79–0.89) |
|---|---|---|---|
99,900 | He began under Willenberg's leadership by teaching optics, perspectivity, technical drawing and geography. The third was professor František Antonín Herget, who mainly focused on civil engineering, particularly construction. In September 1776, Maria Theresa allowed Herget to use the Clementinum building; in 1786, the school moved to a new and better building. In 1787, the School of Engineering was established by decree of Emperor Joseph II. | Czech Technical University in Prague | 0.789124 |
99,901 | František Běhounek, radiologist; Christian Doppler, mathematician and physicist; Ivan Puluj, physicist and one of the founders of medical radiology; Antonín Engel, architect; Josef Gerstner, physicist and engineer; Václav Havel, statesman, writer and former dissident, who served as the last President of Czechoslovakia; Josef Hlávka, architect, main founder of the Academy of Science, patron; Otakar Husák, CTU graduate, chemist, General, Czechoslovak Legionnaire in Russia and France, fighter from Zborov and Terron, Chairman of President Masaryk's Military Office, Minister of Defence, first director of the Explosia Semtín factory, prisoner of the Dachau and Buchenwald concentration camps, director of Synthesia Semtín (1945–1948), political prisoner (Prague Nusle-Pankrác, Mírov 1950–1956); Eva Jiřičná, architect; Karel Jonáš, who became Charles Jonas (Wisconsin politician), Czech-American publisher, legislator and Lieutenant Governor of Wisconsin; George Klir, computer and systems scientist; Karl Kořistka, geographer and technologist; František Křižík, inventor, electrical engineer and entrepreneur; Ivo Lukačovič, entrepreneur, founder and chairman of Seznam.cz; Vladimir Prelog, chemist and Nobel Prize winner; Richard Rychtarik, set designer; Marie Schneiderová-Zubaníková, first female Czech civil engineering graduate (in 1923); Alena Šolcová, mathematician and historian; Emil Votoček, chemist; Emil Weyr, mathematician; Josef Zítek, architect and engineer | Czech Technical University in Prague | 0.789124 |
99,902 | CTU has 8 faculties. The oldest one (Faculty of Civil Engineering) was founded in 1707, while the youngest and most selective faculty (Faculty of Information Technology) was founded in 2009. The university also has 5 university institutes: the Czech Institute of Informatics, Robotics and Cybernetics; the Klokner Institute; the Institute of Physical Education and Sport; the University Centre for Energy Efficient Buildings; and the Institute of Experimental and Applied Physics. Other constituent parts include the Computing and Information Centre, the Technology and Innovation Centre, the Research Centre for Industrial Heritage, the Centre for Radiochemistry and Radiation Chemistry, the Division of Construction and Investment and the Central Library. | Czech Technical University in Prague | 0.789124 |
99,903 | Due to the pace and difficulty of CTU coursework, a high percentage of students fail to complete the first year of their studies. First-year failure rates range from 23% (Faculty of Civil Engineering) to 47% (Faculty of Information Technology). Overall, only 48% of enrolled undergraduate students end up graduating. | Czech Technical University in Prague | 0.789124 |
99,904 | The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The scale s measured in radians is the ratio of the pixel spacing p to the focal length f of the preceding optics, s = p / f. (The focal length is the product of the focal ratio and the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters and pixel sizes in micrometers, which yields another factor of 1,000, the formula is often quoted as s ≈ 206 p / f. | Mega pixel | 0.789124 |
99,905 | In chemistry, mesoionic compounds are compounds in which a heterocyclic structure is dipolar and in which both the negative and the positive charges are delocalized. A completely uncharged structure cannot be written, and mesoionic compounds cannot be represented satisfactorily by any one mesomeric structure. Mesoionic compounds are a subclass of betaines. | Mesoionic compounds | 0.789123 |
99,906 | Annals of Global Analysis and Geometry The Journal of Geometric Analysis | Global analysis | 0.789123 |
99,907 | In mathematics, global analysis, also called analysis on manifolds, is the study of the global and topological properties of differential equations on manifolds and vector bundles. Global analysis uses techniques in infinite-dimensional manifold theory and topological spaces of mappings to classify behaviors of differential equations, particularly nonlinear differential equations. These spaces can include singularities and hence catastrophe theory is a part of global analysis. Optimization problems, such as finding geodesics on Riemannian manifolds, can be solved using differential equations, so that the calculus of variations overlaps with global analysis. Global analysis finds application in physics in the study of dynamical systems and topological quantum field theory. | Global analysis | 0.789123 |
99,908 | Up until the 1940s, astronomers could only use the visible and near infrared portions of the optical spectrum for their observations. The first great astronomical discoveries such as the ones made by the famous Italian polymath Galileo Galilei were made using optical telescopes that received light reaching the ground through the optical window. After the 1940s, the development of radio telescopes gave rise to the even more successful field of radio astronomy that utilized the radio window. | Optical window | 0.789122 |
99,909 | The third AI for Good Global Summit took place from 28 to 31 May 2019, and gave rise to the ITU Focus Group on Artificial Intelligence for Autonomous and Assisted Driving, with several Day 0 workshops and VIP events taking place on 27 May. Some of the speakers included: | AI for Good | 0.789122 |
99,910 | The first AI for Good Global Summit took place from 7 to 9 June 2017. Speakers at the event included: One of the outcomes of the 2017 Global Summit was the creation of an ITU-T Focus Group on Machine Learning for 5G. | AI for Good | 0.789122 |
99,911 | In 2020 the Global Summit became an online-only event. In 2022, the summit moved to the "Neural Network" community platform. Speakers include: | AI for Good | 0.789122 |
99,912 | AI for Good is a year-round digital platform of the United Nations' International Telecommunication Union, where AI innovators and problem owners learn, discuss and connect to identify practical AI solutions to advance the United Nations Sustainable Development Goals. The impetus for organizing action-oriented global summits came from existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize (improving the movie recommendation algorithm). AI for Good aims to bring forward artificial intelligence research topics that contribute to solving global problems, in particular through the Sustainable Development Goals. AI for Good came out of the AI for Good Global Summit 2020, which had been moved online due to the COVID-19 pandemic. | AI for Good | 0.789122 |
99,913 | The ITU-T Focus Group on Machine Learning for 5G Networks (FG-ML5G) was created following discussions at the 2017 AI for Good Global Summit. The FG-ML5G produced several technology standards in this domain, including Y.3172, Y.3173 and Y.3176, which were adopted by ITU-T Study Group 13. The FG-ML5G created the impetus for a new ITU-T Focus Group on Autonomous Networks, which is responsible for i.a. Y.3181. | AI for Good | 0.789122 |
99,914 | The ITU relaunched its Journal ICT Discoveries during the 2018 Global Summit, with the first edition being a special on Artificial Intelligence. | AI for Good | 0.789122 |
99,915 | The 2018 Global Summit led to the creation of the ITU-WHO Focus Group on Artificial Intelligence for Health with the World Health Organization, which created the AI for Health Framework. | AI for Good | 0.789122 |
99,916 | The general definition makes sense for arbitrary coverings and does not require a topology. Let $X$ be a set and let $\mathcal{U}$ be a covering of $X$, that is, $X = \bigcup \mathcal{U}$. Given a subset $S$ of $X$, the star of $S$ with respect to $\mathcal{U}$ is the union of all the sets $U \in \mathcal{U}$ that intersect $S$, that is, $\operatorname{st}(S, \mathcal{U}) = \bigcup \{ U \in \mathcal{U} : U \cap S \neq \varnothing \}$. Given a point $x \in X$, we write $\operatorname{st}(x, \mathcal{U})$ instead of $\operatorname{st}(\{x\}, \mathcal{U})$. | Star refinement | 0.789122 |
99,917 | In mathematics, specifically in the study of topology and open covers of a topological space X, a star refinement is a particular kind of refinement of an open cover of X. A related concept is the notion of barycentric refinement. Star refinements are used in the definition of fully normal space and in one definition of uniform space. It is also useful for stating a characterization of paracompactness. | Star refinement | 0.789122 |
99,918 | If $p$ is a paranorm on a vector space $X$ then the map $d : X \times X \to \mathbb{R}$ defined by $d(x, y) := p(x - y)$ is a translation-invariant pseudometric on $X$ that defines a vector topology on $X$. If $p$ is a paranorm on a vector space $X$ then: the set $\{x \in X : p(x) = 0\}$ is a vector subspace of $X$. | Fréchet combination | 0.789121 |
99,919 | If $d$ is a translation-invariant pseudometric on a vector space $X$ that induces a vector topology $\tau$ on $X$ (i.e. $(X, \tau)$ is a TVS) then the map $p(x) := d(x, 0)$ defines a continuous paranorm on $(X, \tau)$; moreover, the topology that this paranorm $p$ defines on $X$ is $\tau$. If $p$ is a paranorm on $X$ then so is the map $q(x) := p(x) / (1 + p(x))$. | Fréchet combination | 0.789121 |
99,920 | Every topological vector space (and more generally, every topological group) has a canonical uniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it. If $X$ is a metrizable TVS and $d$ is a metric that defines $X$'s topology, then it is possible that $X$ is complete as a TVS (i.e. relative to its uniformity) but the metric $d$ is not a complete metric (such metrics exist even for $X = \mathbb{R}$). Thus, if $X$ is a TVS whose topology is induced by a pseudometric $d$, then the notion of completeness of $X$ (as a TVS) and the notion of completeness of the pseudometric space $(X, d)$ are not always equivalent. The next theorem gives a condition for when they are equivalent: If $M$ is a closed vector subspace of a complete pseudometrizable TVS $X$, then the quotient space $X / M$ is complete. | Fréchet combination | 0.789121 |
99,921 | Assume that $p_{\bullet} = (p_i)_{i=1}^{\infty}$ is an increasing sequence of seminorms on $X$ and let $p$ be the Fréchet combination of $p_{\bullet}$. Then $p$ is an F-seminorm on $X$ that induces the same locally convex topology as the family $p_{\bullet}$ of seminorms. Since $p_{\bullet} = (p_i)_{i=1}^{\infty}$ is increasing, a basis of open neighborhoods of the origin consists of all sets of the form $\{x \in X : p_i(x) < r\}$ as $i$ ranges over all positive integers and $r > 0$ ranges over all positive real numbers. The translation-invariant pseudometric on $X$ induced by this F-seminorm $p$ is $d(x, y) = \sum_{i=1}^{\infty} \frac{1}{2^i} \frac{p_i(x - y)}{1 + p_i(x - y)}$. This metric was discovered by Fréchet in his 1906 thesis for the spaces of real and complex sequences with pointwise operations. | Fréchet combination | 0.789121 |
99,922 | Pseudometric space: A pseudometric space is a pair $(X, d)$ consisting of a set $X$ and a pseudometric $d$ on $X$ such that $X$'s topology is identical to the topology on $X$ induced by $d$. We call a pseudometric space $(X, d)$ a metric space (resp. ultrapseudometric space) when $d$ is a metric (resp. ultrapseudometric). | Fréchet combination | 0.789121 |
99,923 | Geodesy (Clarke 1880) was the first major survey of the subject since the work of Airy. It was well received throughout Europe and it was translated into a number of languages. It contained 14 chapters. Geodetical Operations — Spherical Trigonometry — Least Squares — Theory of the Figure of the Earth — Distances, Azimuths and Triangles on the Spheroid — Geodetic Lines — Measurement of Base Lines — Instruments and observing — Calculation of Triangulation — Calculation of Latitudes and Longitudes — Heights of Stations — Connection of Geodetic and Astronomical Operations — Figure of the Earth — Pendulums. | Clarke ellipsoid | 0.789121 |
99,924 | The basic data was the collection of angle bearings taken from each of the 289 stations towards a number of other stations, typically from three to ten in number. The multiple observations were first subjected to a least squares error analysis to extract the most likely angles and then the triangles formed by the corrected bearings were adjusted simultaneously, again by least squares methods, to find the most likely geometry for the whole mesh. This was an immense undertaking which involved the solution of 920 equations without the aid of matrix methods or digital computers. | Clarke ellipsoid | 0.789121 |
99,925 | His most prestigious award was the Royal Medal of the Royal Society of London in 1887. The text of the citation is as follows: "The medal which, in accordance with the usual rule has been devoted to mathematics and physics, has this year been awarded to Colonel A. Clarke for his comparison of standards of length, and determination of the figure of the earth. Col. | Clarke ellipsoid | 0.789121 |
99,926 | The Society for Earthquake and Civil Engineering Dynamics (SECED) was founded in 1969 to promote the study and practice of earthquake engineering and structural dynamics, including blast, impact and other vibration problems. It also supports study of the societal and economic ramifications of major earthquakes. It is the British branch of both the International Association of Earthquake Engineering (IAEE) and the European Association of Earthquake Engineering (EAEE). It is an Associated Society of the Institution of Civil Engineers (ICE), and is sponsored by the Institution of Mechanical Engineers (IMechE), the Institution of Structural Engineers (IStructE) and the Geological Society. SECED has organised conferences and lectures (see below). It hosted a 2002 European conference on earthquake engineering in London, and in July 2015 hosted a two-day conference at Homerton College, Cambridge titled Earthquake Risk and Engineering towards a Resilient World. It also organises regular meetings and has published a newsletter since 1987. | Society for Earthquake and Civil Engineering Dynamics | 0.78912 |
99,927 | The physics of magnetic resonance imaging (MRI) concerns fundamental physical considerations of MRI techniques and technological aspects of MRI devices. MRI is a medical imaging technique mostly used in radiology and nuclear medicine in order to investigate the anatomy and physiology of the body, and to detect pathologies including tumors, inflammation, neurological conditions such as stroke, disorders of muscles and joints, and abnormalities in the heart and blood vessels among others. Contrast agents may be injected intravenously or into a joint to enhance the image and facilitate diagnosis. Unlike CT and X-ray, MRI uses no ionizing radiation and is, therefore, a safe procedure suitable for diagnosis in children and repeated runs. | MRI scanner | 0.789119 |
99,928 | One such technique is spiral acquisition—a rotating magnetic field gradient is applied, causing the trajectory in k-space to spiral out from the center to the edge. Due to T2 and T2* decay the signal is greatest at the start of the acquisition, hence acquiring the center of k-space first improves the contrast-to-noise ratio (CNR) when compared to conventional zig-zag acquisitions, especially in the presence of rapid movement. Since $\vec{x}$ and $\vec{k}$ are conjugate variables (with respect to the Fourier transform) we can use the Nyquist theorem to show that a step in k-space determines the field of view of the image (maximum frequency that is correctly sampled) and the maximum value of k sampled determines the resolution; i.e., $\mathrm{FOV} \propto \frac{1}{\Delta k}$ and $\mathrm{Resolution} \propto |k_{\max}|$. (These relationships apply to each axis independently.) | MRI scanner | 0.789119 |
99,929 | Metabolic reprogramming in cancer is largely due to the oncogenic activation of signal transduction pathways and transcription factors. Although less well understood, epigenetic mechanisms also contribute to the regulation of metabolic gene expression in cancer. Reciprocally, accumulating evidence suggests that metabolic alterations may affect the epigenome. Understanding the relationship between metabolism and epigenetics in cancer cells may open new avenues for anti-cancer strategies. | Warburg effect (oncology) | 0.789119 |
99,930 | DCA has not been evaluated as a sole cancer treatment yet, as research on the clinical activity of the drug is still in progress, but it has been shown to be effective when used with other cancer treatments. The neurotoxicity and pharmacokinetics of the drug still need to be monitored, but if its evaluations are satisfactory it could be very useful, as it is an inexpensive small molecule. Lewis C. Cantley and colleagues found that tumor M2-PK, a form of the pyruvate kinase enzyme, promotes the Warburg effect. Tumor M2-PK is produced in all rapidly dividing cells and is responsible for enabling cancer cells to consume glucose at an accelerated rate; on forcing the cells to switch to pyruvate kinase's alternative form by inhibiting the production of tumor M2-PK, their growth was curbed. The researchers acknowledged the fact that the exact chemistry of glucose metabolism was likely to vary across different forms of cancer; however, PKM2 was identified in all of the cancer cells they had tested. This enzyme form is not usually found in quiescent tissue, though it is apparently necessary when cells need to multiply quickly, e.g., in healing wounds or hematopoiesis. | Warburg effect (oncology) | 0.789118 |
99,931 | Herwig Birg has called the inverse relationship between income and fertility a "demo-economic paradox". Evolutionary biology predicts that more successful individuals (and by analogy countries) should seek to develop optimum conditions for their life and reproduction. However, in the last half of the 20th century it has become clear that the economic success of developed countries is being counterbalanced by a demographic failure, a sub-replacement fertility that may prove destructive for their future economies and societies. | Income and fertility | 0.789118 |
99,932 | Digital image correlation has demonstrated uses in the following industries: automotive, aerospace, biological, industrial, research and education, government and military, biomechanics, robotics, and electronics. It has also been used for mapping earthquake deformation. | Digital image correlation | 0.789118 |
99,933 | For sub-pixel interpolation of the shift, other methods do not simply maximize the correlation coefficient. An iterative approach can also be used to maximize the interpolated correlation coefficient by using non-linear optimization techniques. The non-linear optimization approach tends to be conceptually simpler and can handle large deformations more accurately, but as with most nonlinear optimization techniques, it is slower. | Digital image correlation | 0.789118 |
99,934 | Some properties of this include (in what follows $\kappa$ is a cardinal): $\aleph_0 \rightarrow (\aleph_0)_k^n$ for all finite $n$ and $k$ (Ramsey's theorem); $\beth_n^+ \rightarrow (\aleph_1)_{\aleph_0}^{n+1}$ (Erdős–Rado theorem); $2^\kappa \not\rightarrow (\kappa^+)^2$ (Sierpiński theorem); $2^\kappa \not\rightarrow (3)_\kappa^2$; $\kappa \rightarrow (\kappa, \aleph_0)^2$ (Erdős–Dushnik–Miller theorem). In choiceless universes, partition properties with infinite exponents may hold, and some of them are obtained as consequences of the axiom of determinacy (AD). For example, Donald A. Martin proved that AD implies $\aleph_1 \rightarrow (\aleph_1)_2^{\aleph_1}$. | Ordinary partition symbol | 0.789117 |
99,935 | A tetraquark, in particle physics, is an exotic meson composed of four valence quarks. A tetraquark state has long been suspected to be allowed by quantum chromodynamics, the modern theory of strong interactions. A tetraquark state is an example of an exotic hadron which lies outside the conventional quark model classification. A number of different types of tetraquark have been observed. | Tetraquark | 0.789117 |
99,936 | Several tetraquark candidates have been reported by particle physics experiments in the 21st century. The quark contents of these states are almost all qqQQ, where q represents a light (up, down or strange) quark, Q represents a heavy (charm or bottom) quark, and antiquarks are denoted with an overline. The existence and stability of tetraquark states with the qqQQ (or qqQQ) content have been discussed by theoretical physicists for a long time; however, these are yet to be reported by experiments. | Tetraquark | 0.789117 |
99,937 | In The Wealth of Networks: How Social Production Transforms Markets and Freedom, a book published in 2006 and available under a Creative Commons license on its own wikispace, Yochai Benkler provides an analytic framework for the emergence of the networked information economy that draws deeply on the language and perspectives of information ecology together with observations and analyses of high-visibility examples of successful peer production processes, citing Wikipedia as a prime example. Bonnie Nardi and Vicki O'Day in their book "Information Ecologies: Using Technology with Heart," (Nardi & O’Day 1999) apply the ecology metaphor to local environments, such as libraries and schools, in preference to the more common metaphors for technology as tool, text, or system. | Information ecology | 0.789117 |
99,938 | Information ecology is the application of ecological concepts for modeling the information society. It considers the dynamics and properties of the increasingly dense, complex and important digital informational environment. "Information ecology" often is used as metaphor, viewing the information space as an ecosystem, the information ecosystem. Information ecology also makes a connection to the concept of collective intelligence and knowledge ecology (Pór 2000). Eddy et al. (2014) use information ecology for science-policy integration in ecosystems-based management (EBM). | Information ecology | 0.789117 |
99,939 | Law schools represent another area where the phrase is gaining increasing acceptance, e.g. NYU Law School Conference Towards a Free Information Ecology and a lecture series on Information ecology at Duke University Law School's Center for the Study of the Public Domain. | Information ecology | 0.789117 |
99,940 | In computer science, a generalized suffix tree is a suffix tree for a set of strings. Given the set of strings $D = S_1, S_2, \dots, S_d$ of total length $n$, it is a Patricia tree containing all $n$ suffixes of the strings. It is mostly used in bioinformatics. | Generalised suffix tree | 0.789116 |
99,941 | and the zero matrix of dimension $m \times n$. For example: $O_{2\times 3} = \begin{pmatrix}0&0&0\\0&0&0\end{pmatrix}$. Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas. | List of named matrices | 0.789115 |
99,942 | Doubly stochastic matrix — a non-negative matrix such that each row and each column sums to 1 (thus the matrix is both left stochastic and right stochastic) Fisher information matrix — a matrix representing the variance of the partial derivative, with respect to a parameter, of the log of the likelihood function of a random variable. Hat matrix — a square matrix used in statistics to relate fitted values to observed values. Orthostochastic matrix — doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some orthogonal matrix Precision matrix — a symmetric n×n matrix, formed by inverting the covariance matrix. | List of named matrices | 0.789115 |
99,943 | The following matrices find their main application in statistics and probability theory. Bernoulli matrix — a square matrix with entries +1, −1, with equal probability of each. Centering matrix — a matrix which, when multiplied with a vector, has the same effect as subtracting the mean of the components of the vector from every component. Correlation matrix — a symmetric n×n matrix, formed by the pairwise correlation coefficients of several random variables. | List of named matrices | 0.789115 |
99,944 | Edmonds matrix — a square matrix of a bipartite graph. Incidence matrix — a matrix representing a relationship between two classes of objects (usually vertices and edges in the context of graph theory). Laplacian matrix — a matrix equal to the degree matrix minus the adjacency matrix for a graph, used to find the number of spanning trees in the graph. | List of named matrices | 0.789115 |
99,945 | State transition matrix — exponent of state matrix in control systems. Substitution matrix — a matrix from bioinformatics, which describes mutation rates of amino acid or DNA sequences. Supnick matrix — a square matrix used in computer science. Z-matrix — a matrix in chemistry, representing a molecule in terms of its relative atomic geometry. | List of named matrices | 0.789115 |
99,946 | Overlap matrix — a type of Gramian matrix, used in quantum chemistry to describe the inter-relationship of a set of basis vectors of a quantum system. S matrix — a matrix in quantum mechanics that connects asymptotic (infinite past and future) particle states. Scattering matrix — a matrix in microwave engineering that describes how power moves in a multiport system. | List of named matrices | 0.789115 |
99,947 | Gell-Mann matrices — a generalization of the Pauli matrices; these matrices are one notable representation of the infinitesimal generators of the special unitary group SU(3). Hamiltonian matrix — a matrix used in a variety of fields, including quantum mechanics and linear-quadratic regulator (LQR) systems. Irregular matrix — a matrix used in computer science which has a varying number of elements in each row. | List of named matrices | 0.789115 |
99,948 | Fundamental matrix (computer vision) — a 3 × 3 matrix in computer vision that relates corresponding points in stereo images. Fuzzy associative matrix — a matrix in artificial intelligence, used in machine learning processes. Gamma matrices — 4 × 4 matrices in quantum field theory. | List of named matrices | 0.789115 |
99,949 | Cabibbo–Kobayashi–Maskawa matrix — a unitary matrix used in particle physics to describe the strength of flavour-changing weak decays. Density matrix — a matrix describing the statistical state of a quantum system. Hermitian, non-negative and with trace 1. | List of named matrices | 0.789115 |
99,950 | Most usage of supercluster in population genetics research articles applies to proposed large groups of human mtDNA haplotype lineages, found by cluster analysis, that are thought to stem from a single distant most recent common ancestor, on a time scale of tens of thousands of years. | Supercluster (genetic) | 0.789115 |
99,951 | Using Fraunhofer diffraction theory, one computes the wave amplitude using the Fourier transform of the aberrated pupil function evaluated at (0,0) (the center of the image plane), where the phase factors of the Fourier transform formula reduce to unity. Since the Strehl ratio refers to intensity, it is found from the squared magnitude of that amplitude: $S = |\langle e^{i\phi} \rangle|^2 = |\langle e^{i 2\pi\delta/\lambda} \rangle|^2$, where $i$ is the imaginary unit, $\phi = 2\pi\delta/\lambda$ is the phase error over the aperture at wavelength $\lambda$, and the average of the complex quantity inside the brackets is taken over the aperture $A(x,y)$. The Strehl ratio can be estimated using only the statistics of the phase deviation $\phi$, according to a formula rediscovered by Mahajan but known long before in antenna theory as the Ruze formula: $S \approx e^{-\sigma^2}$, where sigma ($\sigma$) is the root mean square deviation of the wavefront phase over the aperture: $\sigma^2 = \langle (\phi - \bar{\phi})^2 \rangle$. | Strehl ratio | 0.789114 |
99,952 | Jade Mirror of the Four Unknowns, Siyuan yujian (四元玉鉴), also referred to as Jade Mirror of the Four Origins, is a 1303 mathematical monograph by the Yuan dynasty mathematician Zhu Shijie. Zhu advanced Chinese algebra with this magnum opus. The book consists of an introduction and three books, with a total of 288 problems. | Jade Mirror of the Four Unknowns | 0.789113 |
99,953 | The first experiment employing this type of cooling was done in 1977 by Arthur Ashkin, who received the 2018 Nobel Prize in Physics for his pioneering work on trapping with optical tweezers. Instead of applying a linear feedback signal, one can also combine position and velocity via $u_{fb} \propto q(t)\,\dot{q}(t)$ to get a signal with twice the frequency of the particle's oscillation. This way the stiffness of the trap increases when the particle moves out of the trap and decreases when the particle is moving back. | Levitated optomechanics | 0.789112 |
99,954 | Levitated optomechanics is a field of mesoscopic physics which deals with the mechanical motion of mesoscopic particles which are optically, electrically or magnetically levitated. Through the use of levitation, it is possible to decouple the particle's mechanical motion exceptionally well from the environment. This in turn enables the study of high-mass quantum physics, out-of-equilibrium and nano-thermodynamics, and provides the basis for precise sensing applications. | Levitated optomechanics | 0.789112 |
99,955 | In order to use mechanical oscillators in the regime of quantum physics or for sensing applications, low damping of the oscillator's motion and thus high quality factors are desirable. In nano- and micromechanics, the Q-factor of a system is often limited by its suspension, which usually demands filigree structures. Nevertheless, the maximally achievable Q-factor usually correlates with the system's size, requiring large systems to achieve high Q-factors. Particle levitation in external fields can alleviate this constraint. This is one of the reasons why the field of levitated optomechanics has become attractive for research on the foundations of physics and for high-precision applications. | Levitated optomechanics | 0.789112
99,956 | The external feedback is usually used to cool and control the particle motion. The approximation of a classical harmonic oscillator holds true until one reaches the regime of quantum mechanics, where the quantum harmonic oscillator is the superior approximation and the quantization of the energy levels becomes apparent. The QHO has a ground state of lowest energy where both position and velocity have a minimal variance, determined by the Heisenberg uncertainty principle. Such quantum states are interesting starting conditions for preparing non-Gaussian quantum states, quantum enhanced sensing, matter-wave interferometry or the realization of entanglement in many-particle systems. | Levitated optomechanics | 0.789112 |
99,957 | All these exceptions are not very relevant for chemistry, as the energy differences are quite small and the presence of a nearby atom can change the preferred configuration. The periodic table ignores them and follows idealised configurations. They occur as the result of interelectronic repulsion effects; when atoms are positively ionised, most of the anomalies vanish. The above exceptions are predicted to be the only ones until element 120, where the 8s shell is completed. | Aufbau Principle | 0.789112
99,958 | Although in hydrogen there is no energy difference between subshells with the same principal quantum number n, this is not true for the outer electrons of other atoms. In the old quantum theory prior to quantum mechanics, electrons were supposed to occupy classical elliptical orbits. The orbits with the highest angular momentum are 'circular orbits' outside the inner electrons, but orbits with low angular momentum (s- and p-subshell) have high subshell eccentricity, so that they get closer to the nucleus and feel on average a less strongly screened nuclear charge. | Aufbau Principle | 0.789112 |
99,959 | The principle takes its name from German, Aufbauprinzip, "building-up principle", rather than being named for a scientist. It was formulated by Niels Bohr and Wolfgang Pauli in the early 1920s. This was an early application of quantum mechanics to the properties of electrons and explained chemical properties in physical terms. Each added electron is subject to the electric field created by the positive charge of the atomic nucleus and the negative charge of other electrons that are bound to the nucleus. | Aufbau Principle | 0.789112 |
99,960 | The configuration is often abbreviated by writing only the valence electrons explicitly, while the core electrons are replaced by the symbol for the last previous noble gas in the periodic table, placed in square brackets. For phosphorus, the last previous noble gas is neon, so the configuration is abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons, whose configuration in phosphorus is identical to that of neon. Electron behavior is elaborated by other principles of atomic physics, such as Hund's rule and the Pauli exclusion principle. | Aufbau Principle | 0.789112
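The abbreviation scheme described above can be sketched as a small Madelung-rule filler. This follows the idealised filling order only, ignoring the exceptional configurations mentioned in the neighbouring rows (e.g. Cr, Cu); all helper names here are made up for illustration:

```python
# Madelung order: fill subshells by increasing (n + l), ties broken by n;
# each subshell (n, l) holds 2(2l + 1) electrons.
L_SYMBOLS = "spdfghi"
NOBLE_GASES = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    config, remaining = [], z
    for n, l in madelung_order():
        if remaining == 0:
            break
        electrons = min(remaining, 2 * (2 * l + 1))
        config.append((n, l, electrons))
        remaining -= electrons
    return config

def abbreviated(z):
    # Replace the largest noble-gas core smaller than z by its bracketed symbol.
    core = max((k for k in NOBLE_GASES if k < z), default=0)
    parts = [f"[{NOBLE_GASES[core]}]"] if core else []
    filled = 0
    for n, l, e in configuration(z):
        filled += e
        if filled > core:
            parts.append(f"{n}{L_SYMBOLS[l]}{e}")
    return " ".join(parts)

print(abbreviated(15))  # → [Ne] 3s2 3p3
```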
99,961 | In 2018, the company launched the Lensa AI app, which is a photo and video editing app. In late November 2022, Lensa's "magic avatars" feature was launched, which, for a fee, uses artificial intelligence and users' uploaded selfies to create portraits of the users in various styles and settings within minutes. | Prisma Labs | 0.789111 |
99,962 | Prisma Labs is a company based in Sunnyvale, California that launched the Prisma and Lensa apps. It was founded in 2016 by Andrey Usoltsev, Alexey Moiseenkov, and a team of Russian developers. Usoltsev is also the CEO. In 2016, the company launched the Prisma app, which uses artificial intelligence to duplicate photos in various artistic styles. | Prisma Labs | 0.789111 |
99,963 | Controlled-rate and slow freezing, also known as slow programmable freezing (SPF), is a technique where cells are cooled to around -196 °C over the course of several hours. Slow programmable freezing was developed during the early 1970s, and eventually resulted in the first human frozen embryo birth in 1984. Since then, machines that freeze biological samples using programmable sequences, or controlled rates, have been used for human, animal, and cell biology – "freezing down" a sample to better preserve it for eventual thawing, before it is frozen, or cryopreserved, in liquid nitrogen. | Slow cooling | 0.789111 |
99,964 | Enriching uranium is difficult because the isotopes are practically identical in chemistry and very similar in weight: U-235 is only 1.26% lighter than U-238 (note this applies only to uranium metal). Centrifuges need to work with a gas rather than a solid, and the gas used here is uranium hexafluoride. The relative mass difference between 235UF6 and 238UF6 is less than 0.86%. On the other hand, separation efficiency in a centrifuge depends on absolute mass difference. | Zippe centrifuge | 0.789111 |
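The percentages quoted above can be reproduced with a quick back-of-the-envelope calculation, using approximate isotopic masses in atomic mass units (the exact figures are assumed here for illustration):

```python
# Relative mass difference for uranium metal vs. uranium hexafluoride.
m_u235, m_u238, m_f = 235.044, 238.051, 18.998   # approximate masses (u)

rel_metal = (m_u238 - m_u235) / m_u238
rel_uf6 = ((m_u238 + 6 * m_f) - (m_u235 + 6 * m_f)) / (m_u238 + 6 * m_f)
abs_diff = m_u238 - m_u235   # the same ~3 u absolute difference in both cases

print(f"{rel_metal:.2%}  {rel_uf6:.2%}")  # ≈ 1.26%  ≈ 0.85%
```

The fluorine atoms add the same mass to both molecules, so the absolute difference (what a centrifuge exploits) is unchanged while the relative difference shrinks below 0.86%.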
99,965 | Meanwhile, Wenninger (1983) found a way to represent these infinite duals, in a manner suitable for making models (of some finite portion). The concept of duality here is closely related to the duality in projective geometry, where lines and edges are interchanged. Projective polarity works well enough for convex polyhedra. But for non-convex figures such as star polyhedra, when we seek to rigorously define this form of polyhedral duality in terms of projective polarity, various problems appear. Because of the definitional issues for geometric duality of non-convex polyhedra, Grünbaum (2007) argues that any proper definition of a non-convex polyhedron should include a notion of a dual polyhedron. | Dual tiling | 0.78911 |
99,966 | The field of quantum technology was first outlined in a 1997 book by Gerard J. Milburn, which was then followed by a 2003 article by Jonathan P. Dowling and Gerard J. Milburn, as well as a 2003 article by David Deutsch. Many devices already available are fundamentally reliant on the effects of quantum mechanics. These include laser systems, transistors and semiconductor devices, as well as other devices such as MRI imagers. The UK Defence Science and Technology Laboratory (DSTL) grouped these devices as 'quantum 1.0' to differentiate them from what it dubbed 'quantum 2.0', which it defined as a class of devices that actively create, manipulate, and read out quantum states of matter using the effects of superposition and entanglement. | Quantum technology | 0.78911
99,967 | Quantum technology is an emerging field of physics and engineering, encompassing technologies that rely on the properties of quantum mechanics, especially quantum entanglement, quantum superposition, and quantum tunneling. Quantum computing, sensors, cryptography, simulation, measurement, and imaging are all examples of emerging quantum technologies. The development of quantum technology also heavily impacts established fields such as space exploration. | Quantum technology | 0.78911 |
99,968 | Quantum sensors are expected to have a number of applications in a wide variety of fields including positioning systems, communication technology, electric and magnetic field sensors, gravimetry as well as geophysical areas of research such as civil engineering and seismology. | Quantum technology | 0.78911 |
99,969 | Quantum secure communication is a method that is expected to be 'quantum safe' in the advent of quantum computing systems that could break current cryptography systems using methods such as Shor's algorithm. These methods include quantum key distribution (QKD), a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user. Another method is the quantum random number generator, which is capable of producing truly random numbers unlike non-quantum algorithms that merely imitate randomness. | Quantum technology | 0.78911 |
99,970 | Quantum computers are expected to have a number of important uses in computing fields such as optimization and machine learning. They are perhaps best known for their expected ability to carry out Shor's algorithm, which can be used to factorize large numbers and is an important process in the securing of data transmissions. | Quantum technology | 0.78911 |
99,971 | Quantum simulators are types of quantum computers used to simulate a real world system and can be used to simulate chemical compounds or solve high energy physics problems. Quantum simulators are simpler to build as opposed to general purpose quantum computers because complete control over every component is not necessary. Current quantum simulators under development include ultracold atoms in optical lattices, trapped ions, arrays of superconducting qubits, and others. | Quantum technology | 0.78911 |
99,972 | The other direction of the theorem can be proven by showing that there exists a deterministic Muller automaton that recognizes a given ω-regular language. The union of finitely many deterministic Muller automata can be easily constructed; therefore without loss of generality we assume that the given ω-regular language is of the form αβω. Consider an ω-word w = a1a2... ∈ αβω. Let w(i,j) be the finite segment ai+1...aj−1aj of w. For building a Muller automaton for αβω, we introduce the following two concepts with respect to w. Favor: a time j favors time i if j > i, w(0,i) ∈ αβ*, and w(i,j) ∈ β*. | McNaughton's Theorem | 0.789109
99,973 | In McNaughton's original paper, the theorem was stated as: "An ω-event is regular if and only if it is finite-state." In modern terminology, ω-events are commonly referred to as ω-languages. Following McNaughton's definition, an ω-event is a finite-state event if there exists a deterministic Muller automaton that recognizes it. | McNaughton's Theorem | 0.789109 |
99,974 | In automata theory, McNaughton's theorem refers to a theorem that asserts that the set of ω-regular languages is identical to the set of languages recognizable by deterministic Muller automata. This theorem is proven by supplying an algorithm to construct a deterministic Muller automaton for any ω-regular language and vice versa. This theorem has many important consequences. Since (non-deterministic) Büchi automata and ω-regular languages are equally expressive, the theorem implies that Büchi automata and deterministic Muller automata are equally expressive. Since complementation of deterministic Muller automata is trivial, the theorem implies that Büchi automata/ω-regular languages are closed under complementation. | McNaughton's Theorem | 0.789109 |
99,975 | This construction is known to be optimal. There is a purely algebraic proof of McNaughton's theorem. | McNaughton's Theorem | 0.789109
99,976 | One direction of the theorem can be proven by showing that any given Muller automaton recognizes an ω-regular language. Suppose A = (Q,Σ,δ,q0,F) is a deterministic Muller automaton. The union of finitely many ω-regular languages produces an ω-regular language; therefore it can be assumed without loss of generality that the Muller acceptance condition F contains exactly one set of states {q1, ... ,qn}. Let α be the regular language whose elements will take A from q0 to q1. | McNaughton's Theorem | 0.789109 |
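For intuition about the Muller acceptance condition used throughout these rows, here is a sketch (not McNaughton's construction itself) that computes the set of states visited infinitely often on an ultimately periodic word u·v^ω. The example automaton, which accepts exactly the words containing infinitely many a's, is an illustrative assumption:

```python
def run(delta, q, word):
    """Read a finite word from state q; return the end state and states visited."""
    visited = []
    for ch in word:
        q = delta[(q, ch)]
        visited.append(q)
    return q, visited

def inf_states(delta, q0, prefix, cycle):
    """States visited infinitely often on prefix + cycle^omega."""
    q, _ = run(delta, q0, prefix)
    seen, segments = {}, []
    while q not in seen:              # iterate the cycle until a boundary state repeats
        seen[q] = len(segments)
        q, visited = run(delta, q, cycle)
        segments.append(set(visited))
    inf = set()
    for seg in segments[seen[q]:]:    # union over the repeating part only
        inf |= seg
    return frozenset(inf)

def accepts(delta, q0, F, prefix, cycle):
    # Muller acceptance: the infinitely-visited set must be one of the sets in F.
    return inf_states(delta, q0, prefix, cycle) in F

# State 0: last letter was 'a'; state 1: last letter was 'b'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
F = {frozenset({0}), frozenset({0, 1})}

print(accepts(delta, 0, F, "", "ab"))   # True: infinitely many a's
print(accepts(delta, 0, F, "a", "b"))   # False: only finitely many a's
```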
99,977 | The key difficulty with Fresnel's aether hypothesis arose from the juxtaposition of the two well-established theories of Newtonian dynamics and Maxwell's electromagnetism. Under a Galilean transformation the equations of Newtonian dynamics are invariant, whereas those of electromagnetism are not. Basically this means that while physics should remain the same in non-accelerated experiments, light would not follow the same rules because it is travelling in the universal "aether frame". Some effect caused by this difference should be detectable. | Luminiferous ether | 0.789109 |
99,978 | This was the first step that would lead to the full development of quantum mechanics, in which the wave-like nature and the particle-like nature of light are both considered as valid descriptions of light. A summary of Einstein's thinking about the aether hypothesis, relativity and light quanta may be found in his 1909 (originally German) lecture "The Development of Our Views on the Composition and Essence of Radiation". Lorentz on his side continued to use the aether hypothesis. In his lectures of around 1911, he pointed out that what "the theory of relativity has to say ... can be carried out independently of what one thinks of the aether and the time". | Luminiferous ether | 0.789109
99,979 | Instead of suggesting that the mechanical properties of objects changed with their constant-velocity motion through an undetectable aether, Einstein proposed to deduce the characteristics that any successful theory must possess in order to be consistent with the most basic and firmly established principles, independent of the existence of a hypothetical aether. He found that the Lorentz transformation must transcend its connection with Maxwell's equations, and must represent the fundamental relations between the space and time coordinates of inertial frames of reference. In this way he demonstrated that the laws of physics remained invariant as they had with the Galilean transformation, but that light was now invariant as well. | Luminiferous ether | 0.789109 |
99,980 | Aether theory was dealt another blow when the Galilean transformation and Newtonian dynamics were both modified by Albert Einstein's special theory of relativity, giving the mathematics of Lorentzian electrodynamics a new, "non-aether" context. Unlike most major shifts in scientific thought, special relativity was adopted by the scientific community remarkably quickly, consistent with Einstein's later comment that the laws of physics described by the Special Theory were "ripe for discovery" in 1905. Max Planck's early advocacy of the special theory, along with the elegant formulation given to it by Hermann Minkowski, contributed much to the rapid acceptance of special relativity among working scientists. Einstein based his theory on Lorentz's earlier work. | Luminiferous ether | 0.789109 |
99,981 | Both polar caps show spiral troughs, which recent analysis of SHARAD ice penetrating radar has shown are a result of roughly perpendicular katabatic winds that spiral due to the Coriolis effect. The seasonal frosting of some areas near the southern ice cap results in the formation of transparent 1 m thick slabs of dry ice above the ground. With the arrival of spring, sunlight warms the subsurface and pressure from subliming CO2 builds up under a slab, elevating and ultimately rupturing it. This leads to geyser-like eruptions of CO2 gas mixed with dark basaltic sand or dust. This process is rapid, observed happening in the space of a few days, weeks or months, a rate of change rather unusual in geology—especially for Mars. The gas rushing underneath a slab to the site of a geyser carves a spider-like pattern of radial channels under the ice. In 2018, Italian scientists reported the discovery of a subglacial lake on Mars, 1.5 km (0.93 mi) below the surface of the southern polar layered deposits (not under the visible permanent ice cap), and about 20 km (12 mi) across, the first known stable body of water on the planet. | Martian polar ice caps | 0.789107
99,982 | The physics of this model is similar to ideas put forth to explain dark plumes erupting from the surface of Triton.Research, published in January 2010 using HiRISE images, found that some of the channels in spiders grow larger as they go uphill since gas is doing the erosion. The researchers also found that the gas flows to a crack that has occurred at a weak point in the ice. As soon as the sun rises above the horizon, gas from the spiders blows out dust which is blown by wind to form a dark fan shape. Some of the dust gets trapped in the channels. Eventually frost covers all the fans and channels until the next spring when the cycle repeats. | Martian polar ice caps | 0.789107 |
99,983 | Let (D, g) be the unit disc D ⊂ R2 equipped with the Euclidean metric, and let (D, h) be the same disc equipped with a hyperbolic metric as in the Poincaré disc model of hyperbolic geometry. Then, although the two structures are diffeomorphic via the identity map i: D → D, i is not a geodesic map, since g-geodesics are always straight lines in R2, whereas h-geodesics can be curved. On the other hand, when the hyperbolic metric on D is given by the Klein model, the identity i: D → D is a geodesic map, because hyperbolic geodesics in the Klein model are (Euclidean) straight line segments. | Geodesic map | 0.789107 |
99,984 | In mathematics—specifically, in differential geometry—a geodesic map (or geodesic mapping or geodesic diffeomorphism) is a function that "preserves geodesics". More precisely, given two (pseudo-)Riemannian manifolds (M, g) and (N, h), a function φ: M → N is said to be a geodesic map if φ is a diffeomorphism of M onto N; and the image under φ of any geodesic arc in M is a geodesic arc in N; and the image under the inverse function φ−1 of any geodesic arc in N is a geodesic arc in M. | Geodesic map | 0.789107 |
99,985 | Thrust is a vector quantity, and the direction of the thrust has a large impact on the size of gravity losses. For instance, gravity loss on a rocket of mass m would reduce a 3mg thrust directed upward to an acceleration of 2g. However, the same 3mg thrust could be directed at such an angle that it had a 1mg upward component, completely canceled by gravity, and a horizontal component of mg·√(3² − 1²) ≈ 2.8mg (by Pythagoras' theorem), achieving a 2.8g horizontal acceleration. As orbital speeds are approached, vertical thrust can be reduced as centrifugal force (in the rotating frame of reference around the center of the Earth) counteracts a large proportion of the gravitation force on the rocket, and more of the thrust can be used to accelerate. | Gravity loss | 0.789106
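The thrust decomposition above amounts to simple vector arithmetic; a sketch in units of g:

```python
import math

# A 3mg thrust tilted so its vertical component is exactly 1mg (cancelling
# gravity) leaves a horizontal component of sqrt(3^2 - 1^2) ≈ 2.8 in units of g.
thrust_g = 3.0          # total thrust magnitude, in units of m*g
vertical_g = 1.0        # component chosen to cancel gravity exactly

horizontal_g = math.sqrt(thrust_g**2 - vertical_g**2)
net_vertical_g = vertical_g - 1.0       # zero: gravity fully cancelled

# For comparison, pointing the full 3mg straight up nets only 2g upward.
straight_up_net_g = thrust_g - 1.0

print(round(horizontal_g, 2), straight_up_net_g)  # 2.83 2.0
```

So the tilted burn converts nearly the whole 3g of thrust into useful acceleration (2.83g) instead of losing a third of it to gravity.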
99,986 | Organoaluminium chemistry is the study of compounds containing bonds between carbon and aluminium. It is one of the major themes within organometallic chemistry. Illustrative organoaluminium compounds are the dimer trimethylaluminium, the monomer triisobutylaluminium, and the titanium-aluminium compound called Tebbe's reagent. The behavior of organoaluminium compounds can be understood in terms of the polarity of the C−Al bond and the high Lewis acidity of the three-coordinated species. Industrially, these compounds are mainly used for the production of polyolefins. | Trialkylaluminium compound | 0.789105 |
99,987 | In molecular biology, Pulmonary surfactant protein D (SP-D) is a protein domain predominantly found in lung surfactant. This protein plays a special role; its primary task is to act as a defence protein against any pathogens that may invade the lung. It also plays a role in lubricating the lung and preventing it from collapse. It has an interesting structure: it forms a triple-helical parallel coiled coil, which helps the protein fold into a trimer. | Pulmonary surfactant protein D | 0.789105
99,988 | Overdominance is a rare condition in genetics where the phenotype of the heterozygote lies outside the phenotypical range of both homozygous parents. Overdominance can also be described as heterozygote advantage regulated by a single genomic locus, wherein heterozygous individuals have a higher fitness than homozygous individuals. However, not all cases of the heterozygote advantage are considered overdominance, as they may be regulated by multiple genomic regions. Overdominance has been hypothesized as an underlying cause for heterosis (increased fitness of hybrid offspring). | Overdominance | 0.789105 |
99,989 | Strengthened GABAergic systems can induce an early critical period, while weaker GABAergic inputs can delay or even prevent plasticity. Inhibition also guides plasticity once the critical period has begun. For example, lateral inhibition is especially important in guiding columnar formation in the visual cortex. Hebbian theory provides insight on the importance of inhibition within neural networks: without inhibition, there would be more synchronous firing and therefore more connections, but with inhibition, fewer excitatory signals get through, allowing only the more salient connections to mature. | Critical period | 0.789105 |
99,990 | A parallel construction based on solvable Lie groups produces a class of spaces called solvmanifolds. Important examples of solvmanifolds are the Inoue surfaces, known in complex geometry. | Nilmanifold | 0.789103
99,991 | A compact nilmanifold is a nilmanifold which is compact. One way to construct such spaces is to start with a simply connected nilpotent Lie group N and a discrete subgroup Γ. If the subgroup Γ acts cocompactly (via right multiplication) on N, then the quotient manifold N/Γ will be a compact nilmanifold. As Mal'cev has shown, every compact nilmanifold is obtained this way. Such a subgroup Γ as above is called a lattice in N. It is well known that a nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Mal'cev's criterion. | Nilmanifold | 0.789103
99,992 | Scaling down experiments, when combined with modern projection technology, opened up the possibility of carrying out lecture demonstrations of the most hazardous kind in total safety. The approach has been adopted worldwide. It has become a major presence on the educational scene in the US, is used to a lesser extent in the UK, and is used in many countries in institutions with staff who are enthusiastic about it. For example, in India, small-scale chemistry/microscale chemistry is now implemented in a few universities and colleges. | Microscale chemistry | 0.789102
99,993 | The other strand is the introduction of this approach into synthetic work, mainly in organic chemistry. Here the crucial breakthrough was achieved by Mayo, Pike and Butcher and by Williamson, who demonstrated that inexperienced students were able to carry out organic syntheses on a few tens of milligrams, a skill previously thought to require years of training and experience. These approaches were accompanied by the introduction of some specialised equipment, which was subsequently simplified by Breuer without great loss of versatility. There is a great deal of published material available to help in the introduction of such a scheme, providing advice on choice of equipment, techniques and preparative experiments, and the flow of such material is continuing through a column in the Journal of Chemical Education called 'The Microscale Laboratory' that has been running for many years. | Microscale chemistry | 0.789102
99,994 | There are two main strands of the modern approach. One is based on the idea that many of the experiments associated with general chemistry (acids and bases, oxidation and reduction, electrochemistry, etc.) can be carried out in equipment much simpler (injection bottles, dropper bottles, syringes, wellplates, plastic pipettes) and therefore cheaper than the traditional glassware in a laboratory, thus enabling the expansion of the laboratory experiences of students in large classes and to introduce laboratory work into institutions too poorly equipped for standard-type work. Pioneering development in this area was carried out by Egerton C. Grey (1928), Mahmoud K. El-Marsafy (1989) in Egypt, Stephen Thompson in the US and others. A further application of these ideas was the devising by Bradley of the Radmaste kits in South Africa, designed to make effective chemical experiments possible in developing countries in schools that lack the technical services (electricity, running water) taken for granted in many places. | Microscale chemistry | 0.789102 |
99,995 | – 20. May 2005 at Universidad Iberoamericana – Ciudad de Mexico 4th International Symposium on Microscale Chemistry Bangkok, Thailand 2009 5th International Symposium on Microscale Chemistry Manila, Philippines, 2010 6th International Symposium on Microscale Chemistry Kuwait City, Kuwait, 2011 7th International Symposium on Microscale Chemistry Berlin, Germany, 2013 8th International Symposium on Microscale Chemistry Mexico City, Mexico, 2015 9th International Symposium on Microscale Chemistry Sendai, Japan, 2017 10th International Symposium on Microscale Chemistry, North-west University, Potchefstroom, South Africa, 2019 11th International Symposium on Microscale Chemistry. On-line, United Kingdom, 2021 | Microscale chemistry | 0.789102
99,996 | 1st International Symposium on Microscale Chemistry May 2000 at Universidad Iberoamericana – Ciudad de Mexico 2nd International Symposium on Microscale Chemistry 13. – 15. December 2001 at Hong Kong Baptist University – Hong Kong 3rd International Symposium on Microscale Chemistry 18. | Microscale chemistry | 0.789102 |
99,997 | Austria Viktor Obendrauf China Zhou Ning-Huai Egypt Mahmoud K. El-Marsafy Germany Angela Koehler-Kruetzfeld, Peter Schwarz, Waltraud Habelitz-Tkotz, Michael Tausch, John McCaskill, Theodor Grofe, Bernd-Heinrich Brand, Gregor von Borstel, Stephan Mattusek Hong Kong Winghong Chan Israel Mordechai Livneh Japan Kazuko Ogino Macedonia Metodija Najdoski Mexico Jorge Ibanez, Arturo Fregoso, Carmen Doria, Rosa Maria Mainero, Margarita Hernandez, et al. Poland Aleksander Kazubski, Dominika Strutyńska, Łukasz Sporny, Piotr Wróblewski Portugal M. Elisa Maia South Africa John Bradley Marie DuToit Sweden Christer Gruvberg USA National Microscale Chemistry Center USA National Small Scale Chemistry Center USA Microscale Gas Chemistry; Bruce Mattson Kenneth M. Doxsee Thailand Supawan Tantyanon Kuwait Abdulaziz Alnajjar India Govt. Victoria College, Palakkad, Kerala United Kingdom Bob Worley, CLEAPSS, Chris Lloyd SSERC | Microscale chemistry | 0.789102
99,998 | Microscale chemistry (often referred to as small-scale chemistry, in German: Chemie im Mikromaßstab) is an analytical method and also a teaching method widely used at school and at university levels, working with small quantities of chemical substances. While much of traditional chemistry teaching centers on multi-gramme preparations, milligrammes of substances are sufficient for microscale chemistry. In universities, modern and expensive lab glassware is used and modern methods for detection and characterization of the produced substances are very common. In schools and in many countries of the Southern hemisphere, small-scale working takes place with low-cost and even no-cost material. There has always been a place for small-scale working in qualitative analysis, but the new developments can encompass much of chemistry a student is likely to meet. | Microscale chemistry | 0.789102 |
99,999 | Adam Dennett is now the director of the Centre and associate professor. Masters courses developed since 2010 include MSc's in Smart Cities and Urban Analytics and Spatial Data Science and Visualisation. It currently has 10 lecturers/associate Professors and two teaching fellows leading its courses. Sir Alan Wilson was appointed Professor of Urban and Regional Systems in 2008 but moved to the Turing Institute in 2016. The centre is now an administrative unit in The Bartlett Faculty of the Built Environment. | UCL Centre for Advanced Spatial Analysis | 0.789101 |