https://davidmathlogic.com/courses/math2110s16/polarexamples.pdf
MATH 2110Q – Spring 2016
Examples of Double Integrals in Polar Coordinates
David Nichols

Example 1. Find the volume of the region bounded by the paraboloid z = 2 − 4x² − 4y² and the plane z = 0.

[Figure: the graph of z = 2 − 4x² − 4y² over the disk D in the xy-plane.]

We need to find the volume under the graph of z = 2 − 4x² − 4y², which is pictured above. (Note that you do not have to produce such a picture to set up and solve the integral. It's just nice to have.) That means we need to integrate 2 − 4x² − 4y² over the region D. Let's take a look at the region of integration. The boundary of the disk D is where the graph of z = 2 − 4x² − 4y² hits the xy-plane, so we can solve for it by setting z = 0:

\[
0 = 2 - 4x^2 - 4y^2 \quad\Longrightarrow\quad 4x^2 + 4y^2 = 2 \quad\Longrightarrow\quad x^2 + y^2 = \tfrac{1}{2}.
\]

That's the equation of a circle, so we can see that D is the disk centered at the origin with radius √(1/2) = 1/√2. Because D is a circular disk, we will set up the integral in polar coordinates. Since D is the disk of radius 1/√2, we have 0 ≤ θ ≤ 2π and 0 ≤ r ≤ 1/√2.

\[
V = \iint_D (2 - 4x^2 - 4y^2)\,dA
  = \iint_D \bigl(2 - 4(x^2 + y^2)\bigr)\,dA
  = \int_0^{2\pi}\!\!\int_0^{1/\sqrt{2}} (2 - 4r^2)\,r\,dr\,d\theta
\]
\[
  = \int_0^{2\pi} d\theta \int_0^{1/\sqrt{2}} (2r - 4r^3)\,dr
  = 2\pi\Bigl[r^2 - r^4\Bigr]_{r=0}^{r=1/\sqrt{2}}
  = 2\pi\Bigl(\tfrac{1}{2} - \tfrac{1}{4}\Bigr)
  = \frac{\pi}{2}.
\]

Example 2. Evaluate \(\int_0^1 \int_y^{\sqrt{2-y^2}} (x+y)\,dx\,dy\) by switching to polar coordinates.

To convert to polar coordinates, it's handy to first sketch the region of integration. Reading off the limits of integration from the outside in, we first see that y goes from 0 up to 1, and then we see that x goes from the line x = y out to the semicircle (of radius √2) x = √(2 − y²).

[Figure: the region bounded by the line x = y, the arc x = √(2 − y²), and the x-axis, with corner point (1, 1).]

How can we describe this region in polar coordinates? Well, the line x = y is at an angle of 45° or π/4 from the x-axis, so 0 ≤ θ ≤ π/4. And the semicircle has radius √2, so 0 ≤ r ≤ √2. Translating the entire integral we get the following.

\[
\int_0^1 \int_y^{\sqrt{2-y^2}} (x+y)\,dx\,dy
  = \int_0^{\pi/4}\!\!\int_0^{\sqrt{2}} (r\cos\theta + r\sin\theta)\,r\,dr\,d\theta.
\]
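The two computations above are easy to double-check symbolically. A minimal sketch using SymPy (my own check, not part of the handout; assumes `sympy` is installed):

```python
# Symbolic check of Examples 1 and 2 in polar coordinates.
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)

# Example 1: V = ∫_0^{2π} ∫_0^{1/√2} (2 - 4r²) r dr dθ
V = sp.integrate((2 - 4*r**2) * r,
                 (r, 0, 1/sp.sqrt(2)), (theta, 0, 2*sp.pi))
print(V)  # equals pi/2

# Example 2: ∫_0^{π/4} ∫_0^{√2} r²(cos θ + sin θ) dr dθ
I = sp.integrate((sp.cos(theta) + sp.sin(theta)) * r**2,
                 (r, 0, sp.sqrt(2)), (theta, 0, sp.pi/4))
print(sp.simplify(I))  # equals 2*sqrt(2)/3
```

Note that the integrand already carries the extra factor of r from the polar area element, exactly as in the worked solutions.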
We can factor the r's and θ's into separate factors in the integrand, so we can factor the iterated integral into separate integrals for r and θ:

\[
\int_0^{\pi/4}\!\!\int_0^{\sqrt{2}} (r\cos\theta + r\sin\theta)\,r\,dr\,d\theta
  = \int_0^{\pi/4}\!\!\int_0^{\sqrt{2}} r^2(\cos\theta + \sin\theta)\,dr\,d\theta
  = \int_0^{\pi/4} (\cos\theta + \sin\theta)\,d\theta \int_0^{\sqrt{2}} r^2\,dr
\]
\[
  = \Bigl[\sin\theta - \cos\theta\Bigr]_{\theta=0}^{\theta=\pi/4} \cdot \Bigl[\tfrac{1}{3}r^3\Bigr]_{r=0}^{r=\sqrt{2}}
  = \Bigl(\tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2} - 0 + 1\Bigr) \cdot \tfrac{1}{3}\bigl(2^{3/2} - 0\bigr)
  = \frac{2\sqrt{2}}{3}.
\]

Example 3. Use a double integral to find the area inside one loop of the four-leaved rose r = cos 2θ.

We start by drawing a picture of the curve in question. We can use a calculator to do this, or we can just plug in lots of values of θ, plot the resulting points, and connect the dots.

[Figure: plotted points on the four-leaved rose r = cos 2θ, then the connected curve.]

Now we can start setting up the double integral to solve the problem. Recall that if D is a region in the xy-plane, then

\[
\mathrm{Area}(D) = \iint_D 1\,dA = \iint_D r\,dr\,d\theta \quad\text{(in polar)},
\]

where the r showed up because that's the scaling factor when we convert to polar. We want to set this up as an iterated integral. We are supposed to find the area inside one loop, and we can pick any loop: inside it, r goes from 0 out to cos 2θ. So we can write in the limits of integration for r:

\[
\mathrm{Area}(D) = \int_?^? \int_0^{\cos 2\theta} r\,dr\,d\theta.
\]

To find the limits of integration for θ, we need to do a bit of math to see what θ is when the curve is at the origin. To do that, we set r = cos 2θ = 0, which means 2θ = ±π/2, ±3π/2, which means θ = ±π/4, ±3π/4. Looking at those angles on the graph above, we can see that we are integrating from θ = −π/4 to θ = π/4:

\[
\mathrm{Area}(D) = \int_{-\pi/4}^{\pi/4} \int_0^{\cos 2\theta} r\,dr\,d\theta
  = \int_{-\pi/4}^{\pi/4} \Bigl[\tfrac{1}{2}r^2\Bigr]_{r=0}^{r=\cos 2\theta} d\theta
  = \int_{-\pi/4}^{\pi/4} \tfrac{1}{2}\cos^2 2\theta\,d\theta
\]
\[
  = \int_{-\pi/4}^{\pi/4} \tfrac{1}{4}(1 + \cos 4\theta)\,d\theta \quad\text{(double angle formula)}
  = \tfrac{1}{4}\Bigl[\theta + \tfrac{1}{4}\sin 4\theta\Bigr]_{-\pi/4}^{\pi/4}
  = \tfrac{1}{4}\cdot\tfrac{\pi}{2}
  = \frac{\pi}{8}.
\]
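The one-loop area can also be confirmed symbolically. A short SymPy sketch (my own check, not from the handout):

```python
# Check of Example 3: area of one loop of the rose r = cos 2θ.
import sympy as sp

r, theta = sp.symbols('r theta')

# Inner integral runs r from 0 to cos(2θ); outer runs θ over one loop.
area = sp.integrate(r, (r, 0, sp.cos(2*theta)),
                       (theta, -sp.pi/4, sp.pi/4))
print(sp.simplify(area))  # equals pi/8
```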
https://www.youtube.com/watch?v=Z8XtYqgCkZQ
Chapter 22 Vibrations - Engineering Mechanics | 14th Edition - Dynamics
Murtaja Academy — Posted: 21 Jul 2023

Description: Undamped Free Vibration. Engineering Mechanics: Dynamics, 14th edition, Russell C. Hibbeler.

22-1. A spring is stretched 175 mm by an 8-kg block. If the block is displaced 100 mm downward from its equilibrium position and given a downward velocity of 1.50 m/s, determine the differential equation which describes the motion. Assume that positive displacement is downward. Also, determine the position of the block when t = 0.22 s.

22-2. A spring has a stiffness of 800 N/m. If a 2-kg block is attached to the spring, pushed 50 mm above its equilibrium position, and released from rest, determine the equation that describes the block's motion. Assume that positive displacement is downward.

22-3. A spring is stretched 200 mm by a 15-kg block. If the block is displaced 100 mm downward from its equilibrium position and given a downward velocity of 0.75 m/s, determine the equation which describes the motion. What is the phase angle? Assume that positive displacement is downward.

22-4. When a 20-lb weight is suspended from a spring, the spring is stretched a distance of 4 in. Determine the natural frequency and the period of vibration for a 10-lb weight attached to the same spring.

22-5. When a 3-kg block is suspended from a spring, the spring is stretched a distance of 60 mm. Determine the natural frequency and the period of vibration for a 0.2-kg block attached to the same spring.
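All five exercises share one pattern: a static stretch δ under a hanging mass fixes the stiffness, k = mg/δ, and the mass actually attached then vibrates at ωₙ = √(k/m). As a quick sketch (my own Python helper, not part of the video), applied to the numbers of problem 22-5:

```python
# Natural frequency and period of an undamped spring-mass system,
# where the stiffness is inferred from a measured static stretch.
import math

G = 9.81  # gravitational acceleration, m/s^2

def natural_frequency(m_static, delta, m_attached):
    """Return (wn [rad/s], f [Hz], tau [s]).

    m_static:   mass [kg] whose weight produced the static stretch delta [m]
    m_attached: mass [kg] actually vibrating on the same spring
    """
    k = m_static * G / delta          # spring stiffness, N/m
    wn = math.sqrt(k / m_attached)    # natural circular frequency
    f = wn / (2 * math.pi)            # natural frequency in Hz
    return wn, f, 1 / f               # the period is the reciprocal of f

# Problem 22-5: 3-kg block stretches the spring 60 mm; a 0.2-kg block vibrates.
wn, f, tau = natural_frequency(m_static=3.0, delta=0.060, m_attached=0.2)
print(f"wn = {wn:.2f} rad/s, f = {f:.2f} Hz, tau = {tau:.3f} s")
# wn = 49.52 rad/s, f = 7.88 Hz, tau = 0.127 s
```

The same helper handles 22-4 once the pound-force weights are converted to consistent units.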
Transcript: Hi, in this video we will be doing dynamics problems from the book Engineering Mechanics: Dynamics, 14th Edition, by Hibbeler. The problems are from Chapter 22.1, on undamped free vibration. Free vibration occurs when the motion is maintained by gravitational or elastic restoring forces, such as the swinging motion of a pendulum or the vibration of an elastic rod. Undamped vibration excludes frictional effects from the analysis, but in reality both internal and external frictional forces are present, so the motion of all vibrating bodies is actually damped. The simplest type of vibrating motion is undamped free vibration: we do not consider internal or external frictional effects, and the vibration happens due to gravity and/or the elastic force, in this case the spring.

The first problem we will discuss is 22-1. A spring is stretched 175 mm by an 8-kg block. The block is displaced 100 mm downward from its equilibrium position (pay attention to the phrase "equilibrium position") and given a downward velocity of 1.5 m/s. Determine the differential equation which describes the motion, assuming positive displacement is downward; that fixes your reference direction. Also determine the position of the block at t = 0.22 s.

First, let's write down what is given, then sketch it. A spring is a device that stores energy, here elastic energy. The spring had an original length, and due to the 8-kg block it got stretched a distance of 0.175 m; that is the deformation of the spring. The 8-kg block is then displaced 0.1 m downward from equilibrium, in the positive direction, and while we are displacing it the block reaches a velocity of 1.5 m/s. These two values, a position 0.1 m below equilibrium and a velocity of 1.5 m/s, are the initial conditions. Once both conditions are met we start the clock at t = 0, release the block, and it begins to vibrate.

So here is the sketch: first the spring at its natural, unstretched length; then the same spring stretched 0.175 m by the 8-kg block. The stretched position of the spring and mass becomes our equilibrium position, where we zero our coordinate, with downward as the positive y direction. In the third sketch we have displaced the block 0.1 m and it has attained 1.5 m/s, both at t = 0. Once we release the block, the spring does its job: it uses its elastic energy to pull the block back toward equilibrium, exerting a force opposite to the direction of the displacement. The block moves up to the equilibrium position, but it continues past it, and it keeps vibrating back and forth. Since we are not analyzing damping or friction, we call this undamped; in reality the vibration would die out and the spring and mass would come back to rest at the equilibrium position.

First, let us draw the free-body diagram of the spring and mass at the equilibrium position, y = 0, where nothing is moving. The 8-kg block has a weight mg acting downward, and the spring pulls up with the spring force. The spring force is always opposite to the stretch, and its magnitude is the spring constant (the spring stiffness) k times how much the spring is stretched. Summing forces in the y direction, with downward positive, and setting the sum to zero:

mg − k·y_st = 0,

where y_st is the static stretch caused by the attached 8-kg block. Moving mg to the other side gives mg = k·y_st. We are given m = 8 kg, g = 9.81 m/s², and y_st = 0.175 m, so we can solve for the spring stiffness by moving y_st to the other side, from multiplication to division:

k = mg / y_st = (8)(9.81)/0.175 = 448.46 N/m.

What does this equation tell us? The less the spring gets stretched under a given load, the stiffer the spring is, and vice versa. The value 448.46 N/m is how much force is needed to stretch this spring one meter; if the stiffness were higher, say 580 N/m instead of 448, the static stretch would be smaller because the spring is stiffer. Note that instead of solving for k, we can also solve the same equation for the stretch, y_st = mg/k; that is the relationship between how much a spring stretches, the hanging mass, and the spring stiffness, and we will use it in a moment.

Now we deal with the spring and mass in motion. As we displace the block below the equilibrium position, in the positive y direction, it moves downward with positive acceleration. The block has its weight mg due to gravity, and as we displace it, the spring exerts a force opposite to the direction of displacement.
We call that force the force of the spring. Summing forces in the y direction, with downward positive, equal to mass times acceleration:

mg − F_spring = m·ÿ.

Remember the force of the spring is the spring stiffness times how much the spring is deformed in total: the stretch y_st due to the hanging mass plus y, the amount the block is displaced from equilibrium. So

mg − k(y_st + y) = m·ÿ.

Here we can replace y_st by the relationship from the previous slide, y_st = mg/k; the k cancels and we are left with

mg − mg − k·y = m·ÿ.

The mg terms cancel, leaving −k·y = m·ÿ, and taking the spring term to the other side of the equation,

m·ÿ + k·y = 0.

Dividing both sides by m (zero divided by m is still zero), the mass cancels and we can rewrite the equation as

ÿ + (k/m)·y = 0.

So the question is: what is k/m? There is something called the natural frequency. Every mechanical system has some elasticity, which can be represented as a spring, and because it is made of a certain material it has a certain mass; due to that elasticity and that mass it has a natural frequency. Take a tuning fork, for example: when you strike it, it vibrates at a certain frequency determined by its elasticity and its mass. Frequency means how many cycles the object completes in one second. Every spring-and-mass system, depending on the spring stiffness and the mass, has its own natural frequency, and that is why many mechanical systems can be modeled as a spring and mass. The natural frequency is

ωₙ = √(k/m), so, squaring both sides to get rid of the square root, ωₙ² = k/m,

and we can replace the k/m in our equation to get

ÿ + ωₙ²·y = 0.

What are we dealing with here? This is a differential equation; specifically, a homogeneous second-order linear differential equation with constant coefficients. Its general solution is

y(t) = A sin(ωₙ t) + B cos(ωₙ t).

(In this video I am not going to go over how this differential equation is solved.) This general solution gives the position of the block as it vibrates. To find an equation for the velocity of the block with respect to time, all we do is differentiate; remember the derivative of sine is cosine, the derivative of cosine is minus sine, and we use the chain rule:

ẏ(t) = A ωₙ cos(ωₙ t) − B ωₙ sin(ωₙ t).

Likewise, differentiating the velocity gives the acceleration:

ÿ(t) = −A ωₙ² sin(ωₙ t) − B ωₙ² cos(ωₙ t).

As a side practice, take the acceleration ÿ and the position y and plug them into the original differential equation; when you work it out you will get zero, which is a proof that these equations are the general solution of this differential equation.

The two constants A and B are called the integration constants, and we find them from the initial conditions at t = 0: the position y(0) = 0.1 m and the velocity ẏ(0) = 1.5 m/s. Plug t = 0 into the position equation: sin 0 = 0, so the A term drops out, and cos 0 = 1, so we can solve for B, giving B = 0.1. To find A, let's first find the natural frequency of the system we are dealing with: with the spring constant we found, 448.46 N/m, and the attached mass of 8 kg,

ωₙ = √(448.46/8) = 7.487 rad/s;

that is how many radians this motion sweeps through in one second. Now plug ωₙ and the velocity at t = 0, 1.5 m/s, into the velocity equation: the B term drops out (sin 0 = 0) and cos 0 = 1, so 1.5 = A·ωₙ. Moving ωₙ to the other side, from multiplication to division, and plugging this into the calculator, A = 1.5/7.487 ≈ 0.2. So we have A = 0.2, B = 0.1, and ωₙ = 7.487 rad/s, and the position function is

y(t) = 0.2 sin(7.487 t) + 0.1 cos(7.487 t).

Now, the problem asks for the position of the block at t = 0.22 s. Plug 0.22 into this equation and use the calculator: the value is y(0.22) = 0.192 m.

If you graph this function, you get a picture of the movement of the block, and zero on the vertical axis represents your equilibrium position. The distance from zero up to the maximum point is C, the amplitude, the highest point: when we release the stretched block, the spring applies a force opposite to the displacement and carries the block up past equilibrium to a maximum position, then back down below equilibrium to a maximum position, and so on. To find the amplitude we simply take

C = √(A² + B²) = √(0.2² + 0.1²) = 0.2236 m.

That is the maximum displacement from the equilibrium position, whether the spring is stretched (positive y, downward) or compressed (negative y, upward). The distance from equilibrium to the starting point of the graph is B, the 0.1 m the block was displaced from equilibrium. That is what we call vibration: you start from a point, the system swings away, and it comes back to that same point. We want to know how much time it takes the mass to start from a position and come back to that position again; that is the period. The period is 2π, because the motion completes a full cycle, divided by the natural frequency of the system:

τ = 2π/ωₙ = 2π/7.487 = 0.8392 s.

That is the time required for the block to complete one cycle. The frequency is the reciprocal of the period, f = 1/0.8392 = 1.19 Hz. What does 1.19 Hz mean? It means this system completes 1.19 cycles in one second. Can you imagine a system doing 60 Hz, 60 complete cycles per second? One full stretch-and-return can be pictured as going once around a circle.
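The 22-1 numbers above are easy to verify. A minimal sketch in Python (my own check, following the derivation's symbols, downward positive):

```python
# Undamped response for problem 22-1: y(t) = A sin(wn t) + B cos(wn t).
import math

m, g, delta = 8.0, 9.81, 0.175
k = m * g / delta                    # spring stiffness, ≈ 448.46 N/m
wn = math.sqrt(k / m)                # natural frequency, ≈ 7.487 rad/s
B = 0.1                              # from y(0) = 0.1 m
A = 1.5 / wn                         # from y'(0) = 1.5 m/s, ≈ 0.2

def y(t):
    return A * math.sin(wn * t) + B * math.cos(wn * t)

C = math.hypot(A, B)                 # amplitude (≈ 0.2239; the video rounds A
                                     # to 0.2 first and gets 0.2236)
tau = 2 * math.pi / wn               # period
print(round(y(0.22), 3), round(tau, 4), round(1/tau, 2))
# 0.192 0.8392 1.19
```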
One cycle starts from a point and ends at that same point; this system needs to complete 1.19 cycles in one second. What would you say if a system vibrated at a rate of 50 or 60, completing that cycle 50 or 60 times in one second? What would that tell you about the system we are dealing with?

Now the second problem, 22-2. A spring has a stiffness of 800 N/m. A 2-kg block is attached to the spring, pushed 50 mm above its equilibrium position, and released from rest. Released from rest! In the previous problem the block was released when it had reached a velocity of 1.5 m/s, but here you displace the block, stop, and then release it from rest. Determine the equation that describes the block's motion, assuming positive displacement is downward.

First, let's write what is given: a spring stiffness k = 800 N/m, and a block of mass m = 2 kg. This block is displaced −0.05 m from the equilibrium position. Why minus? Because we are told to assume positive displacement is downward, but the block is pushed 50 mm above its equilibrium, in the opposite direction, so it is negative.

Now let's sketch the problem: the spring at its natural length; then the spring stretched some amount y_st by the attached 2-kg mass, which defines the equilibrium position where we zero our coordinate, downward being the positive y direction; and finally the spring pushed up, in other words compressed, a distance of 0.05 m, so y(0) = −0.05 m, with zero velocity at t = 0, ẏ(0) = 0. These are your initial conditions. As before, the spring exerts a force opposite to the displacement direction, to return the spring and mass back to the equilibrium position.

First we analyze the spring and mass at the equilibrium position, where nothing is moving. With downward positive, the weight mg is positive and the spring force k·y_st, acting upward, is negative:

mg − k·y_st = 0.

We know g = 9.81 m/s² and we are given the spring stiffness, so we can find how much this spring stretched under the hanging mass. Moving the spring term to the other side and solving for y_st:

y_st = mg/k = (2)(9.81)/800 = 0.0245 m.

Now let's look at the spring and mass in motion, displaced above the equilibrium position with these initial conditions. Summing forces with downward positive, the weight mg is positive and the spring force, which acts on the total deformation y_st + y and opposes it, is negative:

mg − k(y_st + y) = m·ÿ.

(In the video I first wrote the signs the wrong way around and then corrected them; the spring force always opposes the total stretch.) Replacing y_st with the relationship mg/k from the previous slides, the k cancels, the mg terms cancel, and we can rewrite the equation as

m·ÿ + k·y = 0,

and, dividing both sides of the equation by m,

ÿ + (k/m)·y = 0, that is, ÿ + ωₙ²·y = 0,

since, as we mentioned in the previous slides, k/m is the square of the natural frequency. Just like the previous problem, this is a homogeneous ordinary differential equation, with the general solution y(t) = A sin(ωₙ t) + B cos(ωₙ t). Now we use the initial conditions to find the integration constants. The position at t = 0 is −0.05 m; since sin 0 = 0 and cos 0 = 1, the A term drops out and B = −0.05. The natural frequency is

ωₙ = √(k/m) = √(800/2) = 20 rad/s.

Plugging ωₙ and the zero initial velocity into the velocity equation, the B term drops out and A·ωₙ·cos 0 = 0, so A = 0. Now we have A, B, and the natural frequency, and we can plug these three values into the position function:

y(t) = −0.05 cos(20 t).

That is the equation representing this particular spring and mass, with its given stiffness, its given mass, and its given initial conditions; it is the function that represents the position while the system is vibrating. On the graph of this function, zero is your equilibrium position; the motion starts at B = −0.05 m, above equilibrium in the negative direction, because that is where the mass was released, and A is zero. Again we can find the time it takes to complete one cycle and come back to the same position, going back and forth. One cycle is 2π radians, so the period is

τ = 2π/ωₙ = 2π/20 = 0.3142 s.

The frequency is simply the reciprocal of this, f = 3.1831 Hz. What does that tell us? That this system, with this given spring constant, this given mass, and these given initial conditions, can complete about 3.18 cycles in one second. Finally, we want the amplitude, the highest point this mass reaches as a linear displacement from equilibrium:

C = √(A² + B²) = √(0² + 0.05²) = 0.05 m.

Now let's solve the third problem, 22-3.
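As with the first problem, the 22-2 result checks out numerically. A quick sketch (my own check, same symbols as the derivation):

```python
# Undamped response for problem 22-2: released from rest 50 mm above equilibrium.
import math

k, m = 800.0, 2.0
wn = math.sqrt(k / m)        # 20 rad/s
B, A = -0.05, 0.0            # y(0) = -0.05 m; y'(0) = 0 forces A = 0

def y(t):
    return A * math.sin(wn * t) + B * math.cos(wn * t)  # = -0.05 cos(20 t)

tau = 2 * math.pi / wn
print(wn, round(tau, 4), round(1/tau, 4), math.hypot(A, B))
# 20.0 0.3142 3.1831 0.05
```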
stretched 200 millimeters by a 15 kilogram block. If the block is displaced 100 millimeters downward from its equilibrium position and given a downward velocity of 0.75 meter per second, determine the equation which describes the motion. What is the phase angle? Assume that positive displacement is downward. So let's write down what's given: the spring stretches 0.2 meter due to the hanging 15 kilogram mass; the 15 kilogram block is then displaced 0.1 meter below the equilibrium position, where it attains a downward velocity of 0.75 meter per second; and positive displacement is downward. We want to determine the equation of motion and the phase angle.

Here is the spring at its natural, unstretched length. The spring stretches 0.2 meter due to the hanging 15 kilogram mass, so we know how much the spring is stretched and the mass that stretched it. This defines the equilibrium position, where we zero the position and take downward as the positive y direction. At t = 0 the block is pulled 0.1 meter below equilibrium, and as it is stretched down it attains a velocity of 0.75 meter per second, initial conditions similar to the first problem. The spring exerts a force opposite to the direction of the displacement.

First we draw the free body diagram for the spring and mass at the equilibrium position. Downward is positive y, the weight mg acts downward, and the spring force acts upward, opposite to the stretch. The sum of the forces in the y direction equals zero: mg − F_s = 0. Remember that mg is positive because downward is positive, and the force of the spring is upward, in the opposite direction. The spring force is the spring constant times how much the spring is deflected by the hanging 15 kilogram mass, so mg − k·y_st = 0. Moving the spring term to the other side of the equation gives mg = k·y_st. From this we could solve for either y_st or k; here y_st = 0.2 meter is given, so when we plug in the values, k = mg/y_st = (15 kg × 9.81 m/s²) ÷ 0.2 m = 735.75 newtons per meter. That's how stiff the spring is.

Now we draw the free body diagram for the spring and mass while the mass is moving. Downward is positive y, the block moves downward with positive acceleration, the weight mg acts downward, and the spring opposes the motion of the block with an upward force F_s. So mg − F_s = ma. Remember that the force of the spring is the spring stiffness times the total amount the spring is stretched: F_s = k(y + y_st), where y is the displacement from the equilibrium position and y_st is the static stretch due to the hanging block. So mg − k(y + y_st) = ma. Now we want to get rid of this minus sign in front of the parentheses, so we will
distribute it into the parentheses: mg − ky − k·y_st = ma. We can replace y_st using the relationship from the previous slide, y_st = mg/k. Plugging that in, the third term becomes k·(mg/k), the k cancels with the k, and we can rewrite the equation as mg − ky − mg = ma. The mg here cancels with this mg, and moving −ky to the other side of the equation to get rid of the negative sign gives m·ÿ + ky = 0. Now we can divide both sides of the equation by the mass: (m/m)·ÿ + (k/m)·y = 0. The mass cancels with the mass, and we can rewrite it as ÿ + (k/m)y = 0. Just as a reminder, the natural frequency is ωn = √(k/m); taking the square of both sides to get rid of the square root gives ωn² = k/m. Now we can substitute the natural frequency, which gives ÿ + ωn²·y = 0. Again, this is your differential equation, and the general solution is y = A sin(ωn·t) + B cos(ωn·t). Here we use the initial conditions of the problem at t = 0: the position y(0) = 0.1 meter and the velocity v(0) = 0.75 meter per second. From the found and given values we can also compute the natural frequency of the system: ωn = √(735.75 N/m ÷ 15 kg) ≈ 7 radians per second. We plug y(0) and the natural frequency into the position function; zero times anything gives zero, and since cosine of zero is one and sine of zero is zero, the sine term vanishes and we solve for B: B = 0.1. Now we use the initial velocity and the natural frequency in the velocity function, v = A·ωn·cos(ωn·t) − B·ωn·sin(ωn·t). Remember cosine of zero is one, and the B term cancels because sine of zero is zero, so we solve for A: 0.75 = A × 7, which gives A = 0.107. Now we can plug B, A, and the natural frequency into the position function: y = 0.107 sin(7t) + 0.1 cos(7t). With the found spring constant, the given mass, and the given initial conditions, that is the function that represents the vibration of this system.

So this is your function, and it is plotted as the red curve. Here we are asked to find the phase angle. The phase angle is simply the inverse tangent of the B constant over the A constant: φ = tan⁻¹(B/A) = tan⁻¹(0.1 ÷ 0.107) ≈ 43 degrees. Or we can rewrite the equation using the amplitude form of the general solution: y = 0.146 sin(7t + 0.751), where 7 is the natural frequency, C = √(A² + B²) = 0.146 is the amplitude, and 0.751 is 43 degrees converted to radians; that will be your phase angle. The other curve shows how the function would look if the phase angle were zero (I should have drawn it in blue, my mistake). When we talk about the phase angle, this shift is what we're talking about: the initial conditions shifted the curve to the left, from here to here, and that shift is the phase angle we are
talking about. All right, now the fourth problem, 22-4. Here we are told: when a 20 pound force weight is suspended from a spring, the spring is stretched a distance of 4 inches. Determine the natural frequency and the period of vibration of a 10 pound force weight attached to the same spring. So what is given? Weight number one, W1 = 20 pound force, is suspended from spring number one, and due to the suspended weight one, spring number one stretches 4 inches. Weight number two, W2 = 10 pound force, is then suspended from the same spring number one, and in this situation we want to know the natural frequency and the period, the time it takes to complete one cycle. So here is spring number one at its natural length. It stretches 4 inches due to the hanging 20 pound force weight, which is weight one, with downward taken as positive; the spring and weight are now at the equilibrium position. Then the same spring number one, again starting from its natural length, is stretched by a different weight of 10 pound force, W2.
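Before working problem 22-4, the recipe repeated in problems 22-2 and 22-3 above (find ωn = √(k/m), fit A and B from the initial conditions, then read off amplitude, phase, period and frequency) can be collected into one short function. This is an illustrative sketch, not part of the lecture; the function name and structure are my own:

```python
import math

def free_vibration(k, m, y0, v0):
    """Undamped spring-mass vibration y(t) = A*sin(wn*t) + B*cos(wn*t).

    Returns the natural frequency wn [rad/s], the constants A and B fitted
    from the initial position y0 and velocity v0, the amplitude C, the
    phase angle phi [rad], the period tau [s] and the frequency f [Hz].
    """
    wn = math.sqrt(k / m)      # natural frequency, wn^2 = k/m
    B = y0                     # y(0) = B,    since cos(0) = 1, sin(0) = 0
    A = v0 / wn                # v(0) = A*wn, since cos(0) = 1, sin(0) = 0
    C = math.hypot(A, B)       # amplitude C = sqrt(A^2 + B^2)
    phi = math.atan2(B, A)     # phase angle phi = atan(B/A)
    tau = 2 * math.pi / wn     # period of one cycle
    f = 1 / tau                # cycles per second
    return wn, A, B, C, phi, tau, f

# Problem 22-2: k = 800 N/m, m = 2 kg, y(0) = -0.05 m, v(0) = 0
wn, A, B, C, phi, tau, f = free_vibration(800, 2, -0.05, 0.0)
print(wn, B, tau, f, C)   # 20 rad/s, B = -0.05, tau ~ 0.314 s, f ~ 3.18 Hz

# Problem 22-3: k = mg/y_st = 15*9.81/0.2 = 735.75 N/m, y(0) = 0.1, v(0) = 0.75
wn3, A3, B3, C3, phi3, tau3, f3 = free_vibration(735.75, 15, 0.1, 0.75)
print(wn3, A3, C3, phi3)  # ~7 rad/s, A ~ 0.107, C ~ 0.146, phi ~ 0.75 rad (43 deg)
```

The same function applies to the remaining problems once k and m are known, which is why the lecture keeps returning to the static-deflection step that produces k.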
Due to that it stretches a certain amount and reaches a new equilibrium position. Now let's draw a free body diagram for spring number one and weight one. The 20 pound force weight, our weight one, acts downward, and the spring force opposes the deflection of the spring, acting upward; downward is positive y. Because we are at the equilibrium position, the sum of the forces in the y direction equals zero: W1 minus the force of the spring equals zero, that is, W1 − k1·y_st = 0. So W1 = k1·y_st, and k1 = W1/y_st = 20 pound force ÷ (4/12) foot = 60 pound force per foot, where dividing 4 by 12 converts inches to feet. Now consider spring number one with weight two. Dividing the weight by gravity gives the mass in slugs: m2 = W2/g = 10 ÷ 32.2 = 0.3106 slug. We are dealing with the same spring, so k1 = 60 pound force per foot, and the natural frequency is ωn = √(k1/m2) = √(60 ÷ 0.3106) = 13.90 radians per second. Next we want the period. The period is τ = 2π/ωn; plugging in the values we found, this system takes 0.452 second to complete one cycle. Now let's discuss a little what the natural frequency is. Every object, every material, has its mechanical properties; take aluminum, for example. When you stretch an aluminum
bar, the aluminum has elasticity. What is elasticity? It means you can apply a certain force or pressure, pulling the aluminum bar up to a certain level, and when you release that force the bar returns toward its original shape. But in reality, as it returns, the aluminum bar will vibrate. Every object, every material, has its own natural frequency, and the natural frequency of a system depends on how elastic the system is and on the mass of the system. As I mentioned before, a tuning fork, due to its material and its mass, starts vibrating at its own natural frequency when you strike it. And we can represent such a mechanical or dynamic system as a spring-and-mass model: instead of the elasticity we have the spring stiffness k, and instead of the mass of the aluminum bar we have the mass hanging from the spring. That is what natural frequency is.

You can also learn what natural frequency means just by looking at the units. What does a spring-mass system do? It oscillates. What does oscillating mean? It means it moves or swings back and forth. Here we are representing a linear motion as a circular motion. In the previous slides, when we showed the spring and the mass, the mass moved linearly in the vertical direction, but when we represent it as a function we use sine and cosine, and we use radians and degrees. Why? Because when you start from a position, move away, and come back to that same position, we can represent that motion as circular motion.

So let's take a look at this. Here you have your spring drawn as this rod, and this is the mass. The spring pulls the mass upward; that is the maximum amount the spring is compressed. Then the spring exerts a force downward to push the mass back to its original position. It went back and forth: it started from this position and ended at the same position. If you think about it, starting at this point and returning to this exact same point is one trip around the circle, and that is why we can treat it this way. The angle θ here is the angle in radians that corresponds to the arc: the arc represents the linear movement, and θ is its angular counterpart. That's why the period is 0.452 second, meaning that completing this one cycle takes 0.452 second. And the natural frequency of 13.90 radians per second means the system sweeps through 13.90 radians in one second, so it is like an angular velocity. The frequency is 1 over the time it takes to complete one cycle, f = 1/τ ≈ 2.21 hertz, which tells us that this system, with its given conditions, can vibrate at a rate of about 2.21 cycles per second.

All right, problem number five, 22-5. When a 3 kilogram block is suspended from a spring, the spring is stretched a distance of 60 millimeters. Determine the natural frequency and the period of the vibration for a 0.2 kilogram block attached to the same spring. So here we have mass number one, m1 = 3 kilograms, suspended from spring number one, and due to the suspended m1, spring number one stretched 0.06
meter. Mass number two, m2 = 0.2 kilogram, is then suspended from spring number one, and just like the previous problem we are asked to find the natural frequency and the period.

So here is spring number one. It stretched 0.06 meter due to the hanging 3 kilogram mass, and this is your equilibrium position. Then the same spring number one is stretched a certain amount by a different mass of 0.2 kilogram, reaching its equilibrium position. We want the natural frequency and the period for the spring-number-one-and-mass-number-two system, and we will get them by first finding the k value.

The free body diagram for spring number one and mass number one: the 3 kilogram weight acts downward, with downward as the positive y direction, and the spring force opposes the stretch. The sum of the forces in the y direction equals zero: m1·g minus the force of the spring equals zero, that is, m1·g − k1·y_st = 0, so m1·g = k1·y_st. Now we can solve for k1 by moving y_st to the other side of the equation: k1 = m1·g/y_st. Plug in the values we have, m1 = 3 kilograms, g = 9.81 meters per second squared, and y_st = 0.06 meter; plug this into your calculator and you will get k1 = 490.5 newtons per meter. That's how stiff the spring is.

Now we can plug this k value in. The natural frequency for spring number one with mass number two is ωn = √(k1/m2) = √(490.5 ÷ 0.2) = 49.52 radians per second, and the period is τ = 2π/ωn = 2π ÷ 49.52 rad/s = 0.127 second. That's how much time it takes this system to complete one cycle.
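Problems 22-4 and 22-5 follow the same "same spring, different weight" recipe: get k from the static deflection of the first weight, then ωn = √(k/m2) and τ = 2π/ωn for the second mass. A small sketch for checking the arithmetic (the helper name is my own, not from the lecture):

```python
import math

def freq_and_period_from_deflection(w1, delta, m2):
    """k from the static deflection of weight w1, then the natural
    frequency and period for a second mass m2 on the same spring.

    w1    -- weight (force) that produced the deflection
    delta -- static deflection of the spring under w1
    m2    -- mass attached afterwards (same unit system as w1 and delta)
    """
    k = w1 / delta             # equilibrium: w1 = k * delta
    wn = math.sqrt(k / m2)     # natural frequency [rad/s]
    tau = 2 * math.pi / wn     # period [s]
    return k, wn, tau

# Problem 22-4 (US units): W1 = 20 lb, delta = 4 in = 4/12 ft, m2 = 10/32.2 slug
k4, wn4, tau4 = freq_and_period_from_deflection(20, 4 / 12, 10 / 32.2)
print(k4, wn4, tau4)   # 60 lb/ft, ~13.90 rad/s, ~0.452 s

# Problem 22-5 (SI): W1 = m1*g = 3*9.81 N, delta = 0.06 m, m2 = 0.2 kg
k5, wn5, tau5 = freq_and_period_from_deflection(3 * 9.81, 0.06, 0.2)
print(k5, wn5, tau5)   # 490.5 N/m, ~49.52 rad/s, ~0.127 s
```

Note that in the US-unit problem the given 10 lb is a weight, so it must be divided by g = 32.2 ft/s² to get the mass in slugs before computing ωn, exactly as done in the lecture.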
190602
http://www.infocobuild.com/education/audio-video-courses/mathematics/Statistics110-Harvard/lecture-08.html
Statistics 110: Probability (Harvard Univ.): Lecture 08 - Random Variables and Their Distributions

Statistics 110 - Probability

Statistics 110: Probability (Harvard Univ.). Taught by Professor Joe Blitzstein, this course is an introduction to probability as a language and set of tools for understanding statistics, science, risk, and randomness. The ideas and methods are useful in statistics, science, engineering, economics, finance, and everyday life. Topics include the following. Basics: sample spaces and events, conditioning, Bayes' Theorem. Random variables and their distributions: distributions, moment generating functions, expectation, variance, covariance, correlation, conditional expectation. Univariate distributions: Normal, t, Binomial, Negative Binomial, Poisson, Beta, Gamma. Multivariate distributions: joint, conditional, and marginal distributions, independence, transformations, Multinomial, Multivariate Normal. Limit theorems: law of large numbers, central limit theorem. Markov chains: transition probabilities, stationary distributions, reversibility, convergence.

Lecture 08 - Random Variables and Their Distributions

This lecture discusses distributions, cumulative distribution functions (CDFs), probability mass functions (PMFs), and the Hypergeometric distribution.
Go to the Course Home or watch other lectures:

Lecture 01 - Probability and Counting
Lecture 02 - Story Proofs, Axioms of Probability
Lecture 03 - Birthday Problem, Properties of Probability
Lecture 04 - Conditional Probability
Lecture 05 - Conditioning Continued, Law of Total Probability
Lecture 06 - Monty Hall, Simpson's Paradox
Lecture 07 - Gambler's Ruin and Random Variables
Lecture 08 - Random Variables and Their Distributions
Lecture 09 - Expectation, Indicator Random Variables, Linearity
Lecture 10 - Expectation Continued
Lecture 11 - The Poisson distribution
Lecture 12 - Discrete vs. Continuous, the Uniform
Lecture 13 - Normal distribution
Lecture 14 - Location, Scale, and LOTUS
Lecture 15 - Midterm Review
Lecture 16 - Exponential Distribution
Lecture 17 - Moment Generating Functions
Lecture 18 - MGFs Continued
Lecture 19 - Joint, Conditional, and Marginal Distributions
Lecture 20 - Multinomial and Cauchy
Lecture 21 - Covariance and Correlation
Lecture 22 - Transformations and Convolutions
Lecture 23 - Beta distribution
Lecture 24 - Gamma distribution and Poisson process
Lecture 25 - Order Statistics and Conditional Expectation
Lecture 26 - Conditional Expectation Continued
Lecture 27 - Conditional Expectation given an R.V.
Lecture 28 - Inequalities
Lecture 29 - Law of Large Numbers and Central Limit Theorem
Lecture 30 - Chi-Square, Student-t, Multivariate Normal
Lecture 31 - Markov Chains
Lecture 32 - Markov Chains Continued
Lecture 33 - Markov Chains Continued Further
Lecture 34 - A Look Ahead
190603
https://www.spring-ford.net/uploaded/Math/GC_Interior_Exterior_angles_of_polygons_Extra_Practice.pdf
GEOMETRY Connections — INTERIOR AND EXTERIOR ANGLES OF POLYGONS #14

The sum of the measures of the interior angles of an n-gon is sum = (n − 2)·180°. The measure of each angle in a regular n-gon is m∠ = (n − 2)·180°/n. The sum of the exterior angles of any n-gon is 360°.

Example 1. Find the sum of the interior angles of a 22-gon.
Since the polygon has 22 sides, we can substitute this number for n: (n − 2)·180° = (22 − 2)·180° = 20·180° = 3600°.

Example 2. If the 22-gon is regular, what is the measure of each angle?
Use the sum from the previous example and divide by 22: 3600°/22 ≈ 163.64°.

Example 3. Each angle of a regular polygon measures 157.5°. How many sides does this n-gon have?
a) Solving algebraically: 157.5° = (n − 2)·180°/n ⟹ 157.5°·n = (n − 2)·180° ⟹ 157.5°·n = 180°·n − 360° ⟹ −22.5°·n = −360° ⟹ n = 16.
b) If each interior angle is 157.5°, then each exterior angle is 180° − 157.5° = 22.5°. Since the sum of the exterior angles of any n-gon is 360°, 360° ÷ 22.5° = 16 sides.

Example 4. Find the area of a regular 7-gon with sides of length 5 ft.
Because the regular 7-gon is made up of 7 identical isosceles triangles, we need to find the area of one and then multiply it by 7. In order to find the area of each triangle we need to start with the angles of each triangle. (Figure: isosceles triangle with height h, base split into 2.5 and 2.5, base angle ∠1.) Each interior angle of the regular 7-gon measures (7 − 2)·180°/7 = 5·180°/7 = 900°/7 ≈ 128.57°. The angle in the triangle is half the size of the interior angle, so m∠1 ≈ 128.57°/2 ≈ 64.29°. Find the height of the triangle by using the tangent ratio: tan ∠1 = h/2.5 ⟹ h = 2.5·tan ∠1 ≈ 5.19 ft. The area of the triangle is (5 × 5.19)/2 ≈ 12.98 ft². Thus the area of the 7-gon is 7 × 12.98 ≈ 90.86 ft².

© 2007 CPM Educational Program. All rights reserved.

Extra Practice

Find the measures of the angles in each problem below.
1. Find the sum of the interior angles in a 7-gon.
2. Find the sum of the interior angles in an 8-gon.
3. Find the size of each of the interior angles in a regular 12-gon.
4. Find the size of each of the interior angles in a regular 15-gon.
5. Find the size of each of the exterior angles of a regular 17-gon.
6. Find the size of each of the exterior angles of a regular 21-gon.

Solve for x in each of the figures below.
7. (Figure: quadrilateral with angles 3x, 4x, 5x, 3x)
8. (Figure: quadrilateral with angles 2x, 5x, 4x, x)
9. (Figure: pentagon with angles x, x, 0.5x, 1.5x, 1.5x)
10. (Figure: hexagon with angles 3x, 4x, 5x, 5x, 2x, 4x)

Answer each of the following questions.
11. Each exterior angle of a regular n-gon measures 16 4/11°. How many sides does this n-gon have?
12. Each exterior angle of a regular n-gon measures 13 1/3°. How many sides does this n-gon have?
13. Each angle of a regular n-gon measures 156°. How many sides does this n-gon have?
14. Each angle of a regular n-gon measures 165.6°. How many sides does this n-gon have?
15. Find the area of a regular pentagon with side length 8 cm.
16. Find the area of a regular hexagon with side length 10 ft.
17. Find the area of a regular octagon with side length 12 m.
18. Find the area of a regular decagon with side length 14 in.

Answers
1. 900°  2. 1080°  3. 150°  4. 156°  5. ≈ 21.1765°  6. ≈ 17.1429°  7. x = 24°  8. x = 30°  9. x ≈ 98.18°  10. x ≈ 31.30°  11. 22 sides  12. 27 sides  13. 15 sides  14. 25 sides  15. ≈ 110.1106 cm²  16. ≈ 259.8076 ft²  17. ≈ 695.2935 m²  18. ≈ 1508.0649 in²
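The formulas in this handout, and several of the answers above, can be checked with a few lines of code. This is an illustrative sketch (the helper names are mine); the regular-polygon area formula n·s²/(4·tan(π/n)) is the same triangle decomposition used in Example 4, since the triangle height is (s/2)·tan(∠1) = (s/2)/tan(π/n):

```python
import math

def interior_sum(n):
    """Sum of the interior angles of an n-gon, in degrees."""
    return (n - 2) * 180

def interior_angle(n):
    """Each interior angle of a regular n-gon, in degrees."""
    return interior_sum(n) / n

def sides_from_interior(angle):
    """Number of sides of a regular n-gon with the given interior angle.
    Uses the exterior angle: n = 360 / (180 - angle)."""
    return 360 / (180 - angle)

def regular_area(n, s):
    """Area of a regular n-gon with side length s: n identical isosceles
    triangles, each with base s and height (s/2) / tan(pi/n)."""
    return n * s * s / (4 * math.tan(math.pi / n))

print(interior_sum(22))            # 3600 (Example 1)
print(interior_angle(12))          # 150.0 (problem 3)
print(sides_from_interior(157.5))  # 16.0 (Example 3)
print(regular_area(7, 5))          # ~90.85 (Example 4, which rounds up to 90.86)
print(regular_area(5, 8))          # ~110.11 (problem 15)
```

The small discrepancy in Example 4 (90.85 vs. 90.86) comes from the handout rounding the height to 5.19 ft before multiplying.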
190604
https://www.un.org/en/development/desa/population/publications/pdf/technical/TP2013-3.pdf
United Nations Department of Economic and Social Affairs, Population Division. Technical Paper No. 2013/3: Demographic Components of Future Population Growth. Kirill Andreev, Vladimíra Kantorová and John Bongaarts. United Nations · New York, 2013

NOTE

The views expressed in the paper do not imply the expression of any opinion on the part of the United Nations Secretariat. The designations employed and the presentation of material in this paper do not imply the expression of any opinion whatsoever on the part of the United Nations Secretariat concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The term "country" as used in this paper also refers, as appropriate, to territories or areas. This publication has been issued without formal editing.

PREFACE

The Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat is responsible for providing the international community with up-to-date and scientifically objective information on population and development. The Population Division provides guidance on population and development issues to the United Nations General Assembly, the Economic and Social Council and the Commission on Population and Development and undertakes regular studies on population estimates and projections, fertility, mortality, migration, reproductive health, population policies and population and development interrelationships. The purpose of the Technical Paper series is to publish substantive and methodological research on population issues carried out by experts within and outside the United Nations system.
The series promotes scientific understanding of population issues among Governments, national and international organizations, research institutions and individuals engaged in social and economic planning, research and training.

This paper presents the contributions of each demographic component—the current age structure of the population, fertility, mortality and migration—to future population growth. Quantifying the roles of the demographic drivers of future population trends is important for developing policies and programmes aimed at balancing impending demographic changes and social, economic and environmental objectives. Contributions of demographic components have been estimated by constructing a series of appropriate cohort-component population projections: Standard, Natural, Replacement and Momentum. The analysis is based on the 2012 Revision of World Population Prospects. Results are presented for 201 countries or areas with a total population of more than 90,000 inhabitants as of July 1, 2013, and for the world, major areas and regions.

The authors are grateful to Ann Biddlecom for many stimulating discussions about the subject, for her assiduous work in revising the report, and for her help and encouragement in the development of this project. The authors also thank François Pelletier for reviewing the report and for overall supervision of the project. Gerhard Heilig, François Pelletier, Patrick Gerland, Kirill Andreev, Danan Gu, Nan Li and Thomas Spoorenberg, of the Population Division, produced the 2012 Revision of the World Population Prospects, on which all analyses in this technical report are based. Chandrasekhar Yamarthy, Igor Ribeiro, and Neena Koshy, of the Population Division, provided technical support for the production of the 2012 Revision. The authors also thank Kyaw-Kyaw Lay for preparing the annex tables and Neena Koshy for preparing the final document.
The Technical Paper series as well as other population information may be accessed on the Population Division’s website at www.unpopulation.org. For further information concerning this publication, please contact the office of the Director, Population Division, Department of Economic and Social Affairs, United Nations, New York, 10017, USA, telephone (212) 963-3179, fax (212) 963-2147, email: [email protected] v CONTENTS Page PREFACE ............................................................................................................................................................. iii INTRODUCTION ................................................................................................................................................... 1 METHODOLOGY AND ILLUSTRATIVE CASES ........................................................................................................ 2 RESULTS ............................................................................................................................................................. 8 CONCLUSIONS ..................................................................................................................................................... 17 REFERENCES ....................................................................................................................................................... 19 TABLES 1. Total population in 2010 and 2100, the world, major areas and development groups .................................. 8 2. Contribution of demographic components to population growth from 2010 to 2100, the world, major areas and development groups ............................................................................................................ 9 3. Number of countries with largest absolute contributions by demographic components, 2050 and 2100 ..... 12 FIGURES 1. 
Population projection variants and contribution of demographic components to population growth, the world, 2010-2100 .................................................................................................................................... 4 2. Population projection variants and contribution of demographic components to population growth, Nigeria, 2010-2100 ....................................................................................................................................... 5 3. Population projection variants and contribution of demographic components to population growth, Brazil, 2010-2100 ......................................................................................................................................... 6 4. Population projection variants and contribution of demographic components to population growth, the Russian Federation, 2010-2100 ............................................................................................................... 7 5. Change in demographic components over time, as a proportion of total population in 2010, major areas, 2010-2100 ..................................................................................................................................................... 11 6. Contributions of demographic components as a proportion of total population in 2010, 2010-2100 (per cent) .................................................................................................................................... 13 7. Countries with largest absolute contributions of population momentum to population growth from 2010 to 2100 (thousands) ............................................................................................................................. 14 8. 
Countries with largest absolute contributions of fertility levels above and below replacement level to population growth from 2010 to 2100 (thousands)
9. Countries with largest absolute contributions of reductions in mortality to population growth from 2010 to 2100 (thousands)

ANNEX TABLES

1. Contribution of demographic components to population growth as proportion of the total population in 2010 and as proportion of the change in total population, 2010-2100
2. Total population by projection variants (in thousands), 2010-2100
3. Demographic components of population growth (in thousands), 2010-2100

ANNEX FIGURES

Demographic components of future population growth for the world, major areas, regions and countries

DEMOGRAPHIC COMPONENTS OF FUTURE POPULATION GROWTH

Kirill Andreev, Vladimíra Kantorová and John Bongaarts

INTRODUCTION

The world population is growing. By the end of the century, it is expected to increase by 3.7 billion people, rising from 7.2 billion in mid-2013 to 10.9 billion by 2100 (United Nations, 2013a). The distribution of future population growth by countries, regions and major areas is readily available from population projections prepared by the Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat (United Nations, 2013a). This report identifies and estimates the major demographic components of future growth: fertility, mortality, migration and the current age structure of the population.
This decomposition is useful for understanding the relative weight of the key factors that drive population growth, and it can inform policies and programmes aimed at balancing impending demographic changes with social, economic, health and environmental objectives (Bongaarts, 1994; Bongaarts and Bulatao, 1999).

The projections prepared by the United Nations Population Division are based on a theoretical framework known as the demographic transition. Over the course of the transition, populations move from a regime of high mortality and high fertility to a regime of low mortality and low fertility. Rapid population growth takes place during the transition because mortality decline typically begins before fertility decline: as death rates fall but birth rates remain high, the number of births exceeds the number of deaths and the population therefore grows. The countries that are still at the beginning or in the middle of the demographic transition are expected to complete their transitions over the next several decades, with both fertility and mortality levels assumed to decline. For the countries that have already completed their demographic transitions, mortality is still assumed to be declining, but fertility is expected to fluctuate around or below a level of about two children per woman. For the countries with natural growth close to zero (i.e., when the number of deaths is approximately equal to the number of births), future population trajectories are influenced to a greater extent by assumptions about future migration into or out of the country.

Future population trajectories therefore depend on assumptions about future trends in fertility, mortality and migration. In addition, the current population age structure influences future growth by affecting the overall numbers of births, deaths and migrations implied by the fertility, mortality and migration rates.
All four demographic components can have a significant impact, positive or negative, on future population growth. Fertility makes a positive contribution to population growth if it is above replacement level and a negative contribution if it is below replacement level. The concept of replacement fertility is important in population projections because maintaining fertility at the replacement level in the long run leads to a stationary population and a stabilization of population growth (Preston et al., 2000). If fertility stays above replacement with constant mortality and zero migration, the population will grow indefinitely; similarly, if fertility stays below replacement, the population will eventually decline to zero.

To achieve replacement-level fertility, women, on average, need to have one surviving daughter. In a population in which all females survive through the reproductive years and the probability of having a daughter at each pregnancy is 50 per cent, total fertility at replacement would be 2.0 children per woman. In reality, replacement-level fertility is slightly higher than 2.0 children per woman because the chances of survival from birth to the reproductive ages are less than 100 per cent and more boys are born than girls (i.e., the sex ratio at birth is greater than 100).

(Author affiliations, from the title page: Population Estimates and Projection Section, Population Division/DESA, United Nations; Fertility and Family Planning Section, Population Division/DESA, United Nations; Population Council.)

The contribution of mortality to population growth will be positive if mortality is declining and negative if mortality is increasing. In population projections, a positive outlook for the future is usually adopted: life expectancy at birth is expected to continue to increase, and death rates are expected to decline in all age groups. Under this assumption, the contribution of mortality to population growth will be positive.
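The replacement-fertility arithmetic described above can be sketched in a few lines. This is an illustrative computation, not UN methodology: the function name, the 97 per cent survival figure and the sex ratio of 105 are assumptions chosen only to show why the replacement level sits slightly above 2.0.

```python
# Replacement-level total fertility: each woman must have, on average,
# exactly one surviving daughter (net reproduction rate = 1).
# TFR_replacement = 1 / (share of births that are girls
#                        * probability a girl survives to childbearing ages)

def replacement_tfr(sex_ratio_at_birth: float, p_survive: float) -> float:
    """TFR at which each woman has one surviving daughter on average.

    sex_ratio_at_birth: boys born per 100 girls (illustrative value ~105).
    p_survive: probability a girl survives from birth to the mean age
               of childbearing (illustrative value).
    """
    share_female = 100.0 / (100.0 + sex_ratio_at_birth)
    return 1.0 / (share_female * p_survive)

# Idealized case from the text: 50/50 sex ratio, full survival -> TFR = 2.0
print(round(replacement_tfr(100.0, 1.0), 2))   # 2.0
# With excess male births and imperfect survival, replacement is above 2.0
print(round(replacement_tfr(105.0, 0.97), 2))  # slightly above 2
```

With the (hypothetical) inputs of 105 boys per 100 girls and 97 per cent survival, the replacement level comes out near 2.1 children per woman, consistent with the text's statement that it is "slightly higher than 2.0".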
In more complex cases, death rates do not decline uniformly over all age groups but rather increase for some age groups while declining for others, as in countries that have been severely affected by the HIV/AIDS epidemic. In these cases, the contribution of mortality to population growth is less clear. The contribution of mortality may also be related to the interplay between age-specific mortality rates and the population age structure.

Assumptions about future migration are incorporated in the United Nations population projections by specifying net international migration levels and migration distributions by age and sex. Projected levels of net migration are kept constant in the near term. After 2050, net migration is assumed to decline gradually and reach zero by 2100. The contribution of migration to population growth is determined by net migration: positive net migration will contribute to population increase and negative net migration will reduce the population.

The population age structure at the starting point of the projection also influences the future growth trajectory. Even under assumptions of fertility at replacement level, constant mortality and no migration, the total population will not necessarily remain constant: it could either increase or decrease before reaching a stationary size. This phenomenon is called the momentum of population growth, and its value is defined as the ratio of the ultimate population size to the current population size (Keyfitz, 1971). In countries in the midst of the demographic transition, with young age structures, the total population will continue to grow because the births produced by a large number of females of reproductive age will exceed deaths, even if total fertility is at replacement level. In this case, population momentum has a positive effect on population growth.
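Positive momentum can be demonstrated with a toy projection. The sketch below is not the UN cohort-component model: it is a minimal three-age-group, female-only projection with made-up survival and fertility rates, calibrated so that the long-run growth rate is exactly one (replacement). Starting from an age structure younger than the stationary one, the total still grows before levelling off, which is exactly the Keyfitz momentum effect described above.

```python
# Toy cohort-component projection illustrating population momentum.
# Three 25-year age groups; fertility fixed at replacement (dominant
# growth rate 1), mortality constant -- yet a young age structure grows.
# All rates are illustrative assumptions, not UN data.

S0, S1 = 0.95, 0.90                 # survival between successive age groups
F2 = 0.20                           # fertility of the older reproductive group
F1 = (1.0 - F2 * S0 * S1) / S0      # chosen so that F1*S0 + F2*S0*S1 = 1

def project(n, steps):
    """Iterate the projection, returning the total population at each step."""
    totals = [sum(n)]
    for _ in range(steps):
        births = F1 * n[1] + F2 * n[2]          # new youngest age group
        n = [births, S0 * n[0], S1 * n[1]]      # survivors age forward
        totals.append(sum(n))
    return totals

young = [0.5, 0.3, 0.2]             # younger than the stationary structure
totals = project(young, 200)
momentum = totals[-1] / totals[0]   # ultimate size / initial size (Keyfitz)
print(round(momentum, 3))           # > 1: growth from age structure alone
```

Because the initial population is concentrated in the pre-reproductive and reproductive groups, births exceed deaths for several projection steps even at replacement fertility, and the total settles at a stationary level about 10 per cent above the starting size in this toy example.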
In the countries that have completed the demographic transition and have relatively old age structures, owing to long periods of low fertility and population ageing, the total population will decline before reaching its ultimate size. In this case, population momentum has a negative effect on population growth. The population growth (or decline) brought about by population momentum can be attributed exclusively to the initial age structure of the population.

This report assesses the contributions of each demographic component (fertility, mortality, migration and the current age structure of the population) to future population growth. The analysis is based on the 2012 Revision of World Population Prospects (United Nations, 2013b). Results are presented for 201 countries or areas with a total population of more than 90,000 inhabitants as of 1 July 2013, and for the world, major areas and regions.

METHODOLOGY AND ILLUSTRATIVE CASES

The analysis presented in this report quantifies the contributions of the current age structure of a population, fertility, mortality and migration to future population growth. To measure the contribution of a single demographic component, this report relies on a procedure proposed by Bongaarts and Bulatao (1999), which consists of constructing a series of appropriate cohort-component population projections. The series of projections starts with a Standard population projection, which incorporates the effects of all four demographic components. For this analysis, the Standard population projection is set equal to the Medium variant from the 2012 Revision of World Population Prospects (United Nations, 2013b). This projection starts with the population by age and sex in 2010 and projects future population trajectories up to 2100 based on expected trends in fertility, mortality and net international migration, computed according to the methodology used in the 2012 Revision of World Population Prospects (United Nations, 2013b).
The effect of migration is estimated by constructing a Natural population projection variant, which is derived from the Standard variant by setting net migration to zero. Population growth in this case is driven only by natural increase, based on assumptions about future fertility and mortality, and by the initial age distribution. The difference in total population between the Standard and Natural variants shows the effect of net migration on future population growth.

The effect of fertility is estimated by a Replacement projection variant, which is derived from the Natural variant by setting total fertility at replacement level for each five-year projection period. The difference between the Natural and Replacement projection variants shows the effect of total fertility, above or below replacement level, on overall population growth. Note that the Replacement projection variant is different from the instant-replacement variant published in the 2012 Revision of World Population Prospects (United Nations, 2013b), because the latter includes the effect of migration while the former does not.

The last projection variant, Momentum, is constructed by holding mortality rates constant at their 2010 levels, holding fertility constant at replacement level, and setting net migration to zero. The difference in total population between the Replacement and Momentum variants shows the effect of anticipated mortality decline on future population size. It is important to note that trends in mortality between birth and the reproductive ages are taken into account by the changes that occur in the replacement levels of fertility. The difference between the Replacement and Momentum projections therefore measures only the effect of mortality change among adults above the average age at childbearing. Lastly, the difference between the starting total population in 2010 and the Momentum variant is attributable to the initial age structure of the population.
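The chain of differences between successive variants can be written out directly. In the sketch below, the function is a hypothetical helper, and the 2100 variant totals for the world (in millions) are reconstructed from the figures quoted later in the report (2010 population 6,916; Standard 10,854; and component contributions of 953, 1,128 and 1,857 million), so Replacement = 9,901 and Momentum = 8,773 are derived values, not independently sourced numbers.

```python
# Bongaarts-Bulatao decomposition: each component's contribution is the
# difference between two successive projection variants.

def decompose(p2010, standard, natural, replacement, momentum):
    """Return the contribution of each demographic component (same units
    as the inputs)."""
    return {
        "migration": standard - natural,        # Standard minus Natural
        "fertility": natural - replacement,     # Natural minus Replacement
        "mortality": replacement - momentum,    # Replacement minus Momentum
        "momentum":  momentum - p2010,          # Momentum minus 2010 start
    }

# World totals in 2100, millions (Replacement and Momentum reconstructed
# from the component contributions quoted in the text).
world = decompose(p2010=6916, standard=10854, natural=10854,
                  replacement=9901, momentum=8773)
print(world)
# Sanity check: the four components sum to the total change, 3,938 million.
print(sum(world.values()))
```

Note that for the world, Natural equals Standard (a closed system), so the migration component is zero by construction, exactly as figure 1 shows.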
If fertility declines immediately to replacement level, as in the Momentum variant, the population does not immediately stabilize; instead, it may continue to increase or decrease for a few decades before it eventually levels off at the ultimate stationary size. This series of cohort-component projections is calculated for individual countries to estimate the contributions of each of the demographic components to future population growth.

Aggregates for the world, major areas and regions published in the 2012 Revision of World Population Prospects (United Nations, 2013b) can be computed in two different ways. In the first approach, an aggregate (e.g., the total population of the world) can be treated as a single "country", and the contributions of the demographic components can be estimated by running a series of cohort-component projections as described above. For the Momentum projection variant, for example, this approach implies that world fertility falls immediately to replacement level. In the second approach, the projections for an aggregate can be computed by summing the corresponding projections for individual countries. The Momentum projection for the world in this case is the sum of the Momentum projections for all countries. The two approaches do not necessarily generate the same results. Computations show that the ultimate population size of the Momentum projection for the world is 174 million higher if the first approach is used (treating the world as a country) than if the second is used (summing the Momentum projections for all countries). The difference is due to compositional changes of the world population over the projection period. However, the estimated contributions of the four demographic components are quite close regardless of the aggregation method.
For consistency with the aggregation procedures used in the 2012 Revision of World Population Prospects, the second approach was adopted in this analysis for producing projections of aggregated populations.

Figure 1 shows the components of population growth for the world population. The total population of the world is estimated at 6.9 billion people as of 1 July 2010. By 2100, the total population is expected to increase by 3.9 billion (light blue arrow labeled "ΔP") in the Standard variant, reaching 10.9 billion people (the numbers may not sum exactly due to rounding). With no migration at the world level, the Natural projection variant is the same as the Standard variant and the effect of migration on population growth at the world level is zero (i.e., the world is a closed system). If total fertility is maintained at replacement level and mortality is declining, the total population will reach 9.9 billion people in 2100 (Replacement variant). The difference between the Natural and Replacement variants, 0.95 billion people, is the contribution of fertility above replacement to future population growth (green arrow labeled "Pf"). In the Momentum variant, the total population continues to grow for approximately five decades before stabilizing at an ultimate size of 8.8 billion people. The difference between the Replacement and Momentum variants, 1.13 billion people in 2100, is due to reductions in adult mortality over the projection period (blue arrow labeled "Pmor"). Lastly, the difference of 1.86 billion people between the starting population in 2010 and the ultimate population size of the Momentum variant is due to the young age structure of the world population in 2010 (magenta arrow labeled "Py").
In sum, out of the total growth in world population of 3.94 billion people between 2010 and 2100, 1.86 billion is due to the young age structure in 2010, 1.13 billion is due to further reductions in mortality, and 0.95 billion is due to fertility above replacement level. Expressed as a proportion of the total population increase, the contributions are 47 per cent from population momentum, 29 per cent from mortality reductions and 24 per cent from above-replacement fertility levels (figure 1, the bar chart in the bottom right corner, right-hand axis).

Figure 1. Population projection variants and contribution of demographic components to future population growth, the world, 2010-2100

[Line chart of world population, 2010-2100, under the Standard, Natural, Replacement and Momentum variants, with arrows marking ΔP, Py, Pmor and Pf; bar charts of the contributions by component for 2050 (ΔP = +2,635 million, +38.1 per cent) and 2100 (ΔP = +3,938 million, +56.9 per cent), scaled both as a percentage of the total population in 2010 and as a percentage of the total change.]

NOTE: ΔP – total population change, 2010-2100; Py – increase due to younger age structure in 2010; Pmor – increase due to mortality reductions; Pf – increase due to fertility above replacement.

Expressed as a proportion of the total population in 2010, the contributions of the demographic components to population growth from 2010 to 2100 are 27 per cent from population momentum, 16 per cent from mortality reductions and 14 per cent from above-replacement fertility (figure 1, the bar chart in the bottom right corner, left-hand axis).
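The two percentage scales used in figure 1 follow directly from the absolute contributions quoted in the text. The short check below re-derives both; the component values (in millions) are the world figures given above, and the dictionary names are ours.

```python
# Re-deriving figure 1's two percentage scales from the absolute world
# contributions (millions of people) quoted in the text.

p2010 = 6916                                   # world population in 2010
components = {"momentum": 1857, "mortality": 1128,
              "fertility": 953, "migration": 0}
change = sum(components.values())              # total change 2010-2100: 3938

# Right-hand axis: share of the total population change.
share_of_change = {k: round(100 * v / change) for k, v in components.items()}
# Left-hand axis: share of the total population in 2010.
share_of_2010 = {k: round(100 * v / p2010) for k, v in components.items()}

print(share_of_change)  # momentum 47, mortality 29, fertility 24, migration 0
print(share_of_2010)    # momentum 27, mortality 16, fertility 14, migration 0
```

Both sets of rounded percentages match those reported in the text, which confirms that the two axes are simply the same absolute contributions divided by two different denominators.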
The components of population growth for the period 2010-2050 are presented in the bar chart in the top right corner (figure 1). In the annex, similar figures are presented for the world, major areas and regions, and for 201 countries or areas with a total population of more than 90,000 inhabitants as of 1 July 2013 (annex figure 1).

Nigeria (figure 2) provides an example typical of countries where future population growth is mainly driven by high fertility. The total population of Nigeria is expected to increase from 160 million in 2010 to 914 million in 2100, or by 754 million people. This increase is designated by the light blue arrow in the chart (labeled "ΔP"). The total increase is the sum of four components: a) an increase due to a young age structure (magenta arrow labeled "Py" for population momentum); b) an increase due to mortality reductions (blue arrow labeled "Pmor"); c) an increase due to fertility above replacement level (green arrow labeled "Pf"); and d) a small decline due to anticipated net emigration (brown arrow labeled "Pmig"):

ΔP = Py + Pmor + Pf + Pmig

Figure 2. Population projection variants and contribution of demographic components to future population growth, Nigeria, 2010-2100

NOTE: ΔP – total population change, 2010-2100; Py – increase due to younger age structure in 2010; Pmor – increase due to mortality reductions; Pf – increase due to fertility above replacement; Pmig – decrease due to negative net migration.

Mortality reductions account for only 6 per cent of the projected total population growth in Nigeria, and the young age structure in 2010 accounts for about 9 per cent. The overwhelming contribution to population growth in Nigeria, 86 per cent or 647 million people, is accounted for by above-replacement fertility.
In other words, the population of Nigeria is expected to increase more than fourfold by 2100 due to fertility levels that are above replacement. Note also that figure 2 is similar to figure 1, except that the bar charts on the right have been removed for simplicity of presentation and the arrows are placed to the right, outside the plot area.

Countries with projected population growth near zero represent a complex interplay of demographic components. In Brazil, for example, nearly zero population growth is expected between 2010 and 2100: the population increase due to a young age structure and expected mortality reductions is offset by total fertility below replacement (figure 3). Population momentum (figure 3, magenta arrow) adds 29 per cent to the total population size of 2010, and mortality reductions (figure 3, blue arrow) add a further 21 per cent. The combined increase of 50 per cent from both factors is offset by a decline in population caused by total fertility below replacement level, which amounts to -49 per cent (figure 3, green downward arrow). An additional small decline of about -2 per cent is due to assumed net emigration from the country (figure 3, brown arrow). Brazil is also an example of a country where close-to-zero population growth results in percentages of the demographic components relative to the total change in population from 2010 to 2100 that are not intuitive. In this and similar cases, the right-hand axis of the bar charts is excluded from the country profile plots (annex figure 1).

Figure 3.
Population projection variants and contribution of demographic components to future population growth, Brazil, 2010-2100

NOTE: ΔP – total population change, 2010-2100; Py – increase due to younger age structure in 2010; Pmor – increase due to mortality reductions; Pf – decrease due to fertility below replacement; Pmig – decrease due to negative net migration.

[Line chart of the population of Brazil, 2010-2100, under the Standard, Natural, Replacement and Momentum variants, with arrows marking the components of population change.]

The Russian Federation illustrates another case in which the four demographic components affect projected total population in both positive and negative directions. In the Russian Federation, the total population is expected to decline by 41.7 million people, from 143.6 million in 2010 to 101.9 million in 2100 (figure 4, downward light blue arrow). The largest contribution to the total decline in population (a decline of 47.2 million people) is due to fertility (figure 4, downward green arrow), which is projected to stay below replacement level through 2100. The Russian Federation also provides an example of negative population momentum, another contributor to the expected population decline. If mortality is kept constant at its 2010 level and fertility is set to replacement level (which would mean an immediate increase in total fertility from the current below-replacement level), the total population is still projected to decline to 122.8 million in 2100, a difference of -20.8 million people from 2010 (figure 4, downward magenta arrow). Negative population momentum arises because the initial age structure is older than the age structure of the ultimate stationary population.
The mortality and migration components, by contrast, are expected to provide positive contributions to population growth, offsetting the total population decline by 16.8 and 9.5 million people, respectively (figure 4, upward blue and brown arrows). Expressed as proportions of the total population in 2010, the projected population change in the Russian Federation from 2010 to 2100 comprises a decline of 33 per cent due to fertility, a decline of 14 per cent due to a relatively old age structure in 2010, an increase of 12 per cent due to further reductions in mortality, and an increase of 7 per cent due to positive net migration (figure 4).

Figure 4. Population projection variants and contribution of demographic components to future population growth, the Russian Federation, 2010-2100

NOTE: ΔP – total population change, 2010-2100; Py – decrease due to older age structure in 2010; Pmor – increase due to mortality reductions; Pf – decrease due to fertility below replacement; Pmig – increase due to positive net migration.

[Line chart of the population of the Russian Federation, 2010-2100, under the Standard, Natural, Replacement and Momentum variants, with arrows marking the components of population change.]

RESULTS

World

A large part of the increase in the world population from 2010 to 2100 can be attributed to the population momentum that results from the young age structure of the world population today. The world population is projected to increase from 6.9 billion in 2010 to 10.9 billion in 2100, according to the medium projection variant (table 1). Nearly half of the projected population growth to 2100 will be due to population momentum, accounting for 1.9 billion people, or 47 per cent of the total increase in population (table 2).
In other words, if total fertility were set to replacement level and mortality remained unchanged from 2010 onwards in all countries of the world, the world population would still increase to 8.9 billion by 2100, or by 27 per cent compared to the 2010 population. Continuing reductions in mortality rates and total fertility persisting above replacement level will each contribute about a quarter of the world population increase from 2010 to 2100. Reductions in mortality will add 1.1 billion people by 2100 (29 per cent of the total increase) and total fertility above replacement at the world level will add 1.0 billion people (24 per cent of the total increase) (table 2).

TABLE 1. TOTAL POPULATION IN 2010 AND 2100, THE WORLD, MAJOR AREAS AND DEVELOPMENT GROUPS

                                     Total population       Population change 2010-2100
                                        (millions)
Major area or development group      2010       2100      Absolute      Relative to 2010
                                                          (millions)      (per cent)
World                               6,916     10,854        3,938             57
More developed regions              1,241      1,284           43              3
Less developed regions              5,675      9,570        3,895             69
Least developed countries             839      2,928        2,089            249
Africa                              1,031      4,185        3,153            306
Asia                                4,165      4,712          546             13
Europe                                740        639         -101            -14
Latin America and the Caribbean       596        736          140             23
Northern America                      347        513          167             48
Oceania                                37         70           33             90

Development groups and major areas

Most contemporary developed countries have already reached the end of the demographic transition, while most developing countries are still in transition. As a result, the impact of the demographic components on future population growth differs sharply between developed and developing regions.

Developed regions will experience a small increase in total population over the period 2010 to 2100, less than 4 per cent compared to the 2010 population. The demographic transition there is at an advanced stage and population age structures are ageing. Thus, in developed regions, population momentum makes a negative contribution to projected population growth.
In the hypothetical situation of holding total fertility at replacement level over the period 2010 to 2100, the population of developed regions would decline by 5 per cent compared to the population in 2010, in the absence of net migration and mortality reductions.

TABLE 2. CONTRIBUTION OF DEMOGRAPHIC COMPONENTS TO POPULATION GROWTH FROM 2010 TO 2100, THE WORLD, MAJOR AREAS AND DEVELOPMENT GROUPS

                                     Contributions of demographic components
Major area or development group     Momentum   Mortality   Fertility   Migration     Total

Relative to total population in 2010 (per cent)
World                                  26.9       16.3        13.8         0.0        56.9
More developed regions                 -5.2       11.5       -23.9        21.1         3.5
Less developed regions                 33.9       17.4        21.7        -4.3        68.6
Least developed countries              54.8       26.4       177.0        -9.1       249.0
Africa                                 49.5       27.9       235.0        -6.6       305.8
Asia                                   27.7       14.1       -25.8        -2.9        13.1
Europe                                 -9.5       11.3       -28.8        13.2       -13.7
Latin America and the Caribbean        39.2       20.1       -27.3        -8.5        23.5
Northern America                        6.8       12.1       -10.9        40.1        48.1
Oceania                                20.8       13.4         4.9        50.9        90.0

Relative to population change 2010-2100 (per cent)
World                                  47.2       28.6        24.2         0.0       100.0
More developed regions               -149.5      329.8      -687.7       607.3       100.0
Less developed regions                 49.3       25.3        31.6        -6.3       100.0
Least developed countries              22.0       10.6        71.1        -3.7       100.0
Africa                                 16.2        9.1        76.9        -2.2       100.0
Asia                                  211.0      107.9      -196.9       -21.9       100.0
Europe                                 69.0      -82.8       210.0       -96.2       100.0
Latin America and the Caribbean       166.8       85.7      -116.4       -36.1       100.0
Northern America                       14.2       25.2       -22.8        83.4       100.0
Oceania                                23.1       14.9         5.5        56.6       100.0

Absolute (millions)
World                                 1,857      1,128         953           0       3,938
More developed regions                  -64        142        -296         262          43
Less developed regions                1,921        986       1,232        -245       3,895
Least developed countries               460        221       1,485         -76       2,089
Africa                                  510        288       2,423         -68       3,153
Asia                                  1,152        589      -1,075        -120         546
Europe                                  -70         84        -213          98        -101
Latin America and the Caribbean         234        120        -163         -51         140
Northern America                         24         42         -38         139         167
Oceania                                   8          5           2          19          33

The total fertility projected for developed regions is, however, well below replacement level, and the contribution of total fertility towards
population decline is estimated at 24 per cent of the 2010 population. Two components in developed regions act in the opposite direction from fertility and population momentum: mortality reductions and positive net migration. From 2010 to 2100, the mortality component would increase the population in 2100 by 12 per cent compared to the 2010 population size, and the migration component would increase it by 21 per cent.

In developing regions, the population is projected to increase by 69 per cent between 2010 and 2100. The demographic transition started later than in developed regions, and in most countries the population age structures are still young. Population momentum will have a positive impact on population growth in developing regions (about 34 per cent of the 2010 population). The fertility component, although its contribution differs widely across countries in developing regions, accounts for an additional 22 per cent of the 2010 population size. The contribution of the mortality component in developing regions is larger than in developed regions, accounting for 17 per cent of the 2010 population size. Only the migration component has a negative impact on projected growth (-4 per cent of the 2010 population).

In the least developed countries, the population is projected to increase by 249 per cent between 2010 and 2100, largely due to fertility levels above replacement (a 177 per cent increase compared to the 2010 population), followed by the contribution of population momentum (an additional 55 per cent) and reductions in mortality (26 per cent). As in developing regions as a whole, the migration component is negative (-9 per cent of the 2010 population).

Across the major areas, differences in the impact on population growth are largest for the fertility component.
While in Africa the fertility component has a large impact on population increase, in other major areas the effect is small (e.g., Oceania) or negative (e.g., Asia). Due to future fertility trends alone, the populations of Asia, Europe, and Latin America and the Caribbean would each decline between 2010 and 2100 by at least 25 per cent. Only in Europe has population ageing become so advanced that the contribution of population momentum over the period 2010 to 2100 will be negative (-10 per cent of the 2010 population size). In contrast, the young age structure alone increases the projected population of Africa to 2100 by an additional 50 per cent. The young population age structures in Asia and in Latin America and the Caribbean contribute to population increases of 28 per cent and 40 per cent, respectively. Mortality reductions to 2100 will have the largest impact on population projections in Africa (increasing the population size by 28 per cent compared to the 2010 population size); in the other major areas, the impact is between 10 and 20 per cent. Future migration trends as projected for the period 2010-2100 will add 40 per cent to the 2010 population of Northern America and 51 per cent to that of Oceania.

Changes over time

The preceding analysis has used the year 2100 as the endpoint of the projections for estimating the contribution of each demographic component. The components, however, can gradually lose or gain in the impact they have on population growth during the projection period. In general, the contribution of the population momentum component to population growth, expressed as a proportion of the 2010 population, is largest during the early decades and then stabilizes around the year 2060 (figure 5). The population momentum component has a positive impact on population growth in all major areas over the projection period, with the exception of Europe.
In Europe, population momentum makes a small positive contribution to population growth up to 2025, followed by a negative contribution from 2030 onwards, though never more than 10 per cent relative to the 2010 population. After 2030, because of the relatively older population age structure of Europe, the population would decline even with total fertility at replacement (assuming constant mortality and zero net migration).

The contribution of the fertility component to population growth rises rapidly over time in Africa, where the fertility component alone will add one billion people by 2055 and another billion by 2090. In Oceania, a very diverse group of countries, the contribution of the fertility component from 2010 to 2100 is positive but small (below 10 per cent) and declines towards 2100. In Latin America and the Caribbean, the fertility contribution to population growth is positive only until 2020 (though less than 1 per cent relative to the 2010 population), after which it turns increasingly negative. By 2100, the contribution of the fertility component towards population decline is more than 25 per cent relative to the 2010 population size in Asia, Europe, and Latin America and the Caribbean.

Figure 5.
Change in demographic components over time, as proportion of the total population in 2010, by major areas, 2010-2100

[Four panels: Africa, Asia, Europe, and Latin America and the Caribbean. Each panel plots the contributions (per cent of the 2010 population) of the momentum, fertility, mortality and migration components over the years 2010-2100; the figure continues on the next page with panels for Northern America and Oceania.]

Countries

The diversity of experiences among countries is noteworthy. Among 201 countries or areas with a total population of more than 90,000 inhabitants as of July 1, 2013, 55 countries are expected to have fewer inhabitants in 2100 than in 2010. These countries are in Asia, Europe, and Latin America and the Caribbean.

Variation in the fertility component is by far the largest cause of variation in population growth at the country level. On the one hand, there are strong positive impacts in many sub-Saharan African countries (figure 6), resulting from above-replacement fertility projected for the future. On the other hand, there are negative impacts of below-replacement fertility in many countries in all major areas. In projections to 2100 the fertility component has the largest absolute contribution of all components in 115 countries (or 57 per cent of all countries) (table 3). In 69 countries the fertility component makes a positive contribution to population growth. Momentum has the largest impact of all components in 49 countries (or 24 per cent of all countries). Mortality reductions are predominant in five countries.
A net migration contribution to population growth has the largest weight in 32 countries (or 16 per cent of all countries). For projections to the year 2050, the momentum component is dominant in more countries (87 countries) and the fertility component is dominant in fewer countries (75 countries) compared to projections to 2100.

TABLE 3. NUMBER OF COUNTRIES WITH LARGEST ABSOLUTE CONTRIBUTIONS BY DEMOGRAPHIC COMPONENTS, 2050 AND 2100

Year of         Fertility           Momentum            Mortality           Migration
projection    Number  Per cent    Number  Per cent    Number  Per cent    Number  Per cent
2050             75     37.3         87     43.3          1      0.5         38     18.9
2100            115     57.2         49     24.4          5      2.5         32     15.9

NOTE: Per cent refers to the proportion of all countries.

[Figure 5 (continued): panels for Northern America and Oceania, plotting the momentum, fertility, mortality and migration components, 2010-2100.]

Figure 6. Contributions of demographic components as proportion of total population in 2010, 2010-2100 (per cent)

[Four world maps, one each for the momentum, mortality, fertility and migration components.]

Largest contributions of population momentum

Countries with young populations have relatively large cohorts of young people, which will in the near future contribute towards further increases in projected population size, even if total fertility was set at replacement level. In contrast, a number of countries have already experienced population ageing due to long periods of low fertility. The older age structures in these countries will contribute towards further decreases in projected population size, even if total fertility was at replacement level. In absolute terms, the youthful age structure alone in India would increase its population by 447 million people by 2100 (or by 37 per cent relative to the 2010 population).
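The momentum scenario described above (replacement-level fertility, constant mortality, zero net migration) can be illustrated with a toy cohort-component projection. The sketch below is not the report's actual model: it uses a single-sex population with three 20-year age groups and invented survival rates, with fertility calibrated so that net reproduction is exactly one. Any growth that remains is then due purely to the young starting age structure.

```python
# Toy cohort-component ("Leslie matrix") projection of population momentum.
# All rates are hypothetical, chosen only so that net reproduction equals 1.
s01, s12 = 0.95, 0.90                 # survival from age group 0->1 and 1->2
f1 = 0.80                             # births per person in age group 1
f2 = (1.0 - f1 * s01) / (s01 * s12)   # calibrated: NRR = f1*s01 + f2*s01*s12 = 1

def step(pop):
    """Advance the population by one 20-year step under fixed rates."""
    g0, g1, g2 = pop
    return [f1 * g1 + f2 * g2,  # births from the two fertile age groups
            s01 * g0,           # ageing 0 -> 1
            s12 * g1]           # ageing 1 -> 2

pop0 = [1.0, 0.6, 0.4]          # a "young" age structure: large bottom cohort
pop = pop0
for _ in range(50):             # ample time to converge to the stationary state
    pop = step(pop)

momentum = sum(pop) / sum(pop0)
print(f"momentum factor: {momentum:.2f}")   # → momentum factor: 1.09
```

With this young initial structure, the stationary population ends up about 9 per cent larger than the starting population, which is the momentum factor; an older starting structure (e.g. `[0.4, 0.6, 1.0]`) would give a factor below one, the situation the report describes for Europe and Japan.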
In China, the contribution of population momentum is 146 million people (or 11 per cent relative to the 2010 population). The absolute contribution of population momentum will be between 50 million and 100 million people in Bangladesh, Brazil, Ethiopia, Indonesia, Mexico, Nigeria and Pakistan (figure 7). In a further 10 countries (Afghanistan, Guatemala, Honduras, Mayotte, Nicaragua, Niger, Sao Tome and Principe, State of Palestine, Timor-Leste and Uganda), momentum would increase the projected population in 2100 by more than two-thirds of the population size in 2010.

In the opposite direction, the largest absolute contributions of age structure to population decline, of 21 million people each, will be in Japan and the Russian Federation (relative to the 2010 population, a decline of 17 per cent in Japan and 14 per cent in the Russian Federation). Population momentum is negative in 38 countries (figure 7), including most European countries, four countries in Asia (Japan; Hong Kong, SAR of China; Qatar; United Arab Emirates) and Cuba. In Qatar and Bulgaria, the ageing population would contribute to a decline of more than 20 per cent of the 2010 population in each country.

Figure 7. Countries with largest absolute contributions of population momentum to population growth from 2010 to 2100 (thousands)

[Bar chart, from the largest positive contributions (India, China, Pakistan, Indonesia, Bangladesh, Nigeria, Mexico, Brazil, Ethiopia, Philippines) to the largest negative ones (Belarus, Poland, Bulgaria, Romania, Spain, Italy, Ukraine, Germany, Russian Federation, Japan).]

Largest contributions of fertility above or below replacement

Future trends in total fertility have a large impact on projected population growth in many countries.
Nigeria's future fertility trends will have the largest absolute contribution to population growth of any country and any demographic component: the fertility component alone is estimated to account for 647 million people by 2100 (or four times Nigeria's population in 2010). In other populous African countries with high total fertility in 2010, such as the Democratic Republic of the Congo, Niger, Uganda, the United Republic of Tanzania and Zambia, the fertility component accounts for population growth of more than 100 million people by 2100 (figure 8).

In 77 countries, continuing trends in total fertility above replacement level will make positive contributions to population growth, including 44 countries where the fertility component's contribution to projected population growth is larger than the country's total population size in 2010. For the period 2010 to 2100, the fertility component alone would raise the population of Niger to 11 times its 2010 size. Large relative effects of the fertility component are also projected for Mali (5.9 times its 2010 population) and Zambia (7.6 times its 2010 population).

For the 124 countries with projected total fertility below replacement, the fertility component will have a negative impact on population size over the period 2010-2100. The largest contribution in absolute terms will be in China, with a decline of 425 million people to 2100 (or 38 per cent of China's 2010 population). Other countries where the fertility component accounts for a population decline of 50 million or more people are Bangladesh, Brazil and India (figure 8). In terms of the largest impact relative to population size, Lebanon and Singapore would see their populations decline by 63 per cent by 2100 due to total fertility below replacement level.

Figure 8.
Countries with largest absolute contributions of fertility levels above and below replacement level to population growth from 2010 to 2100 (thousands)

[Bar chart, from the largest negative contributions (China, India, Brazil, Bangladesh, Viet Nam, Russian Federation, Indonesia, Mexico, Japan, Thailand) to the largest positive ones (Mozambique, Ethiopia, Mali, Kenya, Zambia, Uganda, Democratic Republic of the Congo, Niger, United Republic of Tanzania, Nigeria).]

Largest contribution of mortality reductions

As reductions in mortality are projected to continue in all countries, the mortality component always makes a positive contribution to projected population size. Countries with large populations and relatively low life expectancies have the largest contributions from the mortality component in absolute terms: the largest contribution is in India, at 188 million people by 2100 (figure 9). Countries that have experienced high mortality rates due to the HIV/AIDS epidemic in the recent past also have rapid reductions in mortality rates projected for the future. As a consequence, the largest contributions of the mortality component to population growth in relative terms are projected for these countries. In Botswana, the Central African Republic, Lesotho, Mozambique, Swaziland and Zimbabwe, the mortality component is projected to contribute to an increase in the population to 2100 of more than 40 per cent relative to the population size in 2010 (annex table 1).

Figure 9. Countries with largest absolute contributions of reductions in mortality to population growth from 2010 to 2100 (thousands)

Largest contribution of migration

The contribution of migration can be substantial, especially for countries where natural population growth is close to zero. The extent of the contribution is determined by the assumptions made regarding future migration trends.
For example, in the United States of America the migration component accounts for more than 117 million people by 2100, followed by Canada (22 million people) and Australia and the United Kingdom (18 million people each). In Australia and Canada, positive net migration would add 82 per cent and 64 per cent, respectively, to the 2010 population size of each country (annex table 1).

[Figure 9: bar chart of the ten largest contributions of the mortality component, led by India, China, Nigeria, Indonesia, Brazil, Bangladesh, the United States of America, Ethiopia, Mexico and Iran (Islamic Republic of).]

Comparison of fertility and momentum components

In most countries the largest contributions to future population growth will come from the fertility and momentum components. How these two components relate to each other, and whether they act in the same direction or compensate for each other, depends on country-specific past and future population trends. Four groups of countries can be distinguished depending on the size and sign of the fertility and momentum components:

• Positive fertility and momentum. In many countries with projected fast population growth, total fertility above replacement is a major contributor to population growth. In 44 countries (32 in the least developed countries and 36 in Africa), the contribution of the fertility component is larger than the 2010 population size; thus projected total fertility above replacement will lead to more than a doubling of the population between 2010 and 2100. In an additional 33 countries the fertility component is positive but less than 100 per cent of the 2010 population. Population momentum contributes a further 37 per cent to 89 per cent of the population increase (as compared to the population size in 2010) in this group of countries.

• Negative fertility offset by larger momentum. In countries with recent fertility declines towards or below replacement, the contribution of the fertility component to future population growth is negative.
However, this effect is more than offset by the impact of population momentum, since these countries have young age structures. Thirty-nine countries belong to this group, including countries in all major areas except Northern America. Some of the most populous countries (2010 population greater than 100 million) are in this group, including Bangladesh, India, Indonesia, Mexico and Pakistan.

• Negative fertility effect with smaller momentum. In this third group of countries, young age structures contribute towards population increase, but projected total fertility below replacement has a larger impact, producing an overall population decline. In total, 47 countries from all major areas belong to this group. Among the most populous countries, these include Brazil, China and the United States of America.

• Negative fertility and negative momentum. Both population momentum and total fertility below replacement contribute towards a projected decline in population size in 38 countries, including 33 countries in Europe, four countries in Asia (Hong Kong, SAR of China; Japan; Qatar; United Arab Emirates) and Cuba.

CONCLUSIONS

The demographic conditions of countries around the world today are more diverse than at any previous point in history. At one end of the spectrum are countries that are still relatively early in their demographic transition and have rapid population growth, high fertility and young age structures; at the other end are countries that may be regarded as post-transitional, where growth rates are negative, fertility has dropped well below the replacement level and populations are ageing rapidly.

The main objectives of this technical report are to quantify the roles of the demographic drivers of future population trends for regions and countries and to identify the demographic causes of differences in their growth trajectories.
Conventional demographic theory has established that population growth is related to fertility, life expectancy, migration and momentum, but it does not provide estimates of their contributions. This issue is addressed here by making a series of hypothetical cohort-component projections, which allow the quantification of the four demographic drivers of future population change. For example, the population of the world is expected to grow by 3.9 billion between 2010 and 2100, an increase of 57.9 per cent. This increase can be decomposed into contributions of high fertility (13.8 per cent), declining mortality (16.3 per cent) and momentum (26.9 per cent).

The analysis of regional and country estimates demonstrates that fertility is the most influential component in causing differences in growth trajectories between populations. The contribution of fertility to future growth ranges from 235 per cent (of the 2010 population) in Africa to -28.8 per cent in Europe. The regional contributions of the other factors have a much smaller range, in particular for mortality (27.9 per cent in Africa compared to 11.3 per cent in Europe). The range for momentum is from 49.5 per cent for Africa to -9.5 per cent for Europe (table 2).

The decomposition results are derived from the medium-variant projection of the United Nations Population Division (United Nations, 2013b). These projections involve assumptions about the future trajectory of fertility, mortality and migration, which are uncertain. As a result, the estimates of the demographic components for fertility, mortality and migration are also uncertain. In contrast, estimates of momentum do not rely on assumptions about the future and depend only on the current age structure of the population and mortality, which are relatively well known. Estimates of the size of momentum are therefore subject to very little uncertainty.
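As a quick check of the global arithmetic just quoted, the three listed components can be summed and compared with the reported total increase. The text does not itemize the residual; attributing it to the near-zero global migration component plus rounding and interaction effects is our assumption:

```python
# Global decomposition figures quoted in the text, as per cent of the
# 2010 world population.
components = {"fertility": 13.8, "mortality": 16.3, "momentum": 26.9}
total = sum(components.values())

print(f"sum of listed components: {total:.1f} per cent")    # → 57.0
print("reported total increase:  57.9 per cent")
print(f"residual: {57.9 - total:.1f} percentage points")    # → 0.9
# The residual plausibly reflects the migration component (near zero at the
# global level) and rounding/interaction effects, though the report does not
# break it out explicitly.
```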
Overall, UN projections made in the past have turned out to be remarkably accurate at the regional and global levels for periods up to a few decades (NRC, 2000; Lee, 2011). The fact that momentum now accounts for half of world population growth, and that it can be projected with confidence, makes large errors unlikely, at least for the next two or three decades.

These quantitative assessments of the demographic factors that drive population growth should be of interest to policy makers concerned about the adverse effects of population change on human welfare. In theory, each of the components of growth can be affected by appropriate policies:

• Fertility. A range of policies is available to affect fertility. In many high-fertility countries, family planning programmes have been implemented. These programmes provide access to information about contraceptives, with the goals of reducing unplanned pregnancies and assisting women in having the family size they desire (Bongaarts and Sinding, 2011). In countries with very low fertility, many governments have implemented policies to help women combine childbearing with participation in the labour force (e.g., through childcare subsidies or favourable taxes for families with children). The goal is to raise fertility to a level closer to the one wanted by women, because in these societies women often do not achieve their desired family size (Thévenon, 2011).

• Mortality. Better health and a longer life are universally desired, and governments therefore make every effort to reduce mortality. Mortality is therefore not a population policy variable.

• Migration. Most governments have laws regulating the level of immigration. In theory, countries with declining populations could raise immigration, but this option may be unattractive for social, economic or cultural reasons.

• Momentum. The age structure of a population cannot be changed, but momentum can be offset by raising the age at first birth and by wider spacing between births.
These changes reduce the number of births occurring in future years, independent of the number of births women have over their lifetimes. Delays in early marriage and early first birth have a range of beneficial effects on the welfare of girls and women, independent of the demographic effects (Bongaarts, 1994).

Governments have a variety of policies at their disposal to address adverse demographic trends. The debate about which of these to implement in a particular country at a given point in time should partly be informed by the magnitude of the components of population growth, as documented in this report.

REFERENCES

Bongaarts, John (1994). Population policy options in the developing world. Science, vol. 263, pp. 2-7.

Bongaarts, John and Rodolfo A. Bulatao (1999). Completing the demographic transition. Population and Development Review, vol. 25, No. 3, pp. 515-529.

Bongaarts, John and Steven Sinding (2011). Population policy in transition in the developing world. Science, vol. 333, pp. 574-576.

Keyfitz, Nathan (1971). On the momentum of population growth. Demography, vol. 8, No. 1, pp. 71-80.

Lee, Ronald (2011). The outlook for population growth. Science, vol. 333, pp. 569-573.

National Research Council (2000). Beyond Six Billion: Forecasting the World's Population. John Bongaarts and Rodolfo A. Bulatao, eds. Washington, DC: National Academies Press.

Preston, Samuel, Patrick Heuveline and Michel Guillot (2000). Demography: Measuring and Modeling Population Processes. Oxford: Blackwell.

Thévenon, Olivier (2011). Family policies in OECD countries: a comparative analysis. Population and Development Review, vol. 37, No. 1, pp. 57-87.

United Nations, Department of Economic and Social Affairs, Population Division (2013a). World Population Prospects: The 2012 Revision, Volume I: Comprehensive Tables. ST/ESA/SER.A/336. Available online at

____ (2013b). World Population Prospects: The 2012 Revision. New York: United Nations.
Jiří Matoušek

Lectures on Discrete Geometry

With 206 Illustrations

Springer

Jiří Matoušek
Department of Applied Mathematics
Charles University
Malostranské nám. 25
118 00 Praha 1, Czech Republic
matousek@kam.mff.cuni.cz

Editorial Board:

S. Axler, Mathematics Department, San Francisco State University, San Francisco, CA 94132, USA, axler@sfsu.edu

F. W. Gehring, Mathematics Department, East Hall, University of Michigan, Ann Arbor, MI 48109, USA, fgehring@math.lsa.umich.edu

K. A. Ribet, Mathematics Department, University of California, Berkeley, Berkeley, CA 94720-3840, USA, ribet@math.berkeley.edu

Mathematics Subject Classification (2000): 52-01

Library of Congress Cataloging-in-Publication Data
Matoušek, Jiří. Lectures on discrete geometry / Jiří Matoušek. p. cm. (Graduate texts in mathematics; 212). Includes bibliographical references and index. ISBN 0-387-95373-6 (alk. paper); ISBN 0-387-95374-4 (softcover: alk. paper). 1. Convex geometry. 2. Combinatorial geometry. I. Title. II. Series. QA639.5.M37 2002 516--dc21 2001054915

Printed on acid-free paper.

© 2002 Springer-Verlag New York, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Production managed by Michael Koy; manufacturing supervised by Jacqui Ashri.
Typesetting: Pages created by author using Springer TeX macro package. Printed and bound by Sheridan Books, Inc., Ann Arbor, MI. Printed in the United States of America.

9 8 7 6 5 4 3 2 1

ISBN 0-387-95373-6   SPIN 10854370 (hardcover)
ISBN 0-387-95374-4   SPIN 10854388 (softcover)

Springer-Verlag New York Berlin Heidelberg
A member of BertelsmannSpringer Science+Business Media GmbH

Preface

The next several pages describe the goals and the main topics of this book.

Questions in discrete geometry typically involve finite sets of points, lines, circles, planes, or other simple geometric objects. For example, one can ask, what is the largest number of regions into which n lines can partition the plane, or what is the minimum possible number of distinct distances occurring among n points in the plane? (The former question is easy, the latter one is hard.) More complicated objects are investigated, too, such as convex polytopes or finite families of convex sets. The emphasis is on "combinatorial" properties: Which of the given objects intersect, or how many points are needed to intersect all of them, and so on. Many questions in discrete geometry are very natural and worth studying for their own sake. Some of them, such as the structure of 3-dimensional convex polytopes, go back to antiquity, and many of them are motivated by other areas of mathematics.

To a working mathematician or computer scientist, contemporary discrete geometry offers results and techniques of great diversity, a useful enhancement of the "bag of tricks" for attacking problems in her or his field. My experience in this respect comes mainly from combinatorics and the design of efficient algorithms, where, as time progresses, more and more of the first-rate results are proved by methods drawn from seemingly distant areas of mathematics and where geometric methods are among the most prominent.
The development of computational geometry and of geometric methods in combinatorial optimization in the last 20-30 years has stimulated research in discrete geometry a great deal and contributed new problems and motivation. Parts of discrete geometry are indispensable as a foundation for any serious study of these fields. I personally became involved in discrete geometry while working on geometric algorithms, and the present book gradually grew out of lecture notes initially focused on computational geometry. (In the meantime, several books on computational geometry have appeared, and so I decided to concentrate on the nonalgorithmic part.)

In order to explain the path chosen in this book for exploring its subject, let me compare discrete geometry to an Alpine mountain range. Mountains can be explored by bus tours, by walking, by serious climbing, by playing in the local casino, and in many other ways. The book should provide safe trails to a few peaks and lookout points (key results from various subfields of discrete geometry). To some of them, convenient paths have been marked in the literature, but for others, where only climbers' routes exist in research papers, I tried to add some handrails, steps, and ropes at the critical places, in the form of intuitive explanations, pictures, and concrete and elementary proofs.[1] However, I do not know how to build cable cars in this landscape: Reaching the higher peaks, the results traditionally considered difficult, still needs substantial effort. I wish everyone a clear view of the beautiful ideas in the area, and I hope that the trails of this book will help some readers climb yet unconquered summits by their own research. (Here the shortcomings of the Alpine analogy become clear: The range of discrete geometry is infinite and, no doubt, many discoveries lie ahead, while the Alps are a small spot on the all too finite Earth.)

This book is primarily an introductory textbook.
It does not require any special background besides the usual undergraduate mathematics (linear algebra, calculus, and a little of combinatorics, graph theory, and probability). It should be accessible to early graduate students, although mastering the more advanced proofs probably needs some mathematical maturity. The first and main part of each section is intended for teaching in class. I have actually taught most of the material, mainly in an advanced course in Prague whose contents varied over the years, and a large part has also been presented by students, based on my writing, in lectures at special seminars (Spring Schools of Combinatorics). A short summary at the end of the book can be useful for reviewing the covered material.

The book can also serve as a collection of surveys in several narrower subfields of discrete geometry, where, as far as I know, no adequate recent treatment is available. The sections are accompanied by remarks and bibliographic notes. For well-established material, such as convex polytopes, these parts usually refer to the original sources, point to modern treatments and surveys, and present a sample of key results in the area. For the less well covered topics, I have aimed at surveying most of the important recent results. For some of them, proof outlines are provided, which should convey the main ideas and make it easy to fill in the details from the original source.

Topics. The material in the book can be divided into several groups:

• Foundations (Sections 1.1-1.3, 2.1, 5.1-5.4, 5.7, 6.1). Here truly basic things are covered, suitable for any introductory course: linear and affine subspaces, fundamentals of convex sets, Minkowski's theorem on lattice points in convex bodies, duality, and the first steps in convex polytopes, Voronoi diagrams, and hyperplane arrangements. The remaining sections of Chapters 1, 2, and 5 go a little further in these topics.
[1] I also wanted to invent fitting names for the important theorems, in order to make them easier to remember. Only a few of these names are in standard usage.

• Combinatorial complexity of geometric configurations (Chapters 4, 6, 7, and 11). The problems studied here include line-point incidences, complexity of arrangements and lower envelopes, Davenport-Schinzel sequences, and the k-set problem. Powerful methods, mainly probabilistic, developed in this area are explained step by step on concrete nontrivial examples. Many of the questions were motivated by the analysis of algorithms in computational geometry.

• Intersection patterns and transversals of convex sets. Chapters 8-10 contain, among others, a proof of the celebrated (p, q)-theorem of Alon and Kleitman, including all the tools used in it. This theorem gives a sufficient condition guaranteeing that all sets in a given family of convex sets can be intersected by a bounded (small) number of points. Such results can be seen as far-reaching generalizations of the well-known Helly's theorem. Some of the finest pieces of the weaponry of contemporary discrete and computational geometry, such as the theory of the VC-dimension or the regularity lemma, appear in these chapters.

• Geometric Ramsey theory (Chapters 3 and 9). Ramsey-type theorems guarantee the existence of a certain "regular" subconfiguration in every sufficiently large configuration; in our case we deal with geometric objects. One of the historically first results here is the theorem of Erdős and Szekeres on convex independent subsets in every sufficiently large point set.

• Polyhedral combinatorics and high-dimensional convexity (Chapters 12-14). Two famous results are proved as a sample of polyhedral combinatorics, one in graph theory (the weak perfect graph conjecture) and one in theoretical computer science (on sorting with partial information).
Then the behavior of convex bodies in high dimensions is explored; the highlights include a theorem on the volume of an N-vertex convex polytope in the unit ball (related to the algorithmic hardness of volume approximation), measure concentration on the sphere, and Dvoretzky's theorem on almost-spherical sections of convex bodies.

• Representing finite metric spaces by coordinates (Chapter 15). Given an n-point metric space, we would like to visualize it or at least make it computationally more tractable by placing the points in a Euclidean space, in such a way that the Euclidean distances approximate the given distances in the finite metric space. We investigate the necessary error of such approximation. Such results are of great interest in several areas; for example, recently they have been used in approximation algorithms in combinatorial optimization (multicommodity flows, VLSI layout, and others).

These topics surely do not cover all of discrete geometry, which is a rather vague term anyway. The selection is (necessarily) subjective, and naturally I preferred areas that I knew better and/or had been working in. (Unfortunately, I have had no access to supernatural opinions on proofs as a more reliable guide.) Many interesting topics are neglected completely, such as the wide area of packing and covering, where very accessible treatments exist, or the celebrated negative solution by Kahn and Kalai of the Borsuk conjecture, which I consider sufficiently popularized by now. Many more chapters analogous to the fifteen of this book could be added, and each of the fifteen chapters could be expanded into a thick volume. But the extent of the book, as well as the time for its writing, are limited.

Exercises. The sections are complemented by exercises. The little framed numbers indicate their difficulty: [1] is routine, [5] may need quite a bright idea.
Some of the exercises used to be a part of homework assignments in my courses and the classification is based on some experience, but for others it is just an unreliable subjective guess. Some of the exercises, especially those conveying important results, are accompanied by hints given at the end of the book. Additional results that did not fit into the main text are often included as exercises, which saves much space. However, this greatly enlarges the danger of making false claims, so the reader who wants to use such information 1nay want to check it carefully. Sources and further reading. A great inspiration for this book project and the source of much material was the book Combinatorial Geometry of Pach and Agarwal [PA95]. Too late did I become aware of the lecture notes by Ball [Bal97] on modern convex geometry; had I known these earlier I would probably have hesitated to write Chapters 13 and 14 on high-dimensional convexity, as I would not dare to compete with this masterpiece of mathe­ matical exposition. Ziegler's book [Zie94] can be recommended for studying convex polytopes. Many other sources are mentioned in the notes in each chapter. For looking up information in discrete geometry, a good starting point can be one of the several handbooks pertaining to the area: Handbook of Convex Geometry [GW93], Handbook of Discrete and Computational Ge­ ometry [G097], Handbook of Computational Geometry [SUOO], and (to some extent) Handbook of Combinatorics [GGL95], with numerous valuable sur­ veys. Many of the important new results in the field keep appearing in the journal Discrete and Computational Geometry. Acknowledgments. For invaluable advice and/or very helpful comments on preliminary versions of this book I would like to thank Micha Sharir, Gunter M. Ziegler, Yuri Rabinovich, Pankaj K. 
Agarwal, Pavel Valtr, Martin Klazar, Nati Linial, Günter Rote, János Pach, Keith Ball, Uli Wagner, Imre Bárány, Eli Goodman, György Elekes, Johannes Blömer, Eva Matoušková, Gil Kalai, Joram Lindenstrauss, Emo Welzl, Komei Fukuda, Rephael Wenger, Piotr Indyk, Sariel Har-Peled, Vojtěch Rödl, Géza Tóth, Károly Böröczky Jr., Radoš Radoičić, Helena Nyklová, Vojtěch Franek, Jakub Simek, Avner Magen, Gregor Baudis, and Andreas Marwinski (I apologize if I forgot someone; my notes are not perfect, not to speak of my memory). Their remarks and suggestions allowed me to improve the manuscript considerably and to eliminate many of the embarrassing mistakes. I thank David Kramer for a careful copy-editing and finding many more mistakes (as well as offering me a glimpse into the exotic realm of English punctuation). I also wish to thank everyone who participated in creating the friendly and supportive environments in which I have been working on the book.

Errors. If you find errors in the book, especially serious ones, I would appreciate it if you would let me know (email: matousek@kam.mff.cuni.cz). I plan to post a list of errors at http://www.ms.mff.cuni.cz/~matousek.

Prague, July 2001
Jiří Matoušek

Contents

Preface
Notation and Terminology

1 Convexity
  1.1 Linear and Affine Subspaces, General Position
  1.2 Convex Sets, Convex Combinations, Separation
  1.3 Radon's Lemma and Helly's Theorem
  1.4 Centerpoint and Ham Sandwich

2 Lattices and Minkowski's Theorem
  2.1 Minkowski's Theorem
  2.2 General Lattices
  2.3 An Application in Number Theory
3 Convex Independent Subsets
  3.1 The Erdős-Szekeres Theorem
  3.2 Horton Sets

4 Incidence Problems
  4.1 Formulation
  4.2 Lower Bounds: Incidences and Unit Distances
  4.3 Point-Line Incidences via Crossing Numbers
  4.4 Distinct Distances via Crossing Numbers
  4.5 Point-Line Incidences via Cuttings
  4.6 A Weaker Cutting Lemma
  4.7 The Cutting Lemma: A Tight Bound

5 Convex Polytopes
  5.1 Geometric Duality
  5.2 H-Polytopes and V-Polytopes
  5.3 Faces of a Convex Polytope
  5.4 Many Faces: The Cyclic Polytopes
  5.5 The Upper Bound Theorem
  5.6 The Gale Transform
  5.7 Voronoi Diagrams

6 Number of Faces in Arrangements
  6.1 Arrangements of Hyperplanes
  6.2 Arrangements of Other Geometric Objects
  6.3 Number of Vertices of Level at Most k
  6.4 The Zone Theorem
  6.5 The Cutting Lemma Revisited

7 Lower Envelopes
  7.1 Segments and Davenport-Schinzel Sequences
  7.2 Segments: Superlinear Complexity of the Lower Envelope
  7.3 More on Davenport-Schinzel Sequences
  7.4 Towards the Tight Upper Bound for Segments
  7.5 Up to Higher Dimension: Triangles in Space
  7.6 Curves in the Plane
  7.7 Algebraic Surface Patches

8 Intersection Patterns of Convex Sets
  8.1 The Fractional Helly Theorem
  8.2 The Colorful Carathéodory Theorem
  8.3 Tverberg's Theorem

9 Geometric Selection Theorems
  9.1 A Point in Many Simplices: The First Selection Lemma
  9.2 The Second Selection Lemma
  9.3 Order Types and the Same-Type Lemma
  9.4 A Hypergraph Regularity Lemma
  9.5 A Positive-Fraction Selection Lemma

10 Transversals and Epsilon Nets
  10.1 General Preliminaries: Transversals and Matchings
  10.2 Epsilon Nets and VC-Dimension
  10.3 Bounding the VC-Dimension and Applications
  10.4 Weak Epsilon Nets for Convex Sets
  10.5 The Hadwiger-Debrunner (p, q)-Problem
  10.6 A (p, q)-Theorem for Hyperplane Transversals
11 Attempts to Count k-Sets
  11.1 Definitions and First Estimates
  11.2 Sets with Many Halving Edges
  11.3 The Lovász Lemma and Upper Bounds in All Dimensions
  11.4 A Better Upper Bound in the Plane

12 Two Applications of High-Dimensional Polytopes
  12.1 The Weak Perfect Graph Conjecture
  12.2 The Brunn-Minkowski Inequality
  12.3 Sorting Partially Ordered Sets

13 Volumes in High Dimension
  13.1 Volumes, Paradoxes of High Dimension, and Nets
  13.2 Hardness of Volume Approximation
  13.3 Constructing Polytopes of Large Volume
  13.4 Approximating Convex Bodies by Ellipsoids

14 Measure Concentration and Almost Spherical Sections
  14.1 Measure Concentration on the Sphere
  14.2 Isoperimetric Inequalities and More on Concentration
  14.3 Concentration of Lipschitz Functions
  14.4 Almost Spherical Sections: The First Steps
  14.5 Many Faces of Symmetric Polytopes
  14.6 Dvoretzky's Theorem

15 Embedding Finite Metric Spaces into Normed Spaces
  15.1 Introduction: Approximate Embeddings
  15.2 The Johnson-Lindenstrauss Flattening Lemma
  15.3 Lower Bounds By Counting
  15.4 A Lower Bound for the Hamming Cube
  15.5 A Tight Lower Bound via Expanders
  15.6 Upper Bounds for ℓ∞-Embeddings
  15.7 Upper Bounds for Euclidean Embeddings

What Was It About? An Informal Summary
Hints to Selected Exercises
Bibliography
Index

Notation and Terminology

This section summarizes rather standard things, and it is mainly for reference. More special notions are introduced gradually throughout the book. In order to facilitate independent reading of various parts, some of the definitions are even repeated several times.

If X is a set, |X| denotes the number of elements (cardinality) of X. If X is a multiset, in which some elements may be repeated, then |X| counts each element with its multiplicity. The very slowly growing function log* x (the iterated logarithm) is defined by log* x = 0 for x ≤ 1 and log* x = 1 + log*(log2 x) for x > 1. For a real number x, ⌊x⌋ denotes the largest integer less than or equal to x, and ⌈x⌉ means the smallest integer greater than or equal to x.

The boldface letters R and Z stand for the real numbers and for the integers, respectively, while Rd denotes the d-dimensional Euclidean space. For a point x = (x1, x2, ..., xd) ∈ Rd,

  ||x|| = sqrt(x1^2 + x2^2 + ··· + xd^2)

is the Euclidean norm of x, and for x, y ∈ Rd,

  ⟨x, y⟩ = x1y1 + x2y2 + ··· + xdyd

is the scalar product. Points of Rd are usually considered as column vectors.

The symbol B(x, r) denotes the closed ball of radius r centered at x in some metric space (usually in Rd with the Euclidean distance), i.e., the set of all points with distance at most r from x. We write Bn for the unit ball B(0, 1) in Rn. The symbol ∂A denotes the boundary of a set A ⊂ Rd, that is, the set of points at zero distance from both A and its complement.
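The recursive definition of the very slowly growing iterated logarithm above translates directly into a short loop; a minimal sketch in Python (the function name is my choice):

```python
import math

def log_star(x):
    """Iterated logarithm: 0 for x <= 1, else 1 + log_star(log2(x))."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

# The function grows extremely slowly:
assert log_star(1) == 0
assert log_star(2) == 1
assert log_star(16) == 3
assert log_star(65536) == 4
```

Even for astronomically large inputs such as 2^65536, the value is only 5, which is why log* appears in bounds that are "almost constant."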
For a measurable set A ⊂ Rd, vol(A) is the d-dimensional Lebesgue measure of A (in most cases the usual volume).

Let f and g be real functions (of one or several variables). The notation f = O(g) means that there exists a number C such that |f| ≤ C|g| for all values of the variables. Normally, C should be an absolute constant, but if f and g depend on some parameter(s) that we explicitly declare to be fixed (such as the space dimension d), then C may depend on these parameters as well. The notation f = Ω(g) is equivalent to g = O(f), f(n) = o(g(n)) means that lim_{n→∞} f(n)/g(n) = 0, and f = Θ(g) means that both f = O(g) and f = Ω(g).

For a random variable X, the symbol E[X] denotes the expectation of X, and Prob[A] stands for the probability of an event A.

Graphs are considered simple and undirected in this book unless stated otherwise, so a graph G is a pair (V, E), where V is a set (the vertex set) and E ⊂ (V choose 2) is the edge set. Here (V choose k) denotes the set of all k-element subsets of V. For a multigraph, the edges form a multiset, so two vertices can be connected by several edges. For a given (multi)graph G, we write V(G) for the vertex set and E(G) for the edge set. A complete graph has all possible edges; that is, it is of the form (V, (V choose 2)). A complete graph on n vertices is denoted by Kn. A graph G is bipartite if the vertex set can be partitioned into two subsets V1 and V2, the (color) classes, in such a way that each edge connects a vertex of V1 to a vertex of V2. A graph G' = (V', E') is a subgraph of a graph G = (V, E) if V' ⊆ V and E' ⊆ E. We also say that G contains a copy of H if there is a subgraph G' of G isomorphic to H, where G' and H are isomorphic if there is a bijective map φ: V(G') → V(H) such that {u, v} ∈ E(G') if and only if {φ(u), φ(v)} ∈ E(H) for all u, v ∈ V(G'). The degree of a vertex v in a graph G is the number of edges of G containing v. An r-regular graph has all degrees equal to r.
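The set-theoretic definition of a graph mirrors directly into code; a small sketch, using K4 as the running example (the representation by 2-element frozensets is my choice):

```python
from itertools import combinations

# A graph (V, E): E is a subset of the 2-element subsets of V.
V = {1, 2, 3, 4}
E = {frozenset(e) for e in combinations(V, 2)}  # the complete graph K4

def degree(v, edges):
    """Number of edges containing the vertex v."""
    return sum(1 for e in edges if v in e)

assert len(E) == 6                           # (4 choose 2) edges
assert all(degree(v, E) == 3 for v in V)     # K4 is 3-regular
```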
Paths and cycles are graphs of the familiar shape (a small picture of paths and cycles appears in the original here), and a path or cycle in G is a subgraph isomorphic to a path or cycle, respectively. A graph G is connected if every two vertices can be connected by a path in G.

We recall that a set X ⊂ Rd is compact if and only if it is closed and bounded, and that a continuous function f: X → R defined on a compact X attains its minimum (there exists x0 ∈ X with f(x0) ≤ f(x) for all x ∈ X). The Cauchy-Schwarz inequality is perhaps best remembered in the form |⟨x, y⟩| ≤ ||x|| · ||y|| for all x, y ∈ Rn.

A real function f defined on an interval A ⊂ R (or, more generally, on a convex set A ⊂ Rd) is convex if

  f(tx + (1−t)y) ≤ tf(x) + (1−t)f(y)

for all x, y ∈ A and t ∈ [0, 1]. Geometrically, the graph of f on [x, y] lies below the segment connecting the points (x, f(x)) and (y, f(y)). If the second derivative satisfies f''(x) ≥ 0 for all x in an (open) interval A ⊂ R, then f is convex on A. Jensen's inequality is a straightforward generalization of the definition of convexity:

  f(t1x1 + t2x2 + ··· + tnxn) ≤ t1f(x1) + t2f(x2) + ··· + tnf(xn)

for all choices of nonnegative ti summing to 1 and all x1, ..., xn ∈ A. Or in integral form, if μ is a probability measure on A and f is convex on A, we have f(∫_A x dμ(x)) ≤ ∫_A f(x) dμ(x). In the language of probability theory, if X is a real random variable and f: R → R is convex, then f(E[X]) ≤ E[f(X)]; for example, (E[X])^2 ≤ E[X^2].

1 Convexity

We begin with a review of basic geometric notions such as hyperplanes and affine subspaces in Rd, and we spend some time discussing the notion of general position. Then we consider fundamental properties of convex sets in Rd, such as a theorem about the separation of disjoint convex sets by a hyperplane and Helly's theorem.

1.1 Linear and Affine Subspaces, General Position

Linear subspaces. Let Rd denote the d-dimensional Euclidean space. The points are d-tuples of real numbers, x = (x1, x2, ...
, xd). The space Rd is a vector space, and so we may speak of linear subspaces, linear dependence of points, linear span of a set, and so on. A linear subspace of Rd is a subset closed under addition of vectors and under multiplication by real numbers. What is the geometric meaning? For instance, the linear subspaces of R2 are the origin itself, all lines passing through the origin, and the whole of R2. In R3, we have the origin, all lines and planes passing through the origin, and R3.

Affine notions. An arbitrary line in R2, say, is not a linear subspace unless it passes through 0. General lines are what are called affine subspaces. An affine subspace of Rd has the form x + L, where x ∈ Rd is some vector and L is a linear subspace of Rd. Having defined affine subspaces, the other "affine" notions can be constructed by imitating the "linear" notions. What is the affine hull of a set X ⊂ Rd? It is the intersection of all affine subspaces of Rd containing X. As is well known, the linear span of a set X can be described as the set of all linear combinations of points of X. What is an affine combination of points a1, a2, ..., an ∈ Rd that would play an analogous role? To see this, we translate the whole set by −an, so that an becomes the origin, we make a linear combination, and we translate back by +an. This yields an expression of the form

  β1(a1 − an) + β2(a2 − an) + ··· + βn−1(an−1 − an) + an
    = β1a1 + β2a2 + ··· + βn−1an−1 + (1 − β1 − β2 − ··· − βn−1)an,

where β1, ..., βn−1 are arbitrary real numbers. Thus, an affine combination of points a1, ..., an ∈ Rd is an expression of the form

  β1a1 + β2a2 + ··· + βnan,  where β1 + β2 + ··· + βn = 1.

Then indeed, it is not hard to check that the affine hull of X is the set of all affine combinations of points of X.

The affine dependence of points a1, ..., an means that one of them can be written as an affine combination of the others. This is the same as the existence of real numbers α1, α2, ...
, αn, at least one of them nonzero, such that both

  α1 + α2 + ··· + αn = 0  and  α1a1 + α2a2 + ··· + αnan = 0.

(Note the difference: In an affine combination, the αi sum to 1, while in an affine dependence, they sum to 0.)

Affine dependence of a1, ..., an is equivalent to linear dependence of the n−1 vectors a1 − an, a2 − an, ..., an−1 − an. Therefore, the maximum possible number of affinely independent points in Rd is d+1.

Another way of expressing affine dependence uses "lifting" one dimension higher. Let bi = (ai, 1) be the vector in Rd+1 obtained by appending a new coordinate equal to 1 to ai; then a1, ..., an are affinely dependent if and only if b1, ..., bn are linearly dependent. This correspondence of affine notions in Rd with linear notions in Rd+1 is quite general. For example, if we identify R2 with the plane x3 = 1 in R3 as in the picture (the original drawing shows a 2-dimensional linear subspace of R3 and the corresponding line in the plane x3 = 1), then we obtain a bijective correspondence of the k-dimensional linear subspaces of R3 that do not lie in the plane x3 = 0 with (k−1)-dimensional affine subspaces of R2. (The same works for affine subspaces of Rd and linear subspaces of Rd+1 not contained in the subspace xd+1 = 0.)

This correspondence also leads directly to extending the affine plane R2 into the projective plane: To the points of R2 corresponding to nonhorizontal lines through 0 in R3 we add points "at infinity" that correspond to horizontal lines through 0 in R3. But in this book we remain in the affine space most of the time, and we do not use the projective notions.

We have a useful criterion of affine independence using a determinant. Let a1, a2, ..., ad+1 be points in Rd, and let A be the d × d matrix with ai − ad+1 as the ith column, i = 1, 2, ..., d. Then a1, ..., ad+1 are affinely independent if and only if A has d linearly independent columns, and this is equivalent to det(A) ≠ 0.
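The determinant criterion is directly usable in computation; a minimal sketch with exact rational arithmetic (the helper names are my choices, and the cofactor expansion is only meant for small d):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def affinely_independent(points):
    """d+1 points in R^d are affinely independent iff the matrix
    with columns a_i - a_{d+1} has nonzero determinant."""
    d = len(points[0])
    assert len(points) == d + 1
    last = points[-1]
    # entry in row r, column i is the r-th coordinate of a_i - a_{d+1}
    m = [[Fraction(points[i][r]) - Fraction(last[r]) for i in range(d)]
         for r in range(d)]
    return det(m) != 0

assert affinely_independent([(0, 0), (1, 0), (0, 1)])       # a proper triangle
assert not affinely_independent([(0, 0), (1, 1), (2, 2)])   # collinear points
```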
Affine subspaces of Rd of certain dimensions have special names. A (d−1)-dimensional affine subspace of Rd is called a hyperplane (while the word plane usually means a 2-dimensional subspace of Rd for any d). One-dimensional subspaces are lines, and a k-dimensional affine subspace is often called a k-flat.

A hyperplane is usually specified by a single linear equation of the form a1x1 + a2x2 + ··· + adxd = b. We usually write the left-hand side as the scalar product ⟨a, x⟩. So a hyperplane can be expressed as the set {x ∈ Rd: ⟨a, x⟩ = b}, where a ∈ Rd \ {0} and b ∈ R. A (closed) half-space in Rd is a set of the form {x ∈ Rd: ⟨a, x⟩ ≥ b} for some a ∈ Rd \ {0}; the hyperplane {x ∈ Rd: ⟨a, x⟩ = b} is its boundary.

General k-flats can be given either as intersections of hyperplanes or as affine images of Rk (parametric expression). In the first case, an intersection of k hyperplanes can also be viewed as a solution to a system Ax = b of linear equations, where x ∈ Rd is regarded as a column vector, A is a k × d matrix, and b ∈ Rk. (As a rule, in formulas involving matrices, we interpret points of Rd as column vectors.) An affine mapping f: Rk → Rd has the form f: y ↦ By + c for some d × k matrix B and some c ∈ Rd, so it is a composition of a linear map with a translation. The image of f is a k'-flat for some k' ≤ min(k, d). This k' equals the rank of the matrix B.

General position. "We assume that the points (lines, hyperplanes, ...) are in general position." This magical phrase appears in many proofs. Intuitively, general position means that no "unlikely coincidences" happen in the considered configuration. For example, if 3 points are chosen in the plane without any special intention, "randomly," they are unlikely to lie on a common line. For a planar point set in general position, we always require that no three of its points be collinear.
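Testing a small planar set for this basic form of general position is mechanical, since collinearity is a vanishing 2 × 2 determinant; a sketch (function names are my choices):

```python
from itertools import combinations

def collinear(a, b, c):
    """Three planar points lie on a common line iff the determinant of
    the vectors b - a and c - a (an affine dependence) vanishes."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]) == 0

def in_general_position(points):
    """Planar general position in the basic sense: no three points collinear."""
    return not any(collinear(a, b, c) for a, b, c in combinations(points, 3))

assert in_general_position([(0, 0), (1, 0), (2, 1)])
assert not in_general_position([(0, 0), (1, 1), (2, 2)])   # three collinear points
```

For exact input (integers or rationals) this test is exact; with floating-point coordinates one would compare the determinant against a tolerance instead of zero.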
For points in Rd in general position, we assume similarly that no unnecessary affine dependencies exist: No k ≤ d+1 points lie in a common (k−2)-flat. For lines in the plane in general position, we postulate that no 3 lines have a common point and no 2 are parallel.

The precise meaning of general position is not fully standard: It may depend on the particular context, and to the usual conditions mentioned above we sometimes add others where convenient. For example, for a planar point set in general position we can also suppose that no two points have the same x-coordinate.

What conditions are suitable for including into a "general position" assumption? In other words, what can be considered as an unlikely coincidence? For example, let X be an n-point set in the plane, and let the coordinates of the ith point be (xi, yi). Then the vector

  v(X) = (x1, x2, ..., xn, y1, y2, ..., yn)

can be regarded as a point of R2n. For a configuration X in which x1 = x2, i.e., the first and second points have the same x-coordinate, the point v(X) lies on the hyperplane {x1 = x2} in R2n. The configurations X where some two points share the x-coordinate thus correspond to the union of (n choose 2) hyperplanes in R2n. Since a hyperplane in R2n has (2n-dimensional) measure zero, almost all points of R2n correspond to planar configurations X with all the points having distinct x-coordinates. In particular, if X is any n-point planar configuration and ε > 0 is any given real number, then there is a configuration X', obtained from X by moving each point by distance at most ε, such that all points of X' have distinct x-coordinates. Not only that: Almost all small movements (perturbations) of X result in X' with this property.

This is the key property of general position: Configurations in general position lie arbitrarily close to any given configuration (and they abound in any small neighborhood of any given configuration).
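The "almost all small movements work" observation is exactly what one exploits in practice: a random perturbation repairs a degenerate configuration with probability 1. A minimal sketch (the configuration and the size of ε are my choices):

```python
import random

def perturb(points, eps):
    """Move each coordinate of each planar point by a random amount of at most eps."""
    return [(x + random.uniform(-eps, eps), y + random.uniform(-eps, eps))
            for (x, y) in points]

random.seed(1)
bad = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]   # all three share an x-coordinate
good = perturb(bad, 1e-6)
# With probability 1 the x-coordinates are now distinct: the degenerate
# outcomes lie on finitely many measure-zero hyperplanes in configuration space.
xs = [p[0] for p in good]
assert len(set(xs)) == len(xs)
```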
Here is a fairly general type of condition with this property. Suppose that a configuration X is specified by a vector t = (t1, t2, ..., tm) of m real numbers (coordinates). The objects of X can be points in Rd, in which case m = dn and the tj are the coordinates of the points, but they can also be circles in the plane, with m = 3n and the tj expressing the center and the radius of each circle, and so on. The general position condition we can put on the configuration X is p(t) = p(t1, t2, ..., tm) ≠ 0, where p is some nonzero polynomial in m variables. Here we use the following well-known fact (a consequence of Sard's theorem; see, e.g., Bredon [Bre93], Appendix C): For any nonzero m-variate polynomial p(t1, ..., tm), the zero set {t ∈ Rm: p(t) = 0} has measure 0 in Rm.

Therefore, almost all configurations X satisfy p(t) ≠ 0. So any condition that can be expressed as p(t) ≠ 0 for a certain polynomial p in m real variables, or, more generally, as p1(t) ≠ 0 or p2(t) ≠ 0 or ..., for finitely or countably many polynomials p1, p2, ..., can be included in a general position assumption.

For example, let X be an n-point set in Rd, and let us consider the condition "no d+1 points of X lie in a common hyperplane." In other words, no d+1 points should be affinely dependent. As we know, the affine dependence of d+1 points means that a suitable d × d determinant equals 0. This determinant is a polynomial (of degree d) in the coordinates of these d+1 points. Introducing one polynomial for every (d+1)-tuple of the points, we obtain (n choose d+1) polynomials such that at least one of them is 0 for any configuration X with d+1 points in a common hyperplane. Other usual conditions for general position can be expressed similarly.

In many proofs, assuming general position simplifies matters considerably. But what do we do with configurations X0 that are not in general position?
We have to argue, somehow, that if the statement being proved is valid for configurations X arbitrarily close to our X0, then it must be valid for X0 itself, too. Such proofs, usually called perturbation arguments, are often rather simple, and almost always somewhat boring. But sometimes they can be tricky, and one should not underestimate them, no matter how tempting this may be. A nontrivial example will be demonstrated in Section 5.5 (Lemma 5.5.4).

Exercises
1. Verify that the affine hull of a set X ⊂ Rd equals the set of all affine combinations of points of X.
2. Let A be a 2 × 3 matrix and let b ∈ R2. Interpret the solution of the system Ax = b geometrically (in most cases, as an intersection of two planes) and discuss the possible cases in algebraic and geometric terms.
3. (a) What are the possible intersections of two (2-dimensional) planes in R4? What is the "typical" case (general position)? What about two hyperplanes in R4?
   (b) Objects in R4 can sometimes be "visualized" as objects in R3 moving in time (so time is interpreted as the fourth coordinate). Try to visualize the intersection of two planes in R4 discussed in (a) in this way.

1.2 Convex Sets, Convex Combinations, Separation

Intuitively, a set is convex if its surface has no "dips" (a picture in the original shows a dented, nonconvex set as the forbidden example).

1.2.1 Definition (Convex set). A set C ⊂ Rd is convex if for every two points x, y ∈ C the whole segment xy is also contained in C. In other words, for every t ∈ [0, 1], the point tx + (1 − t)y belongs to C.

The intersection of an arbitrary family of convex sets is obviously convex. So we can define the convex hull of a set X ⊂ Rd, denoted by conv(X), as the intersection of all convex sets in Rd containing X. Here is a planar example with a finite X (the original shows a picture of a finite point set X and its convex hull conv(X), a convex polygon).

An alternative description of the convex hull can be given using convex combinations.

1.2.2 Claim. A point x belongs to conv(X) if and only if there exist points x1, x2, ...
, xn ∈ X and nonnegative real numbers t1, t2, ..., tn with Σ_{i=1}^n ti = 1 such that x = Σ_{i=1}^n ti xi.

The expression Σ_{i=1}^n ti xi as in the claim is called a convex combination of the points x1, x2, ..., xn. (Compare this with the definitions of linear and affine combinations.)

Sketch of proof. Each convex combination of points of X must lie in conv(X): For n = 2 this is by definition, and for larger n by induction. Conversely, the set of all convex combinations obviously contains X, and it is convex. □

In Rd, it is sufficient to consider convex combinations involving at most d+1 points:

1.2.3 Theorem (Carathéodory's theorem). Let X ⊂ Rd. Then each point of conv(X) is a convex combination of at most d+1 points of X.

For example, in the plane, conv(X) is the union of all triangles with vertices at points of X. The proof of the theorem is left as an exercise to the subsequent section.

A basic result about convex sets is the separability of disjoint convex sets by a hyperplane.

1.2.4 Theorem (Separation theorem). Let C, D ⊂ Rd be convex sets with C ∩ D = ∅. Then there exists a hyperplane h such that C lies in one of the closed half-spaces determined by h, and D lies in the opposite closed half-space. In other words, there exist a unit vector a ∈ Rd and a number b ∈ R such that for all x ∈ C we have ⟨a, x⟩ ≥ b, and for all x ∈ D we have ⟨a, x⟩ ≤ b. If C and D are closed and at least one of them is bounded, they can be separated strictly, in such a way that C ∩ h = D ∩ h = ∅.

In particular, a closed convex set can be strictly separated from a point. This implies that the convex hull of a closed set X equals the intersection of all closed half-spaces containing X.

Sketch of proof. First assume that C and D are compact (i.e., closed and bounded). Then the Cartesian product C × D is a compact space, too, and the distance function (x, y) ↦ ||x − y|| attains its minimum on C × D.
That is, there exist points p ∈ C and q ∈ D such that the distance of C and D equals the distance of p and q. The desired separating hyperplane h can be taken as the one perpendicular to the segment pq and passing through its midpoint (as in a picture in the original). It is easy to check that h indeed avoids both C and D.

If D is compact and C closed, we can intersect C with a large ball and get a compact set C'. If the ball is sufficiently large, then C and C' have the same distance to D. So the distance of C and D is attained at some p ∈ C' and q ∈ D, and we can use the previous argument.

For arbitrary disjoint convex sets C and D, we choose a sequence C1 ⊂ C2 ⊂ C3 ⊂ ··· of compact convex subsets of C with ∪_{n=1}^∞ Cn = C. For example, assuming that 0 ∈ C, we can let Cn be the intersection of the closure of (1 − 1/n)C with the ball of radius n centered at 0. A similar sequence D1 ⊂ D2 ⊂ ··· is chosen for D, and we let hn = {x ∈ Rd: ⟨an, x⟩ = bn} be a hyperplane separating Cn from Dn, where an is a unit vector and bn ∈ R. The sequence (bn)_{n=1}^∞ is bounded, and by compactness, the sequence of (d+1)-component vectors (an, bn) ∈ Rd+1 has a cluster point (a, b). One can verify, by contradiction, that the hyperplane h = {x ∈ Rd: ⟨a, x⟩ = b} separates C and D (nonstrictly). □

The importance of the separation theorem is documented by its presence in several branches of mathematics in various disguises. Its home territory is probably functional analysis, where it is formulated and proved for infinite-dimensional spaces; essentially it is the so-called Hahn-Banach theorem. The usual functional-analytic proof is different from the one we gave, and in a way it is more elegant and conceptual. The proof sketched above uses more special properties of Rd, but it is quite short and intuitive in the case of compact C and D.

Connection to linear programming. A basic result in the theory of linear programming is the Farkas lemma.
It is a special case of the duality of linear programming (discussed in Section 10.1) as well as the key step in its proof.

1.2.5 Lemma (Farkas lemma, one of many versions). For every d × n real matrix A, exactly one of the following cases occurs:

(i) The system of linear equations Ax = 0 has a nontrivial nonnegative solution x ∈ Rn (all components of x are nonnegative and at least one of them is strictly positive).

(ii) There exists a y ∈ Rd such that yᵀA is a vector with all entries strictly negative. Thus, if we multiply the jth equation in the system Ax = 0 by yj and add these equations together, we obtain an equation that obviously has no nontrivial nonnegative solution, since all the coefficients on the left-hand side are strictly negative, while the right-hand side is 0.

Proof. Let us see why this is yet another version of the separation theorem. Let V ⊂ Rd be the set of n points given by the column vectors of the matrix A. We distinguish two cases: Either 0 ∈ conv(V) or 0 ∉ conv(V). In the former case, we know that 0 is a convex combination of the points of V, and the coefficients of this convex combination determine a nontrivial nonnegative solution to Ax = 0. In the latter case, there exists a hyperplane strictly separating V from 0, i.e., a unit vector y ∈ Rd such that ⟨y, v⟩ < ⟨y, 0⟩ = 0 for each v ∈ V. This is just the y from the second alternative in the Farkas lemma. □

Bibliography and remarks. Most of the material in this chapter is quite old and can be found in many surveys and textbooks. Providing historical accounts of such well-covered areas is not among the goals of this book, and so we mention only a few references for the specific results discussed in the text and add some remarks concerning related results.

The concept of convexity and the rudiments of convex geometry have been around since antiquity.
The initial chapter of the Handbook of Convex Geometry [GW93] succinctly describes the history, and the handbook can be recommended as the basic source on questions related to convexity, although knowledge has progressed significantly since its publication. For an introduction to functional analysis, including the Hahn-Banach theorem, see Rudin [Rud91], for example.

The Farkas lemma originated in [Far94] (nineteenth century!). More on the history of the duality of linear programming can be found, e.g., in Schrijver's book [Sch86].

As for the origins, generalizations, and applications of Carathéodory's theorem, as well as of Radon's lemma and Helly's theorem discussed in the subsequent sections, a recommendable survey is Eckhoff [Eck93], and an older well-known source is Danzer, Grünbaum, and Klee [DGK63]. Carathéodory's theorem comes from the paper [Car07], concerning power series and harmonic analysis. A somewhat similar theorem, due to Steinitz [Ste16], asserts that if x lies in the interior of conv(X) for an X ⊂ Rd, then it also lies in the interior of conv(Y) for some Y ⊂ X with |Y| ≤ 2d. Bonnice and Klee [BK63] proved a common generalization of both these theorems: Any k-interior point of X is a k-interior point of Y for some Y ⊂ X with at most max(2k, d+1) points, where x is called a k-interior point of X if it lies in the relative interior of the convex hull of some k+1 affinely independent points of X.

Exercises
1. Give a detailed proof of Claim 1.2.2.
2. Write down a detailed proof of the separation theorem.
3. Find an example of two disjoint closed convex sets in the plane that are not strictly separable.
4. Let f: Rd → Rk be an affine map.
   (a) Prove that if C ⊂ Rd is convex, then f(C) is convex as well. Is the preimage of a convex set always convex?
   (b) For X ⊂ Rd arbitrary, prove that f(conv(X)) = conv(f(X)).
5. Let X ⊂ Rd.
Prove that diam(conv(X)) = diam(X), where the diameter diam(Y) of a set Y is sup{‖x − y‖ : x, y ∈ Y}.

6. A set C ⊂ R^d is a convex cone if it is convex and for each x ∈ C, the ray from the origin through x is fully contained in C.
(a) Analogously to the convex and affine hulls, define the appropriate "conic hull" and the corresponding notion of "combination" (analogous to the convex and affine combinations).
(b) Let C be a convex cone in R^d and b ∉ C a point. Prove that there exists a vector a with ⟨a, x⟩ ≥ 0 for all x ∈ C and ⟨a, b⟩ < 0.

7. (Variations on the Farkas lemma) Let A be a d × n matrix and let b ∈ R^d.
(a) Prove that the system Ax = b has a nonnegative solution x ∈ R^n if and only if every y ∈ R^d satisfying y^T A ≥ 0 also satisfies y^T b ≥ 0.
(b) Prove that the system of inequalities Ax ≤ b has a nonnegative solution x if and only if every nonnegative y ∈ R^d with y^T A ≥ 0 also satisfies y^T b ≥ 0.

8. (a) Let C ⊂ R^d be a compact convex set with a nonempty interior, and let p ∈ C be an interior point. Show that there exists a line ℓ passing through p such that the segment ℓ ∩ C is at least as long as any segment parallel to ℓ and contained in C.
(b) Show that (a) may fail for C compact but not convex.

1.3 Radon's Lemma and Helly's Theorem

Carathéodory's theorem from the previous section, together with Radon's lemma and Helly's theorem presented here, constitute three basic properties of convexity in R^d involving the dimension. We begin with Radon's lemma.

1.3.1 Theorem (Radon's lemma). Let A be a set of d+2 points in R^d. Then there exist two disjoint subsets A1, A2 ⊂ A such that conv(A1) ∩ conv(A2) ≠ ∅.

A point x ∈ conv(A1) ∩ conv(A2), where A1 and A2 are as in the theorem, is called a Radon point of A, and the pair (A1, A2) is called a Radon partition of A (it is easily seen that we can require A1 ∪ A2 = A). Here are two possible cases in the plane:

Proof. Let A = {a1, a2, …
, a_{d+2}}. These d+2 points are necessarily affinely dependent. That is, there exist real numbers α1, …, α_{d+2}, not all of them 0, such that ∑_{i=1}^{d+2} α_i = 0 and ∑_{i=1}^{d+2} α_i a_i = 0.

Set P = {i : α_i > 0} and N = {i : α_i < 0}. Both P and N are nonempty. We claim that P and N determine the desired subsets. Let us put A1 = {a_i : i ∈ P} and A2 = {a_i : i ∈ N}. We are going to exhibit a point x that is contained in the convex hulls of both these sets.

Put S = ∑_{i∈P} α_i; we also have S = −∑_{i∈N} α_i. Then we define

x = ∑_{i∈P} (α_i/S) a_i,    (1.1)

and since ∑_{i=1}^{d+2} α_i a_i = 0, the same point satisfies

x = ∑_{i∈N} (−α_i/S) a_i.    (1.2)

The coefficients of the a_i in (1.1) are nonnegative and sum to 1, so x is a convex combination of points of A1. Similarly, (1.2) expresses x as a convex combination of points of A2. □

Helly's theorem is one of the most famous results of a combinatorial nature about convex sets.

1.3.2 Theorem (Helly's theorem). Let C1, C2, …, Cn be convex sets in R^d, n ≥ d+1. Suppose that the intersection of every d+1 of these sets is nonempty. Then the intersection of all the C_i is nonempty.

The first nontrivial case states that if every 3 among 4 convex sets in the plane intersect, then there is a point common to all 4 sets. This can be proved by an elementary geometric argument, perhaps distinguishing a few cases, and the reader may want to try to find a proof before reading further.

In a contrapositive form, Helly's theorem guarantees that whenever C1, C2, …, Cn are convex sets with ∩_{i=1}^n C_i = ∅, then this is witnessed by some at most d+1 sets with empty intersection among the C_i. In this way, many proofs are greatly simplified, since in planar problems, say, one can deal with 3 convex sets instead of an arbitrary number, as is amply illustrated in the exercises below.

It is very tempting and quite usual to formulate Helly's theorem as follows: "If every d+1 among n convex sets in R^d intersect, then all the sets intersect."
But, strictly speaking, this is false, for a trivial reason: For d ≥ 2, the assumption as stated here is met by n = 2 disjoint convex sets.

Proof of Helly's theorem. (Using Radon's lemma.) For a fixed d, we proceed by induction on n. The case n = d+1 is clear, so we suppose that n ≥ d+2 and that the statement of Helly's theorem holds for smaller n. Actually, n = d+2 is the crucial case; the result for larger n follows at once by a simple induction.

Consider sets C1, C2, …, Cn satisfying the assumptions. If we leave out any one of these sets, the remaining sets have a nonempty intersection by the inductive assumption. Let us fix a point a_i ∈ ∩_{j≠i} C_j and consider the points a1, a2, …, a_{d+2}. By Radon's lemma, there exist disjoint index sets I1, I2 ⊂ {1, 2, …, d+2} such that

conv({a_j : j ∈ I1}) ∩ conv({a_j : j ∈ I2}) ≠ ∅.

We pick a point x in this intersection. The following picture illustrates the case d = 2 and n = 4:

We claim that x lies in the intersection of all the C_i. Consider some i ∈ {1, 2, …, n}; since I1 and I2 are disjoint, i ∉ I1 or i ∉ I2. In the former case, each a_j with j ∈ I1 lies in C_i, and so x ∈ conv({a_j : j ∈ I1}) ⊂ C_i. For i ∉ I2 we similarly conclude that x ∈ conv({a_j : j ∈ I2}) ⊂ C_i. Therefore, x ∈ ∩_{i=1}^n C_i. □

An infinite version of Helly's theorem. If we have an infinite collection of convex sets in R^d such that any d+1 of them have a common point, the entire collection still need not have a common point. Two examples in R^1 are the families of intervals {(0, 1/n) : n = 1, 2, …} and {[n, ∞) : n = 1, 2, …}. The sets in the first example are not closed, and the second example uses unbounded sets. For compact (i.e., closed and bounded) sets, the theorem holds:

1.3.3 Theorem (Infinite version of Helly's theorem). Let C be an arbitrary infinite family of compact convex sets in R^d such that any d+1 of the sets have a nonempty intersection. Then all the sets of C have a nonempty intersection.

Proof.
By Helly's theorem, any finite subfamily of C has a nonempty intersection. By a basic property of compactness, if we have an arbitrary family of compact sets such that each of its finite subfamilies has a nonempty intersection, then the entire family has a nonempty intersection. □

Several nice applications of Helly's theorem are indicated in the exercises below, and we will meet a few more later in this book.

Bibliography and remarks. Helly proved Theorem 1.3.2 in 1913 and communicated it to Radon, who published a proof in [Rad21]. This proof uses Radon's lemma, although the statement wasn't explicitly formulated in Radon's paper. References to many other proofs and generalizations can be found in the already mentioned surveys [Eck93] and [DGK63].

Helly's theorem inspired a whole industry of Helly-type theorems. A family B of sets is said to have Helly number h if the following holds: Whenever a finite subfamily F ⊂ B is such that every h or fewer sets of F have a common point, then ∩F ≠ ∅. So Helly's theorem says that the family of all convex sets in R^d has Helly number d+1. More generally, let P be some property of families of sets that is hereditary, meaning that if F has property P and F' ⊂ F, then F' has P as well. A family B is said to have Helly number h with respect to P if for every finite F ⊂ B, all subfamilies of F of size at most h having P implies F having P. That is, the absence of P is always witnessed by some at most h sets, so it is a "local" property.

Exercises

1. Prove Carathéodory's theorem (you may use Radon's lemma).

2. Let K ⊂ R^d be a convex set and let C1, …, Cn ⊂ R^d, n ≥ d+1, be convex sets such that the intersection of every d+1 of them contains a translated copy of K. Prove that then the intersection of all the sets C_i also contains a translated copy of K. This result was noted by Vincensini [Vin39] and by Klee [Kle53].

3.
Find an example of 4 convex sets in the plane such that the intersection of each 3 of them contains a segment of length 1, but the intersection of all 4 contains no segment of length 1.

4. A strip of width w is a part of the plane bounded by two parallel lines at distance w. The width of a set X ⊂ R^2 is the smallest width of a strip containing X.
(a) Prove that a compact convex set of width 1 contains a segment of length 1 of every direction.
(b) Let {C1, C2, …, Cn} be closed convex sets in the plane, n ≥ 3, such that the intersection of every 3 of them has width at least 1. Prove that ∩_{i=1}^n C_i has width at least 1.
The result as in (b), for arbitrary dimension d, was proved by Sallee [Sal75], and a simple argument using Helly's theorem was noted by Buchman and Valentine [BV82].

5. Statement: Each set X ⊂ R^2 of diameter at most 1 (i.e., any 2 points have distance at most 1) is contained in some disc of radius 1/√3.
(a) Prove the statement for 3-element sets X.
(b) Prove the statement for all finite sets X.
(c) Generalize the statement to R^d: determine the smallest r = r(d) such that every set of diameter 1 in R^d is contained in a ball of radius r (prove your claim).
The result as in (c) is due to Jung; see [DGK63].

6. Let C ⊂ R^d be a compact convex set. Prove that the mirror image of C can be covered by a suitable translate of C blown up by the factor of d; that is, there is an x ∈ R^d with −C ⊂ x + dC.

7. (a) Prove that if the intersection of each 4 or fewer among convex sets C1, …, Cn ⊂ R^2 contains a ray, then ∩_{i=1}^n C_i also contains a ray.
(b) Show that the number 4 in (a) cannot be replaced by 3.
This result, and an analogous one in R^d with the Helly number 2d, are due to Katchalski [Kat78].

8. For a set X ⊂ R^2 and a point x ∈ X, let us denote by V(x) the set of all points y ∈ X that can "see" x, i.e., points such that the segment xy is contained in X.
The kernel of X is defined as the set of all points x ∈ X such that V(x) = X. A set with a nonempty kernel is called star-shaped.
(a) Prove that the kernel of any set is convex.
(b) Prove that if V(x) ∩ V(y) ∩ V(z) ≠ ∅ for every x, y, z ∈ X and X is compact, then X is star-shaped. That is, if every 3 paintings in a (planar) art gallery can be seen at the same time from some location (possibly different for different triples of paintings), then all paintings can be seen simultaneously from somewhere. If it helps, assume that X is a polygon.
(c) Construct a nonempty set X ⊂ R^2 such that each of its finite subsets can be seen from some point of X but X is not star-shaped.
The result in (b), as well as the d-dimensional generalization (with every d+1 regions V(x) intersecting), is called Krasnosel'skii's theorem; see [Eck93] for references and related results.

9. In the situation of Radon's lemma (A is a (d+2)-point set in R^d), call a point x ∈ R^d a Radon point of A if it is contained in the convex hulls of two disjoint subsets of A. Prove that if A is in general position (no d+1 points affinely dependent), then its Radon point is unique.

10. (a) Let X, Y ⊂ R^2 be finite point sets, and suppose that for every subset S ⊂ X ∪ Y of at most 4 points, S ∩ X can be separated (strictly) by a line from S ∩ Y. Prove that X and Y are line-separable.
(b) Extend (a) to sets X, Y ⊂ R^d, with |S| ≤ d+2.
The result (b) is called Kirchberger's theorem [Kir03].

1.4 Centerpoint and Ham Sandwich

We prove an interesting result as an application of Helly's theorem.

1.4.1 Definition (Centerpoint). Let X be an n-point set in R^d. A point x ∈ R^d is called a centerpoint of X if each closed half-space containing x contains at least n/(d+1) points of X.

Let us stress that one set may generally have many centerpoints, and a centerpoint need not belong to X. The notion of centerpoint can be viewed as a generalization of the median of one-dimensional data.
Suppose that x1, …, xn ∈ R are results of measurements of an unknown real parameter x. How do we estimate x from the x_i? We can use the arithmetic mean, but if one of the measurements is completely wrong (say, 100 times larger than the others), we may get quite a bad estimate. A more "robust" estimate is a median, i.e., a point x such that at least n/2 of the x_i lie in the interval (−∞, x] and at least n/2 of them lie in [x, ∞). The centerpoint can be regarded as a generalization of the median for higher-dimensional data.

In the definition of centerpoint we could replace the fraction 1/(d+1) by some other parameter α ∈ (0, 1). For α > 1/(d+1), such an "α-centerpoint" need not always exist: Take d+1 points in general position for X. With α = 1/(d+1) as in the definition above, a centerpoint always exists, as we prove next.

Centerpoints are important, for example, in some algorithms of divide-and-conquer type, where they help divide the considered problem into smaller subproblems. Since no really efficient algorithms are known for finding "exact" centerpoints, the algorithms often use α-centerpoints with a suitable α < 1/(d+1), which are easier to find.

1.4.2 Theorem (Centerpoint theorem). Each finite point set in R^d has at least one centerpoint.

Proof. First we note an equivalent definition of a centerpoint: x is a centerpoint of X if and only if it lies in each open half-space γ such that |X ∩ γ| > (d/(d+1))·n.

We would like to apply Helly's theorem to conclude that all these open half-spaces intersect. But we cannot proceed directly, since we have infinitely many half-spaces and they are open and unbounded. Instead of such an open half-space γ, we thus consider the compact convex set conv(X ∩ γ) ⊂ γ.

Letting γ run through all open half-spaces with |X ∩ γ| > (d/(d+1))·n, we obtain a family C of compact convex sets.
Each of them contains more than (d/(d+1))·n points of X, and so the intersection of any d+1 of them contains at least one point of X (each of the d+1 sets misses fewer than n/(d+1) points of X). The family C consists of finitely many distinct sets (since X has finitely many distinct subsets), and so ∩C ≠ ∅ by Helly's theorem. Each point in this intersection is a centerpoint. □

In the definition of a centerpoint we can regard the finite set X as defining a distribution of mass in R^d. The centerpoint theorem asserts that for some point x, any half-space containing x encloses at least 1/(d+1) of the total mass. It is not difficult to show that this remains valid for continuous mass distributions, or even for arbitrary Borel probability measures on R^d (Exercise 1).

Ham-sandwich theorem and its relatives. Here is another important result, not much related to convexity but with a flavor resembling the centerpoint theorem.

1.4.3 Theorem (Ham-sandwich theorem). Every d finite sets in R^d can be simultaneously bisected by a hyperplane. A hyperplane h bisects a finite set A if each of the open half-spaces defined by h contains at most ⌊|A|/2⌋ points of A.

This theorem is usually proved via continuous mass distributions using a tool from algebraic topology: the Borsuk–Ulam theorem. Here we omit a proof.

Note that if A_i has an odd number of points, then every h bisecting A_i passes through a point of A_i. Thus if A1, …, Ad all have odd sizes and their union is in general position, then every hyperplane simultaneously bisecting them is determined by d points, one of each A_i. In particular, there are only finitely many such hyperplanes.

Again, an analogous ham-sandwich theorem holds for arbitrary d Borel probability measures in R^d.

Center transversal theorem. There can be beautiful new things to discover even in well-studied areas of mathematics. A good example is the following recent result, which "interpolates" between the centerpoint theorem and the ham-sandwich theorem.

1.4.4 Theorem (Center transversal theorem).
Let 1 ≤ k ≤ d and let A1, A2, …, Ak be finite point sets in R^d. Then there exists a (k−1)-flat f such that for every hyperplane h containing f, both the closed half-spaces defined by h contain at least |A_i|/(d−k+2) points of A_i, i = 1, 2, …, k.

The ham-sandwich theorem is obtained for k = d and the centerpoint theorem for k = 1. The proof, which we again have to omit, is based on a result of algebraic topology, too, but it uses a considerably more advanced machinery than the ham-sandwich theorem. However, the weaker result with d+1 instead of d−k+2 is easy to prove; see Exercise 2.

Bibliography and remarks. The centerpoint theorem was established by Rado [Rad47]. According to Steinlein's survey [Ste85], the ham-sandwich theorem was conjectured by Steinhaus (who also invented the popular 3-dimensional interpretation, namely, that the ham, the cheese, and the bread in any ham sandwich can be simultaneously bisected by a single straight motion of the knife) and proved by Banach. The center transversal theorem was found by Dol'nikov [Dol'92] and, independently, by Živaljević and Vrećica [ZV90].

Significant effort has been devoted to efficient algorithms for finding (approximate) centerpoints and ham-sandwich cuts (i.e., hyperplanes as in the ham-sandwich theorem). In the plane, a ham-sandwich cut for two n-point sets can be computed in linear time (Lo, Matoušek, and Steiger [LMS94]). In a higher but fixed dimension, the complexity of the best exact algorithms is currently slightly better than O(n^{d−1}). A centerpoint in the plane, too, can be found in linear time (Jadhav and Mukhopadhyay [JM94]). Both approximate ham-sandwich cuts (in the ratio 1 : 1+ε for a fixed ε > 0) and approximate centerpoints ((1/(d+1) − ε)-centerpoints) can be computed in time O(n) for every fixed dimension d and every fixed ε > 0, but the constant depends exponentially on d, and the algorithms are impractical if the dimension is not quite small.
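For very small planar instances, the centerpoint condition can be verified directly from the definition. The sketch below (my own illustration, not one of the algorithms cited above; all names are hypothetical) computes the exact halfspace depth of a candidate point by an angular sweep:

```python
import math
from itertools import product

def halfplane_depth(x, points):
    """Minimum, over all closed half-planes containing x, of the number of
    points in the half-plane.  The minimizing half-plane can be assumed to
    have x on its boundary; as its normal direction rotates, the count only
    changes at directions orthogonal to some p - x, so it suffices to test
    those critical directions and the midpoints of the arcs between them."""
    angles = []
    for p in points:
        dx, dy = p[0] - x[0], p[1] - x[1]
        if dx == 0 and dy == 0:
            continue
        a = math.atan2(dy, dx)
        angles += [(a + math.pi / 2) % (2 * math.pi),
                   (a - math.pi / 2) % (2 * math.pi)]
    angles.sort()
    k = len(angles)
    mids = [(angles[i] + angles[(i + 1) % k]
             + (2 * math.pi if i == k - 1 else 0)) / 2 for i in range(k)]
    best = len(points)
    for a in angles + mids:
        u, w = math.cos(a), math.sin(a)
        cnt = sum(1 for p in points
                  if (p[0] - x[0]) * u + (p[1] - x[1]) * w >= -1e-12)
        best = min(best, cnt)
    return best

X = list(product([-1, 0, 1], repeat=2))  # the 3x3 grid, n = 9
# In the plane a centerpoint must have depth at least n/(d+1) = 9/3 = 3.
print(halfplane_depth((0, 0), X) >= 3)   # True: (0,0) is a centerpoint
```

By the central symmetry of this example, every closed half-plane through the origin contains the origin plus at least one point of each symmetric pair, so the depth is 5 here, comfortably above the n/3 threshold.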
A practically efficient randomized algorithm for computing approximate centerpoints in high dimensions (α-centerpoints with α of order 1/d²) was given by Clarkson, Eppstein, Miller, Sturtivant, and Teng [CEM+96].

Exercises

1. (Centerpoints for general mass distributions)
(a) Let μ be a Borel probability measure on R^d; that is, μ(R^d) = 1 and each open set is measurable. Show that for each open half-space γ with μ(γ) > t there exists a compact set C ⊂ γ with μ(C) > t.
(b) Prove that each Borel probability measure on R^d has a centerpoint (use (a) and the infinite Helly's theorem).

2. Prove that for any k finite sets A1, …, Ak ⊂ R^d, where 1 ≤ k ≤ d, there exists a (k−1)-flat such that every hyperplane containing it has at least |A_i|/(d+1) points of A_i in both of its closed half-spaces, for all i = 1, 2, …, k.

2 Lattices and Minkowski's Theorem

This chapter is a quick excursion into the geometry of numbers, a field where number-theoretic results are proved by geometric arguments, often using properties of convex bodies in R^d. We formulate the simple but beautiful theorem of Minkowski on the existence of a nonzero lattice point in every symmetric convex body of sufficiently large volume. We derive several consequences, concluding with a geometric proof of the famous theorem of Lagrange claiming that every natural number can be written as the sum of at most 4 squares.

2.1 Minkowski's Theorem

In this section we consider the integer lattice Z^d, and so a lattice point is a point in R^d with integer coordinates. The following theorem can be used in many interesting situations to establish the existence of lattice points with certain properties.

2.1.1 Theorem (Minkowski's theorem). Let C ⊂ R^d be symmetric (around the origin, i.e., C = −C), convex, bounded, and suppose that vol(C) > 2^d. Then C contains at least one lattice point different from 0.

Proof. We put C' = ½C = {½x : x ∈ C}.
Claim: There exists a nonzero integer vector v ∈ Z^d \ {0} such that C' ∩ (C' + v) ≠ ∅; i.e., C' and a translate of C' by an integer vector intersect.

Proof: By contradiction; suppose the claim is false. Let R be a large integer. Consider the family C of translates of C' by the integer vectors in the cube [−R, R]^d: C = {C' + v : v ∈ [−R, R]^d ∩ Z^d}. Each such translate is disjoint from C', and thus every two of these translates are disjoint as well. They are all contained in the enlarged cube K = [−R−D, R+D]^d, where D denotes the diameter of C'. Hence

(2R + 2D)^d = vol(K) ≥ |C| · vol(C') = (2R + 1)^d · vol(C'),

and vol(C') ≤ ((2R + 2D)/(2R + 1))^d. The expression on the right-hand side is arbitrarily close to 1 for sufficiently large R. On the other hand, vol(C') = 2^{−d} vol(C) > 1 is a fixed number exceeding 1 by a certain amount independent of R, a contradiction. The claim thus holds. □

Now let us fix a v ∈ Z^d as in the claim and let us choose a point x ∈ C' ∩ (C' + v). Then we have x − v ∈ C', and since C' is symmetric, we obtain v − x ∈ C'. Since C' is convex, the midpoint of the segment from x to v − x lies in C' too, and so we have ½x + ½(v − x) = ½v ∈ C'. This means that v ∈ C, which proves Minkowski's theorem. □

2.1.2 Example (About a regular forest). Let K be a circle of diameter 26 (meters, say) centered at the origin. Trees of diameter 0.16 grow at each lattice point within K except for the origin, which is where you are standing. Prove that you cannot see outside this miniforest.

Proof. Suppose that one could see outside along some line ℓ passing through the origin. This means that the strip S of width 0.16 with ℓ as its middle line contains no lattice point in K except for the origin. In other words, the symmetric convex set C = K ∩ S contains no lattice points but the origin.
But as is easy to calculate, vol(C) > 4, which contradicts Minkowski's theorem. □

2.1.3 Proposition (Approximating an irrational number by a fraction). Let α ∈ (0, 1) be a real number and N a natural number. Then there exists a pair of natural numbers m, n such that n ≤ N and

|α − m/n| ≤ 1/(nN).

This proposition implies that there are infinitely many pairs m, n such that |α − m/n| < 1/n² (Exercise 4). This is a basic and well-known result in elementary number theory. It can also be proved using the pigeonhole principle. The proposition has an analogue concerning the approximation of several numbers α1, …, αk by fractions with a common denominator (see Exercise 5), and there a proof via Minkowski's theorem seems to be the simplest.

Proof of Proposition 2.1.3. Consider the set

C = {(x, y) ∈ R^2 : −N − ½ ≤ x ≤ N + ½, |αx − y| ≤ 1/N}.

This is a symmetric convex set (a thin strip around the line y = αx) of area (2N + 1)·(2/N) > 4, and therefore it contains some nonzero integer lattice point (n, m). By symmetry, we may assume n > 0. The definition of C gives n ≤ N and |αn − m| ≤ 1/N. In other words, |α − m/n| ≤ 1/(nN). □

Bibliography and remarks. The name "geometry of numbers" was coined by Minkowski, who initiated a systematic study of this field (although related ideas appeared in earlier works). He proved Theorem 2.1.1, in a more general form mentioned later on, in 1891 (see [Min96]). His first application was a theorem on simultaneously making linear forms small (Exercise 2.2.4). While geometry of numbers originated as a tool in number theory, for questions in Diophantine approximation and quadratic forms, today it also plays a significant role in several other diverse areas, such as coding theory, cryptography, the theory of uniform distribution, and numerical integration.

Theorem 2.1.1 is often called Minkowski's first theorem. What is, then, Minkowski's second theorem?
We answer this natural question in the notes to Section 2.2, where we also review a few more of the basic results in the geometry of numbers and point to some interesting connections and directions of research.

Most of our exposition in this chapter follows a similar chapter in Pach and Agarwal [PA95]. Older books on the geometry of numbers are Cassels [Cas59] and Gruber and Lekkerkerker [GL87]. A pleasant but somewhat aged introduction is Siegel [Sie89]. Gruber [Gru93] provides a concise recent overview.

Exercises

1. Prove: If C ⊂ R^d is convex, symmetric around the origin, bounded, and such that vol(C) > k·2^d, then C contains at least 2k lattice points.

2. By the method of the proof of Minkowski's theorem, show the following result (Blichfeldt; van der Corput): If S ⊂ R^d is measurable and vol(S) > k, then there are points s1, s2, …, sk ∈ S with s_i − s_j ∈ Z^d for all 1 ≤ i, j ≤ k.

3. Show that the boundedness of C in Minkowski's theorem is not really necessary.

4. (a) Verify the claim made after Proposition 2.1.3, namely, that for any irrational α there are infinitely many pairs m, n such that |α − m/n| < 1/n².
(b) Prove that for α = √2 there are only finitely many pairs m, n with |α − m/n| < 1/(4n²).
(c) Show that for any algebraic irrational number α (i.e., a root of a univariate polynomial with integer coefficients) there exists a constant D such that |α − m/n| < 1/n^D holds for finitely many pairs (m, n) only. Conclude that, for example, the number ∑_{i≥1} 2^{−i!} is not algebraic.

5. (a) Let α1, α2 ∈ (0, 1) be real numbers. Prove that for a given N ∈ N there exist m1, m2, n ∈ N, n ≤ N², such that |α_i − m_i/n| ≤ 1/(nN), i = 1, 2.
(b) Formulate and prove an analogous result for the simultaneous approximation of d real numbers by rationals with a common denominator. (This is a result of Dirichlet [Dir42].)

6.
Let K ⊂ R^2 be a compact convex set of area a and let x be a point chosen uniformly at random in [0, 1]².
(a) Prove that the expected number of points of Z² in the set K + x equals a.
(b) Show that with probability at least 1 − a, K + x contains no point of Z².

2.2 General Lattices

Let z1, z2, …, zd be a d-tuple of linearly independent vectors in R^d. We define the lattice with basis {z1, z2, …, zd} as the set of all linear combinations of the z_i with integer coefficients; that is,

Λ = Λ(z1, z2, …, zd) = {i1 z1 + i2 z2 + ⋯ + id zd : i1, i2, …, id ∈ Z}.

Let us remark that this lattice has in general many different bases. For instance, the sets {(0, 1), (1, 0)} and {(1, 0), (3, 1)} are both bases of the "standard" lattice Z².

Let us form a d × d matrix Z with the vectors z1, …, zd as columns. We define the determinant of the lattice Λ = Λ(z1, z2, …, zd) as det Λ = |det Z|. Geometrically, det Λ is the volume of the parallelepiped {a1 z1 + a2 z2 + ⋯ + ad zd : a1, …, ad ∈ [0, 1]} (the proof is left to Exercise 1). The number det Λ is indeed a property of the lattice Λ (as a point set), and it does not depend on the choice of the basis of Λ (Exercise 2). It is not difficult to show that if Z is the matrix of some basis of Λ, then the matrix of every basis of Λ has the form ZU, where U is an integer matrix with determinant ±1.

2.2.1 Theorem (Minkowski's theorem for general lattices). Let Λ be a lattice in R^d, and let C ⊂ R^d be a symmetric convex set with vol(C) > 2^d det Λ. Then C contains a point of Λ different from 0.

Proof. Let {z1, …, zd} be a basis of Λ. We define a linear mapping f: R^d → R^d by f(x1, x2, …, xd) = x1 z1 + x2 z2 + ⋯ + xd zd. Then f is a bijection and Λ = f(Z^d). For any convex set X, we have vol(f(X)) = det(Λ) · vol(X). (Sketch of proof: This holds if X is a cube, and a convex set can be approximated by a disjoint union of sufficiently small cubes with arbitrary precision.) Let us put C' = f^{−1}(C).
This is a symmetric convex set with vol(C') = vol(C)/det Λ > 2^d. Minkowski's theorem provides a nonzero vector v ∈ C' ∩ Z^d, and f(v) is the desired point as in the theorem. □

A seemingly more general definition of a lattice. What if we consider integer linear combinations of more than d vectors in R^d? Some caution is necessary: If we take d = 1 and the vectors v1 = (1), v2 = (√2), then the integer linear combinations i1 v1 + i2 v2 are dense in the real line (by Proposition 2.1.3), and such a set is not what we would like to call a lattice. In order to exclude such pathology, we define a discrete subgroup of R^d as a set Λ ⊂ R^d such that whenever x, y ∈ Λ, then also x − y ∈ Λ, and such that the distance of any two distinct points of Λ is at least δ, for some fixed positive real number δ > 0. It can be shown, for instance, that if v1, v2, …, vn ∈ R^d are vectors with rational coordinates, then the set Λ of all their integer linear combinations is a discrete subgroup of R^d (Exercise 3). As the following theorem shows, any discrete subgroup of R^d whose linear span is all of R^d is a lattice in the sense of the definition given at the beginning of this section.

2.2.2 Theorem (Lattice basis theorem). Let Λ ⊂ R^d be a discrete subgroup of R^d whose linear span is R^d. Then Λ has a basis; that is, there exist d linearly independent vectors z1, z2, …, zd ∈ R^d such that Λ = Λ(z1, z2, …, zd).

Proof. We proceed by induction. For some i, 1 ≤ i ≤ d+1, suppose that linearly independent vectors z1, z2, …, z_{i−1} ∈ Λ with the following property have already been constructed. If F_{i−1} denotes the (i−1)-dimensional subspace spanned by z1, …, z_{i−1}, then all points of Λ lying in F_{i−1} can be written as integer linear combinations of z1, …, z_{i−1}. For i = d+1, this gives the statement of the theorem.

So consider an i ≤ d. Since Λ generates R^d, there exists a vector w ∈ Λ not lying in the subspace F_{i−1}.
Let P be the i-dimensional parallelepiped determined by z1, z2, …, z_{i−1} and by w: P = {a1 z1 + a2 z2 + ⋯ + a_{i−1} z_{i−1} + a_i w : a1, …, a_i ∈ [0, 1]}. Among all the (finitely many) points of Λ lying in P but not in F_{i−1}, choose one nearest to F_{i−1} and call it z_i, as in the picture:

Note that if the points of Λ ∩ P are written in the form a1 z1 + a2 z2 + ⋯ + a_{i−1} z_{i−1} + a_i w, then z_i is one with the smallest a_i. It remains to show that z1, z2, …, z_i have the required property. So let v ∈ Λ be a point lying in F_i (the linear span of z1, …, z_i). We can write v = β1 z1 + β2 z2 + ⋯ + β_i z_i for some real numbers β1, …, β_i. Let γ_j be the fractional part of β_j, j = 1, 2, …, i; that is, γ_j = β_j − ⌊β_j⌋. Put v' = γ1 z1 + γ2 z2 + ⋯ + γ_i z_i. This point also lies in Λ (since v and v' differ by an integer linear combination of vectors of Λ). We have 0 ≤ γ_j < 1, and hence v' lies in the parallelepiped P. Therefore, we must have γ_i = 0, for otherwise, v' would be nearer to F_{i−1} than z_i. Hence v' ∈ Λ ∩ F_{i−1}, and by the inductive hypothesis, we also get that all the other γ_j are 0. So all the β_j are in fact integers, and the inductive step is finished. □

Therefore, a lattice can also be defined as a full-dimensional discrete subgroup of R^d.

Bibliography and remarks. First we mention several fundamental theorems in the "classical" geometry of numbers.

Lattice packing and the Minkowski–Hlawka theorem. For a compact C ⊂ R^d, the lattice constant Δ(C) is defined as min{det(Λ) : Λ ∩ C = {0}}, where the minimum is over all lattices Λ in R^d (it can be shown by a suitable compactness argument, known as the compactness theorem of Mahler, that the minimum is attained). The ratio vol(C)/Δ(C) is the smallest number D = D(C) for which the Minkowski-like result holds: Whenever D · det(Λ) < vol(C), we have C ∩ Λ ≠ {0}.
It is also easy to check that 2^{−d} D(C) equals the maximum density of a lattice packing of C; i.e., the fraction of R^d that can be filled by the set C + Λ for some lattice Λ such that all the translates C + v, v ∈ Λ, have pairwise disjoint interiors. A basic result (obtained by an averaging argument) is the Minkowski–Hlawka theorem, which shows that D ≥ 1 for all star-shaped compact sets C. If C is star-shaped and symmetric, then we have the improved lower bound (better packing) D ≥ 2ζ(d) = 2 ∑_{n=1}^∞ n^{−d}. This brings us to the fascinating field of lattice packings, which we do not pursue in this book; a nice geometric introduction is in the first half of the book by Pach and Agarwal [PA95], and an authoritative reference is Conway and Sloane [CS99]. Let us remark that the lattice constant (and hence the maximum lattice packing density) is not known in general even for Euclidean spheres, and many ingenious constructions and arguments have been developed for packing them efficiently. These problems also have close connections to error-correcting codes.

Successive minima and Minkowski's second theorem. Let C ⊂ R^d be a convex body containing 0 in the interior and let Λ ⊂ R^d be a lattice. The ith successive minimum of C with respect to Λ, denoted by λ_i = λ_i(C, Λ), is the infimum of the scaling factors λ > 0 such that λC contains at least i linearly independent vectors of Λ. In particular, λ1 is the smallest number for which λ1 C contains a nonzero lattice vector, and Minkowski's theorem guarantees that λ1^d ≤ 2^d det(Λ)/vol(C). Minkowski's second theorem asserts

(2^d/d!) det(Λ) ≤ λ1 λ2 ⋯ λd · vol(C) ≤ 2^d det(Λ).

The flatness theorem. If a convex body K is not required to be symmetric about 0, then it can have arbitrarily large volume without containing a lattice point.
But any lattice-point free body has to be flat: For every dimension d there exists c(d) such that any convex body K ⊂ R^d with K ∩ Z^d = ∅ has lattice width at most c(d). The lattice width of K is defined as min{max_{x∈K} ⟨x, y⟩ − min_{x∈K} ⟨x, y⟩ : y ∈ Z^d \ {0}}; geometrically, we essentially count the number of hyperplanes orthogonal to y, spanned by points of Z^d, and intersecting K. Such a result was first proved by Khintchine in 1948, and the current best bound c(d) = O(d^{3/2}) is due to Banaszczyk, Litvak, Pajor, and Szarek [BLPS99]; we also refer to this paper for more references.

Computing lattice points in convex bodies. Minkowski's theorem provides the existence of nonzero lattice points in certain convex bodies. Given one of these bodies, how efficiently can one actually compute a nonzero lattice point in it? More generally, given a convex body in R^d, how difficult is it to decide whether it contains a lattice point, or to count all lattice points? For simplicity, we consider only the integer lattice Z^d here.

First, if the dimension d is considered as a constant, such problems can be solved efficiently, at least in theory. An algorithm due to Lenstra [Len83] finds in polynomial time an integer point, if one exists, in a given convex polytope in R^d, d fixed. It is based on the flatness theorem mentioned above (the ideas are also explained in many other sources, e.g., [GLS88], [Lov86], [Sch86], [Bar97]). More recently, Barvinok [Bar93] (or see [Bar97]) provided a polynomial-time algorithm for counting the integer points in a given fixed-dimensional convex polytope. Both algorithms are nice and certainly nontrivial, and especially the latter can be recommended as a neat application of classical mathematical results in a new context. On the other hand, if the dimension d is considered as a part of the input, then (exact) calculations with lattices tend to be algorithmically difficult.
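In fixed small dimension, even naive enumeration works: the sketch below (the function name and search bound are mine) finds a shortest nonzero vector of a planar lattice by trying all small coefficient pairs. It also hints at why the input dimension matters, since this search space grows exponentially with d.

```python
import itertools
import math

def shortest_vector(basis, search=25):
    """Shortest nonzero vector of the planar lattice spanned by `basis`,
    by brute-force enumeration of integer coefficients in [-search, search].
    Feasible in the plane; in high dimension the search space explodes."""
    best = None
    for i, j in itertools.product(range(-search, search + 1), repeat=2):
        if (i, j) == (0, 0):
            continue
        v = (i * basis[0][0] + j * basis[1][0],
             i * basis[0][1] + j * basis[1][1])
        if best is None or math.hypot(*v) < math.hypot(*best):
            best = v
    return best

v = shortest_vector([(1.0, 0.0), (0.5, 2.0)])
print(abs(v[0]), abs(v[1]))   # the shortest vector here is (+-1, 0)
```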
Most of the difficult problems of combinatorial optimization can be formulated as instances of integer programming, where a given linear function should be minimized over the set of integer points in a given convex polytope. This problem is well known to be NP-hard, and so is the problem of deciding whether a given convex polytope contains an integer point (both problems are actually polynomially equivalent). For an introduction to integer programming see, e.g., Schrijver [Sch86].

Some much more special problems concerning lattices have also been shown to be algorithmically difficult. For example, finding a shortest (nonzero) vector in a given lattice Λ specified by a basis is NP-hard (with respect to randomized polynomial-time reductions). In the notation introduced above, we are asking for λ1(B^d, Λ), the first successive minimum of the ball. This took quite some time to prove (Micciancio [Mic98] has obtained the strongest result to date, inapproximability up to the factor of √2, building on earlier work mainly of Ajtai), although the analogous hardness result for the shortest vector in the maximum norm (i.e., λ1([−1, 1]^d, Λ)) has been known for a long time.

Basis reduction and applications. Although finding the shortest vector of a lattice Λ is algorithmically difficult, the shortest vector can be approximated in the following sense: For every ε > 0 there is a polynomial-time algorithm that, given a basis of a lattice Λ in R^d, computes a nonzero vector of Λ whose length is at most (1+ε)^d times the length of the shortest vector of Λ; this was proved by Schnorr [Sch87]. The first result of this type, with a worse bound on the approximation factor, was obtained in the seminal work of Lenstra, Lenstra, and Lovász [LLL82]. The LLL algorithm, as it is called, computes not only a single short vector but a whole "short" basis of Λ.
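In the plane, the classical Lagrange–Gauss reduction already achieves exactly what LLL achieves approximately in higher dimension: a sequence of local improvements that terminates with a basis whose first vector is a shortest vector of the lattice. A sketch of this 2-dimensional special case (not the LLL algorithm itself; names are mine):

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis: repeatedly
    subtract the rounded projection of the longer vector onto the
    shorter one, a classical planar analogue of LLL reduction."""
    u, v = list(u), list(v)
    while True:
        if u[0]**2 + u[1]**2 > v[0]**2 + v[1]**2:
            u, v = v, u                      # keep u the shorter vector
        m = round((u[0]*v[0] + u[1]*v[1]) / (u[0]**2 + u[1]**2))
        if m == 0:
            return u, v                      # no local improvement left
        v = [v[0] - m*u[0], v[1] - m*u[1]]   # the local improvement step

# A badly skewed basis of Z^2 gets reduced to the standard one:
u, v = gauss_reduce([1, 1], [1001, 1000])
print(u, v)
```

On the skewed input above the algorithm returns the unit vectors (the input basis has determinant −1, so it spans all of Z²).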
The key notion in the algorithm is that of a reduced basis of Λ; intuitively, this means a basis that cannot be much improved (made significantly shorter) by a simple local transformation. There are many technically different notions of reduced bases. Some of them are classical and have been considered by mathematicians such as Gauss and Lagrange. The definition of the Lovász-reduced basis used in the LLL algorithm is sufficiently relaxed so that a reduced basis can be computed from any initial basis by polynomially many local improvements, and, at the same time, is strong enough to guarantee that a reduced basis is relatively short. These results are covered in many sources; the thin book by Lovász [Lov86] can still be recommended as a delightful introduction. Numerous refinements of the LLL algorithm, as well as efficient implementations, are available.

We sketch an ingenious application of the LLL algorithm to polynomial factorization (from Kannan, Lenstra, and Lovász [KLL88]; the original LLL technique is somewhat different). Assume for simplicity that we want to factor a monic polynomial p(x) ∈ Z[x] (integer coefficients, leading coefficient 1) into a product of factors irreducible over Z[x]. By numerical methods we can compute a root α of p(x) with very high precision. If we can find the minimal polynomial of α, i.e., the lowest-degree monic polynomial q(x) ∈ Z[x] with q(α) = 0, then we are done, since q(x) is irreducible and divides p(x). Let us write q(x) = x^d + a_{d−1} x^{d−1} + ··· + a0. Let K be a large number and let us consider the (d+1)-dimensional lattice Λ in R^{d+2} with basis (K, 1, 0, ..., 0), (Kα, 0, 1, 0, ..., 0), (Kα², 0, 0, 1, 0, ..., 0), ..., (Kα^d, 0, ..., 0, 1). Combining the basis vectors with the coefficients a0, a1, ..., a_{d−1}, 1, respectively, we obtain the vector v0 = (0, a0, a1, ..., a_{d−1}, 1) ∈ Λ.
It turns out that if K is sufficiently large compared to the a_i, then v0 is the shortest nonzero vector of Λ, and moreover, every vector not much longer than v0 is a multiple of v0. The LLL algorithm applied to Λ thus finds v0, and this yields q(x). Of course, we do not know the degree of q(x) in advance, but we can test all possible degrees one by one, and the required magnitude of K can be estimated from the coefficients of p(x).

The LLL algorithm has been used for the knapsack problem and for the subset sum problem. Typically, the applications are problems where one needs to express a given number (or vector) as a linear combination of given numbers (or vectors) with small integer coefficients. For example, the subset sum problem asks, for given integers a1, a2, ..., a_n and b, for a subset I ⊆ {1, 2, ..., n} with ∑_{i∈I} a_i = b; i.e., b should be expressed as a linear combination of the a_i with 0/1 coefficients. These and many other significant applications can be found in Grötschel, Lovász, and Schrijver [GLS88]. In cryptography, several cryptographic systems proposed in the literature were broken with the help of the LLL algorithm (references are listed, e.g., in [GLS88], [Dwo97]). On the other hand, lattices play a prominent role in recent constructions, mainly due to Ajtai, of new cryptographic systems. While currently the security of every known efficient cryptographic system depends on an (unproven) assumption of hardness of a certain computational problem, Ajtai's methods get by with a considerably weaker and more plausible assumption than those required by the previous systems (see [Ajt98] or [Dwo97] for an introduction).

Exercises
1. Let v1, ..., v_d be linearly independent vectors in R^d. Form a matrix A with v1, ..., v_d as rows. Prove that |det A| is equal to the volume of the parallelepiped {a1 v1 + a2 v2 + ··· + a_d v_d : a1, ..., a_d ∈ [0, 1]}. (You may want to start with d ≤ 2.)
2.
Prove that if z1, ..., z_d and z′1, ..., z′_d are vectors in R^d such that Λ(z1, ..., z_d) = Λ(z′1, ..., z′_d), then |det Z| = |det Z′|, where Z is the d×d matrix with the z_i as columns, and similarly for Z′.
3. Prove that for rational vectors v1, ..., v_n ∈ R^d, the set Λ = {i1 v1 + i2 v2 + ··· + i_n v_n : i1, i2, ..., i_n ∈ Z} is a discrete subgroup of R^d.
4. (Minkowski's theorem on linear forms) Prove the following from Minkowski's theorem: Let f_i(x) = ∑_{j=1}^d a_{ij} x_j be linear forms in d variables, i = 1, 2, ..., d, such that the d×d matrix (a_{ij})_{i,j} has determinant 1. Let b1, ..., b_d be positive real numbers with b1 b2 ··· b_d = 1. Then there exists a nonzero integer vector z ∈ Z^d \ {0} with |f_i(z)| ≤ b_i for all i = 1, 2, ..., d.

2.3 An Application in Number Theory

We prove one nontrivial result of elementary number theory. The proof via Minkowski's theorem is one of several possible proofs; another proof uses the pigeonhole principle in a clever way.

2.3.1 Theorem (Two-square theorem). Each prime p ≡ 1 (mod 4) can be written as a sum of two squares: p = a² + b², a, b ∈ Z.

Let F = GF(p) stand for the field of residue classes modulo p, and let F* = F \ {0}. An element a ∈ F* is called a quadratic residue modulo p if there exists an x ∈ F* with x² ≡ a (mod p). Otherwise, a is a quadratic nonresidue.

2.3.2 Lemma. If p is a prime with p ≡ 1 (mod 4), then −1 is a quadratic residue modulo p.

Proof. The equation i² = 1 has exactly two solutions in the field F, namely i = 1 and i = −1. Hence for any i ≠ ±1 there exists exactly one j ≠ i with ij = 1 (namely, j = i^{−1}, the inverse element in F), and all the elements of F* \ {−1, 1} can be divided into pairs such that the product of the elements in each pair is 1. Therefore, (p−1)! = 1 · 2 ··· (p−1) ≡ −1 (mod p). For a contradiction, suppose that the equation i² = −1 has no solution in F.
Then all the elements of F* can be divided into pairs such that the product of the elements in each pair is −1. There are (p−1)/2 pairs, which is an even number. Hence (p−1)! ≡ (−1)^{(p−1)/2} = 1 (mod p), a contradiction. □

Proof of Theorem 2.3.1. By the lemma, we can choose a number q such that q² ≡ −1 (mod p). Consider the lattice Λ = Λ(z1, z2), where z1 = (1, q) and z2 = (0, p). We have det Λ = p. We use Minkowski's theorem for general lattices (Theorem 2.2.1) for the disk C = {(x, y) ∈ R² : x² + y² < 2p}. The area of C is 2πp > 4p = 4 det Λ, and so C contains a point (a, b) ∈ Λ \ {0}. We have 0 < a² + b² < 2p. At the same time, (a, b) = i z1 + j z2 for some i, j ∈ Z, which means that a = i, b = iq + jp. We calculate

a² + b² = i² + (iq + jp)² = i² + i²q² + 2iqjp + j²p² ≡ i²(1 + q²) ≡ 0 (mod p).

Therefore a² + b² = p. □

Bibliography and remarks. The fact that every prime congruent to 1 mod 4 can be written as the sum of two squares was already known to Fermat (a more rigorous proof was given by Euler). The possibility of expressing every natural number as a sum of at most 4 squares was proved by Lagrange in 1770, as a part of his work on quadratic forms. The proof indicated in Exercise 1 below is due to Davenport.

Exercises
1. (Lagrange's four-square theorem) Let p be a prime.
(a) Show that there exist integers a, b with a² + b² ≡ −1 (mod p).
(b) Show that the set Λ = {(x, y, z, t) ∈ Z⁴ : z ≡ ax + by (mod p), t ≡ bx − ay (mod p)} is a lattice, and compute det(Λ).
(c) Show the existence of a nonzero point of Λ in a ball of a suitable radius, and infer that p can be written as a sum of 4 squares of integers.
(d) Show that any natural number can be written as a sum of 4 squares of integers.

3 Convex Independent Subsets

Here we consider geometric Ramsey-type results about finite point sets in the plane.
Ramsey-type theorems are generally statements of the following type: Every sufficiently large structure of a given type contains a "regular" substructure of a prescribed size. In the forthcoming Erdős–Szekeres theorem (Theorem 3.1.3), the "structure of a given type" is simply a finite set of points in general position in R², and the "regular substructure" is a set of points forming the vertex set of a convex polygon.

A prototype of Ramsey-type results is Ramsey's theorem itself: For every choice of natural numbers p, r, n, there exists a natural number N such that whenever X is an N-element set and c : (X choose p) → {1, 2, ..., r} is an arbitrary coloring of the system of all p-element subsets of X by r colors, then there is an n-element subset Y ⊆ X such that all the p-tuples in (Y choose p) have the same color. The most famous special case is that with p = r = 2, where (X choose 2) is interpreted as the edge set of the complete graph K_N on N vertices. Ramsey's theorem asserts that if each of the edges of K_N is colored red or blue, we can always find a complete subgraph on n vertices with all edges red or all edges blue.

Many of the geometric Ramsey-type theorems, including the Erdős–Szekeres theorem, can be derived from Ramsey's theorem. But the quantitative bound for the N in Ramsey's theorem is very large, and consequently, the size of the "regular" configurations guaranteed by proofs via Ramsey's theorem is very small. Other proofs, tailored to the particular problems and using more of their geometric structure, often yield much better quantitative results.

3.1 The Erdős–Szekeres Theorem

3.1.1 Definition (Convex independent set). We say that a set X ⊆ R^d is convex independent if for every x ∈ X, we have x ∉ conv(X \ {x}). The phrase "in convex position" is sometimes used synonymously with "convex independent."
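Convex independence of a planar point set is easy to test. The sketch below (function names are mine) uses the orientation determinant, together with the observation that by Carathéodory's theorem, x ∈ conv(X \ {x}) in the plane already forces x to lie in a triangle spanned by three of the other points:

```python
from itertools import combinations

def orient(a, b, c):
    """Positive if a, b, c make a left (counterclockwise) turn."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_triangle(p, a, b, c):
    """Does p lie in the closed triangle abc?"""
    s = [orient(a, b, p), orient(b, c, p), orient(c, a, p)]
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)

def convex_independent(pts):
    """By Caratheodory's theorem, x lies in conv(X minus {x}) iff it lies
    in a triangle spanned by three other points, so triangles suffice."""
    return not any(in_triangle(p, *t)
                   for p in pts
                   for t in combinations([q for q in pts if q != p], 3))

print(convex_independent([(0, 0), (4, 0), (4, 3), (0, 3)]))  # square vertices
print(convex_independent([(0, 0), (4, 0), (2, 1), (4, 3)]))  # (2,1) is interior
```

The first set (vertices of a rectangle) is convex independent; the second is not, since (2, 1) lies in the triangle spanned by the other three points.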
In the plane, a finite convex independent set is the set of vertices of a convex polygon. We will discuss results concerning the occurrence of convex independent subsets in sufficiently large point sets. Here is a simple example of such a statement.

3.1.2 Proposition. Among any 5 points in the plane in general position (no 3 collinear), we can find 4 points forming a convex independent set.

Proof. If the convex hull has 4 or 5 vertices, we are done. Otherwise, we have a triangle with two points inside, and the two interior points together with one of the sides of the triangle define a convex quadrilateral. □

Next, we prove a general result.

3.1.3 Theorem (Erdős–Szekeres theorem). For every natural number k there exists a number n(k) such that any n(k)-point set X ⊂ R² in general position contains a k-point convex independent subset.

First proof (using Ramsey's theorem and Proposition 3.1.2). Color a 4-tuple T ⊆ X red if its four points are convex independent and blue otherwise. If n is sufficiently large, Ramsey's theorem provides a k-point subset Y ⊆ X such that all 4-tuples from Y have the same color. But for k ≥ 5 this color cannot be blue, because any 5 points determine at least one red 4-tuple. Consequently, Y is convex independent, since every 4 of its points are (Carathéodory's theorem). □

Next, we give an inductive proof; it yields an almost tight bound for n(k).

Second proof of the Erdős–Szekeres theorem. In this proof, by a set in general position we mean a set with no 3 points on a common line and no 2 points having the same x-coordinate. The latter can always be achieved by rotating the coordinate system. Let X be a finite point set in the plane in general position. We call X a cup if X is convex independent and its convex hull is bounded from above by a single edge (in other words, if the points of X lie on the graph of a convex function).
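In coordinates, the cup condition just defined amounts to a chain of orientation tests. A brief sketch (the function name is mine; the sign convention is that a positive determinant means a left turn):

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc; positive for a left turn."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def is_cup(points):
    """Points with distinct x-coordinates form a cup iff, sorted by x,
    every consecutive triple makes a left turn, i.e., the points lie
    on the graph of a convex function."""
    p = sorted(points)
    return all(orient(p[i], p[i+1], p[i+2]) > 0 for i in range(len(p) - 2))

print(is_cup([(0, 0), (1, -1), (3, 0), (4, 2)]))  # lies on a convex graph
print(is_cup([(0, 0), (1, 1), (2, 0)]))           # these form a cap instead
```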
Similarly, we define a cap, with a single edge bounding the convex hull from below. A k-cap is a cap with k points, and similarly for an ℓ-cup. We define f(k, ℓ) as the smallest number N such that any N-point set in general position contains a k-cup or an ℓ-cap. By induction on k and ℓ, we prove the following formula for f(k, ℓ):

f(k, ℓ) ≤ (k+ℓ−4 choose k−2) + 1.   (3.1)

Theorem 3.1.3 clearly follows from this, with n(k) ≤ f(k, k). For k ≤ 2 or ℓ ≤ 2 the formula holds. Thus, let k, ℓ ≥ 3, and consider a set X in general position with N = f(k−1, ℓ) + f(k, ℓ−1) − 1 points. We prove that it contains a k-cup or an ℓ-cap. This will establish the inequality f(k, ℓ) ≤ f(k−1, ℓ) + f(k, ℓ−1) − 1, and then (3.1) follows by induction; we leave the simple manipulation of binomial coefficients to the reader.

Suppose that there is no ℓ-cap in X. Let E ⊆ X be the set of points p ∈ X such that X contains a (k−1)-cup ending with p. We have |E| ≥ N − f(k−1, ℓ) + 1 = f(k, ℓ−1), because X \ E contains no (k−1)-cup and so |X \ E| ≤ f(k−1, ℓ) − 1. Either the set E contains a k-cup, and then we are done, or there is an (ℓ−1)-cap in E. The first point p of such an (ℓ−1)-cap is, by the definition of E, the last point of some (k−1)-cup in X, and in this situation, either the cup or the cap can be extended by one point: if the last edge of the cup has smaller slope than the first edge of the cap, then the cup extended by the second point of the cap is a k-cup; otherwise, the cap extended by the next-to-last point of the cup is an ℓ-cap. This finishes the inductive step. □

A lower bound for sets without k-cups and ℓ-caps. Interestingly, the bound for f(k, ℓ) proved above is tight, not only asymptotically but exactly! This means, in particular, that there are n-point planar sets in general position in which every convex independent subset has at most O(log n) points, which is somewhat surprising at first sight.
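The binomial-coefficient manipulation left to the reader above can also be checked mechanically: the recurrence, with value 2 on the boundary, is solved exactly by the closed form in (3.1). A small sketch:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f_bound(k, l):
    """Upper bound on f(k, l) from the inductive proof:
    f(k, l) <= f(k-1, l) + f(k, l-1) - 1, with the value 2
    when k <= 2 or l <= 2 (two points always form a 2-cup/2-cap)."""
    if k <= 2 or l <= 2:
        return 2
    return f_bound(k - 1, l) + f_bound(k, l - 1) - 1

# The recurrence agrees with the closed form C(k+l-4, k-2) + 1,
# which is just Pascal's rule applied to the binomial coefficient.
print(all(f_bound(k, l) == comb(k + l - 4, k - 2) + 1
          for k in range(2, 12) for l in range(2, 12)))   # → True
```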
An example of a set X_{k,ℓ} of (k+ℓ−4 choose k−2) points in general position with no k-cup and no ℓ-cap can be constructed, again by induction on k + ℓ. If k ≤ 2 or ℓ ≤ 2, then X_{k,ℓ} can be taken as a one-point set. Supposing both k ≥ 3 and ℓ ≥ 3, the set X_{k,ℓ} is obtained from the sets L = X_{k−1,ℓ} and R = X_{k,ℓ−1} as follows: The set L is placed to the left of R in such a way that all lines determined by pairs of points in L go below R and all lines determined by pairs of points of R go above L.

Consider a cup C in the set X_{k,ℓ} thus constructed. If C ∩ L = ∅, then |C| ≤ k−1 by the assumption on R. If C ∩ L ≠ ∅, then C has at most 1 point in R, and since no cup in L has more than k−2 points, we get |C| ≤ k−1 as well. The argument for caps is symmetric. We have |X_{k,ℓ}| = |X_{k−1,ℓ}| + |X_{k,ℓ−1}|, and the formula for |X_{k,ℓ}| follows by induction; the calculation is almost the same as in the previous proof. □

Determining the exact value of n(k) in the Erdős–Szekeres theorem is much more challenging. Here are the best known bounds:

2^{k−2} + 1 ≤ n(k) ≤ (2k−5 choose k−2) + 2.

The upper bound is a small improvement over the bound f(k, k) derived above; see Exercise 5. The lower bound results from an inductive construction slightly more complicated than that of X_{k,ℓ}.

Bibliography and remarks. A recent survey of the topics discussed in the present chapter is Morris and Soltan [MS00].

The Erdős–Szekeres theorem was one of the first Ramsey-type results [ES35], and Erdős and Szekeres independently rediscovered the general Ramsey's theorem on that occasion. Still another proof, also using Ramsey's theorem, was noted by Tarsi: Let the points of X be numbered x1, x2, ...
, x_n, and color the triple {x_i, x_j, x_k}, i < j < k, red if we make a right turn when going from x_i to x_k via x_j, and blue if we make a left turn. It is not difficult to check that a homogeneous subset, with all triples having the same color, is in convex position.

The original upper bound of n(k) ≤ (2k−4 choose k−2) + 1 from [ES35] has been improved only recently and very slightly; the last improvement, to the bound stated in the text above, is due to Tóth and Valtr [TV98].

The Erdős–Szekeres theorem was generalized to planar convex sets. The following somewhat misleading term is used: A family of pairwise disjoint convex sets is in general position if no set is contained in the convex hull of the union of two other sets of the family. For every k there exists n such that in any family of n pairwise disjoint convex sets in the plane in general position, there are k sets in convex position, meaning that none of them is contained in the convex hull of the union of the others. This was shown by Bisztriczky and G. Fejes Tóth [BT89] and, with a different proof and a better quantitative bound, by Pach and Tóth [PT98]. The assumption of general position is necessary.

An interesting problem is the generalization of the Erdős–Szekeres theorem to R^d, d ≥ 3. The existence of n_d(k) such that every n_d(k) points in R^d in general position contain a k-point subset in convex position is easy to see (Exercise 4), but the order of magnitude is wide open. The current best upper bound n_d(k) ≤ (2k−2d−1 choose k−d) + d [Kar01] slightly improves the immediate bound. Füredi (unpublished) conjectured that n3(k) ≤ e^{O(√k)}. If true, this would be best possible: A construction of Károlyi and Valtr [KV01] shows that for every fixed d ≥ 3, n_d(k) ≥ e^{c_d k^{1/(d−1)}} with a suitable c_d > 0. The construction starts with a one-point set X0, and X_{i+1} is obtained from X_i by replacing each point x ∈ X_i by the two points x − (ε_i^d, ε_i^{d−1}, ..., ε_i) and x + (ε_i^d, ε_i^{d−1}, ...
, ε_i), with ε_i > 0 sufficiently small, and then perturbing the resulting set very slightly, so that X_{i+1} is in suitable general position. We have |X_i| = 2^i, and the key lemma asserts that mc(X_{i+1}) ≤ mc(X_i) + mc(π(X_i)), where mc(X) denotes the maximum size of a convex independent subset of X and π is the projection to the hyperplane {x_d = 0}. Another interesting generalization of the Erdős–Szekeres theorem to R^d is mentioned in Exercise 5.4.3.

The bounds in the Erdős–Szekeres theorem were also investigated for special point sets, namely, for the so-called dense sets in the plane. An n-point set X ⊂ R² is called c-dense if the ratio of the maximum and minimum distances of points in X is at most c√n. For every planar n-point set, this ratio is at least c0 √n for a suitable constant c0 > 0, as an easy volume argument shows, and so the dense sets are quite well spread. Improving on slightly weaker results of Alon, Katchalski, and Pulleyblank [AKP89], Valtr [Val92a] showed, by a probabilistic argument, that every c-dense n-point set in general position contains a convex independent subset of at least c1 n^{1/3} points, for some c1 > 0 depending on c, and he proved that this bound is asymptotically optimal. Simplified proofs, as well as many other results on dense sets, can be found in Valtr's thesis [Val94].

(The reader should be warned that four mathematicians named Tóth are mentioned throughout the book. For two of them, the surname is actually Fejes Tóth (László and Gábor), and for the other two it is just Tóth (Géza and Csaba).)

Exercises
1. Find a configuration of 8 points in general position in the plane with no 5 convex independent points (thereby showing that n(5) ≥ 9).
2. Prove that the set {(i, j) : i = 1, 2, ..., m, j = 1, 2, ..., m} contains no convex independent subset with more than Cm^{2/3} points (with C some constant independent of m).
3.
Prove that for each k there exists n(k) such that each n(k)-point set in the plane contains a k-point convex independent subset or k points lying on a common line.
4. Prove an Erdős–Szekeres theorem in R^d: For every k there exists n = n_d(k) such that any n points in R^d in general position contain a k-point convex independent subset.
5. (A small improvement on the upper bound on n(k)) Let X ⊂ R² be a planar set in general position with f(k, ℓ)+1 points, where f is as in the second proof of the Erdős–Szekeres theorem, and let t be the (unique) topmost point of X. Prove that X contains a k-cup with respect to t or an ℓ-cap with respect to t, where a cup with respect to t is a subset Y ⊆ X \ {t} such that Y ∪ {t} is in convex position, and a cap with respect to t is a subset Y ⊆ X \ {t} such that {x, y, z, t} is not in convex position for any triple {x, y, z} ⊆ Y. Infer that n(k) ≤ f(k−1, k) + 1.
6. Show that the construction of X_{k,ℓ} described in the text can be realized on a polynomial-size grid. That is, if we let n = |X_{k,ℓ}|, we may suppose that the coordinates of all points in X_{k,ℓ} are integers between 1 and n^c with a suitable constant c. (This was observed by Valtr.)

3.2 Horton Sets

Let X be a set in R^d. A k-point set Y ⊆ X is called a k-hole in X if Y is convex independent and conv(Y) ∩ X = Y. In the plane, Y determines a convex k-gon with no points of X inside. Erdős raised the question of the following rather natural strengthening of the Erdős–Szekeres theorem: Is it true that for every k there exists an n(k) such that any n(k)-point set in the plane in general position has a k-hole? A construction due to Horton, whose streamlined version we present below, shows that this is false for k ≥ 7: There are arbitrarily large sets without a 7-hole. On the other hand, a positive result holds for k ≤ 5. For k = 6, the answer is not known, and this "6-hole problem" appears quite challenging.

3.2.1 Proposition (The existence of a 5-hole).
Every sufficiently large planar point set in general position contains a 5-hole.

Proof. By the Erdős–Szekeres theorem, we may assume that there exists a 6-point convex independent subset of our set X. Consider a 6-point convex independent subset H ⊆ X with the smallest possible |X ∩ conv(H)|. Let I = conv(H) ∩ (X \ H) be the set of points inside the convex hull of H.

• If I = ∅, we have a 6-hole.
• If there is a single point x in I, we consider a diagonal that partitions the hexagon into two quadrilaterals: The point x lies in one of these quadrilaterals, and the vertices of the other quadrilateral together with x form a 5-hole.
• If |I| ≥ 2, we choose an edge xy of conv(I). Let γ be an open half-plane bounded by the line xy and containing no points of I (it is determined uniquely unless |I| = 2). If |γ ∩ H| ≥ 3, we get a 5-hole formed by x, y, and 3 points of γ ∩ H. If |γ ∩ H| ≤ 2, then γ ∩ H consists of two points u, v or of a single point u. By replacing u and v by x and y in the former situation, or u by x in the latter, we obtain a 6-point convex independent set having fewer points inside than H, which is a contradiction. □

3.2.2 Theorem (Seven-hole theorem). There exist arbitrarily large finite sets in the plane in general position without a 7-hole.

The sets constructed in the proof have other interesting properties as well.

Definitions. Let X and Y be finite sets in the plane. We say that X is high above Y (and that Y is deep below X) if the following hold:
(i) No line determined by two points of X ∪ Y is vertical.
(ii) Each line determined by two points of X lies above all the points of Y.
(iii) Each line determined by two points of Y lies below all the points of X.

For a set X = {x1, x2, ..., x_n}, with no two points having equal x-coordinates and with the notation chosen so that the x-coordinates of the x_i increase with i, we define the sets X0 = {x2, x4, ...
} (consisting of the points with even indices) and X1 = {x1, x3, ...} (consisting of the points with odd indices). A finite set H ⊂ R² is a Horton set if |H| ≤ 1, or the following conditions hold: |H| > 1, both H0 and H1 are Horton sets, and H1 lies high above H0 or H0 lies high above H1.

3.2.3 Lemma. For every n ≥ 1, an n-point Horton set exists.

Proof. We note that one can produce a smaller Horton set from a larger one by deleting points from the right. We construct H(k), a Horton set of size 2^k, by induction. We define H(0) as the point (0, 0). Suppose that we can construct a Horton set H(k) with 2^k points whose x-coordinates are 0, 1, ..., 2^k − 1. The induction step goes as follows: Let A = 2H(k) (i.e., H(k) expanded twice), and B = A + (1, h_k), where h_k is a sufficiently large number. We set H(k+1) = A ∪ B. It is easily seen that if h_k is large enough, B lies high above A, and so H(k+1) is Horton as well. □

Closedness from above and from below. A set X ⊆ R² is r-closed from above if for any r-cup in X there exists a point in X lying above the r-cup (i.e., above the bottom part of its convex hull). Similarly, we define a set r-closed from below using r-caps.

3.2.4 Lemma. Every Horton set is both 4-closed from above and 4-closed from below.

Proof. We proceed by induction on the size of the Horton set. Let H be a Horton set, and assume that H0 lies deep below H1 (the other possible case is analogous). Let C ⊆ H be a 4-cup. If C ⊆ H0 or C ⊆ H1, then a point closing C from above exists by the inductive hypothesis. Thus, let C ∩ H0 ≠ ∅ ≠ C ∩ H1.
The cup C may have at most 2 points in H1 (the upper part): If there were 3 such points, say a, b, c (in left-to-right order), then H0 lies below the lines ab and bc, and so the remaining point of C, which was supposed to lie in H0, cannot form a cup with {a, b, c}. This means that C has at least 2 points, a and b, in the lower part H0. Since the points of H0 and H1 alternate along the x-axis, there is a point c ∈ H1 between a and b in the ordering by x-coordinates. This c lies above the segment ab, and so it closes the cup C from above. We argue similarly for a 4-cap. □

3.2.5 Proposition. No Horton set contains a 7-hole.

Proof. (Very similar to the previous one.) For contradiction, suppose there is a 7-hole X in the considered Horton set H. If X ⊆ H0 or X ⊆ H1, we use induction. Otherwise, we select the part (H0 or H1) containing the larger portion of X; this part has at least 4 points of X. If this part is, say, H0, and it lies deep below H1, these 4 points must form a cup in H0, for if some 3 of them were a cap, no point of H1 could complete them to a convex independent set. By Lemma 3.2.4, H0 (being a Horton set) contains a point closing the 4-cup from above. Such a point must be contained in the convex hull of the 7-hole X, a contradiction. □

Bibliography and remarks. The existence of a 5-hole in every 10-point planar set in general position was proved by Harborth [Har79]. Horton [Hor83] constructed arbitrarily large sets without a 7-hole; we followed a presentation of his construction according to Valtr [Val92a].

The question of the existence of k-holes can be generalized to point sets in R^d. Valtr [Val92b] proved that (2d+1)-holes exist in all sufficiently large sets in general position in R^d, and he constructed arbitrarily large sets without k-holes for k ≥ 2^{d−1}(P(d−1)+1), where P(d−1) is the product of the first d−1 primes. We outline the construction. Let H
Let H 38 Chapter 3: Convex Independent Subsets be a finite set in Rd, d > 2, in general position (no d+1 on a common hyperplane and no two sharing the value of any coordinate). Let H = { x1, x2, . . . , X11.} be enumeration of H by increasing first coordinate, and let Hq,r = {xi: i - r (mod q)}. Let Pl = 2,p2 = 3, . . . ,Pd-1 be the first d-1 primes, and let us write p = Pd-l for brevity. The set H is called d-Horton if (i) its projection on the first d-1 coordinates is a (d-1)-Horton set in Rd-l (where all sets in R1 are 1-Horton), and (ii) either IHI < 1 or all the sets Hp,r are d-Horton, r = 0, 1, . . . ,p-l, and for every subset I C { 0, 1 , . . . , p-I} of at least two indices, there is a partition I = J U K, J =I= 0 =I= K, such that UrEJ Hp,r lies high above UrEK Hp,r· Here A lies high above B if every hyperplane determined by d points of A lies above B (in the direction of the dth coordinate) and vice versa. Arbitrarily large d-Horton sets can be constructed by induc­ tion: We first construct the (d-1)-dimensional projection, and then we determine the dth coordinates suitably to meet condition (ii). The nonexistence of large holes is proved using an appropriate generaliza­ tion of r-closedness from above and from below. Since large sets generally need not contain k-holes, it is natural to look for other, less special, configurations. Bialostocki, Dierker, and Voxman (BDV91] proved the existence of k-holes modulo q: For every q and for all k > q+ 2, each sufficiently large set X (in terms of q and k) in general position contains a k-point convex independent subset Y such that the number of points of X in the interior of conv(Y) is divisible by q; see Exercise 6. Karolyi, Pach, and T6th [KPT01] obtained a similar result with the weaker condition k > ٩ q + 0(1). 
They also showed that every sufficiently large 1-almost convex set in the plane contains a k-hole, and Valtr [Val01] extended this to k-almost convex sets, where X is k-almost convex if no triangle with vertices at points of X contains more than k points of X inside.

Exercises

1. Prove that an n-point Horton set contains no convex independent subset with more than 4 log_2 n points. [3]

2. Find a configuration of 9 points in the plane in general position with no 5-hole. [3]

3. Prove that every sufficiently large set in general position in R^3 has a 7-hole.

4. Let H be a Horton set and let k ≥ 7. Prove that if Y ⊆ H is a k-point subset in convex position, then |H ∩ conv(Y)| ≥ 2^{⌊k/4⌋}. Thus, not only does H contain no k-holes, but each convex k-gon has even exponentially many points inside. This result is due to Nyklová [Nyk00], who proved exact bounds for Horton sets and observed that the number of points inside each convex k-gon can be somewhat increased by replacing each point of a Horton set by a tiny copy of a small Horton set.

5. Call a set X ⊆ R^2 in general position almost convex if no triangle with vertices at points of X contains more than 1 point of X in its interior. Let X ⊆ R^2 be a finite set in general position such that no triangle with vertices at vertices of conv(X) contains more than 1 point of X. Prove that X is almost convex.

6. (a) Let q ≥ 2 be an integer and let k = mq+2 for an integer m ≥ 1. Prove that every sufficiently large set X ⊆ R^2 in general position contains a k-point convex independent subset Y such that the number of points of X in the interior of conv(Y) is divisible by q. Use Ramsey's theorem for triples.
(b) Extend the result of (a) to all k ≥ q+2.

4 Incidence Problems

In this chapter we study a very natural problem of combinatorial geometry: the maximum possible number of incidences between m points and n lines in the plane.
In addition to its mathematical appeal, this problem and its relatives are significant in the analysis of several basic geometric algorithms. In the proofs we encounter number-theoretic arguments, results about graph drawing, the probabilistic method, forbidden subgraphs, and line arrangements.

4.1 Formulation

Point-line incidences. Consider a set P of m points and a set L of n lines in the plane. What is the maximum possible number of their incidences, i.e., pairs (p, ℓ) such that p ∈ P, ℓ ∈ L, and p lies on ℓ? We denote the number of incidences for specific P and L by I(P, L), and we let I(m, n) be the maximum of I(P, L) over all choices of an m-element P and an n-element L. For example, the following picture illustrates that I(3, 3) ≥ 6, and it is not hard to see that actually I(3, 3) = 6.

A trivial upper bound is I(m, n) ≤ mn, but it can never be attained unless m = 1 or n = 1. In fact, if m has a similar order of magnitude as n, then I(m, n) is asymptotically much smaller than mn. The order of magnitude is known exactly:

4.1.1 Theorem (Szemerédi–Trotter theorem). For all m, n ≥ 1, we have I(m, n) = O(m^{2/3}n^{2/3} + m + n), and this bound is asymptotically tight.

We give two proofs in the sequel, one simpler and one including techniques useful in more general situations. We will mostly consider only the most interesting case m = n. The general case needs no new ideas, only a slightly more complicated calculation.

Of course, the problem of point-line incidences can be generalized in many ways. We can consider incidences between points and hyperplanes in higher dimensions, or between points in the plane and some family of curves, and so on. A particularly interesting case is that of points and unit circles, which is closely related to counting unit distances.

Unit distances and distinct distances. Let U(n) denote the maximum possible number of pairs of points with unit distance in an n-point set in the plane.
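For tiny n the quantity U(n) can be computed by brute force. A minimal sketch (function and variable names are my own), including the 4-point example of two unit equilateral triangles glued along an edge:

```python
from itertools import combinations
from math import dist, sqrt

def unit_distances(points, eps=1e-9):
    """Number of pairs of points at distance exactly 1 (up to eps)."""
    return sum(1 for p, q in combinations(points, 2)
               if abs(dist(p, q) - 1.0) < eps)

# Two unit equilateral triangles glued along an edge: 5 of the
# 6 pairwise distances equal 1, so U(4) >= 5.
rhombus = [(0.0, 0.0), (1.0, 0.0),
           (0.5, sqrt(3) / 2), (0.5, -sqrt(3) / 2)]
```

Here unit_distances(rhombus) is 5; the sixth distance, between the two apexes, is √3.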
For n ≤ 3 we have U(n) = (n choose 2) (all distances can be 1), but already for n = 4 at most 5 of the 6 distances can be 1; i.e., U(4) = 5. We are interested in the asymptotic behavior of the function U(n) for n → ∞.

This can also be reformulated as an incidence problem. Namely, consider an n-point set P and draw a unit circle around each point of P, thereby obtaining a set C of n unit circles. Each pair of points at unit distance contributes two point-circle incidences, and hence U(n) ≤ (1/2) I_{1circ}(n, n), where I_{1circ}(m, n) denotes the maximum possible number of incidences between m points and n unit circles.

Unlike the case of point-line incidences, the correct order of magnitude of U(n) is not known. An upper bound of O(n^{4/3}) can be obtained by modifying proofs of the Szemerédi–Trotter theorem. But the best known lower bound is U(n) ≥ n^{1+c_1/log log n} for some positive constant c_1; this is superlinear in n but grows more slowly than n^{1+ε} for every fixed ε > 0.

A related quantity is the minimum possible number of distinct distances determined by n points in the plane; formally,

g(n) = min {|{dist(x, y) : x, y ∈ P}| : P ⊂ R^2, |P| = n}.

Clearly, g(n) ≥ (n choose 2)/U(n), and so the bound U(n) = O(n^{4/3}) mentioned above gives g(n) = Ω(n^{2/3}). This has been improved several times, and the current best lower bound is approximately Ω(n^{0.863}). The best known upper bound is O(n/√(log n)).

Arrangements of lines. We need to introduce some terminology concerning line arrangements. Consider a finite set L of lines in the plane. They divide the plane into convex subsets of various dimensions, as is indicated in the following picture with 4 lines. The intersections of the lines, indicated by black dots, are called the vertices. By removing all the vertices lying on a line ℓ ∈ L, the line is split into two unbounded rays and several segments, and these parts are the edges. Finally, by deleting all the lines of L, the plane is divided into open convex polygons, called the cells.
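The basic counts for an arrangement of n lines in general position, quoted from Chapter 6 in the next paragraph, can be packaged as a quick sanity check (function name mine; the recurrence in the comment is the standard line-insertion argument):

```python
def arrangement_counts(n):
    """Vertices, edges, and cells of an arrangement of n lines in
    general position: every pair of lines meets in one vertex, every
    line is cut into n edges by the other n-1 lines, and the cell
    count satisfies f(n) = f(n-1) + n, since the n-th line is divided
    into n pieces, each splitting one old cell in two."""
    vertices = n * (n - 1) // 2
    edges = n * n
    cells = vertices + n + 1
    return vertices, edges, cells
```

For the 4-line picture above this gives (6, 16, 11): six vertices, sixteen edges, eleven cells.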
In Chapter 6 we will study arrangements of lines and hyperplanes further, but here we need only this basic terminology and (later) the simple fact that an arrangement of n lines in general position has (n choose 2) vertices, n^2 edges, and (n choose 2) + n + 1 cells. For the time being, the reader can regard this as an exercise, or wait until Chapter 6 for a proof.

Many cells in arrangements. What is the maximum total number of vertices of m distinct cells in an arrangement of n lines in the plane? Let us denote this number by K(m, n). A simple construction shows that the maximum number of incidences I(m, n) is asymptotically bounded from above by K(m, n); more exactly, we have I(m, n) ≤ (1/2) K(m, 2n). To see this, consider a set P of m points and a set L of n lines realizing I(m, n), and replace each line ℓ ∈ L by a pair of lines ℓ′, ℓ″ parallel to ℓ and lying at distance ε from ℓ (so ℓ′ and ℓ″ are 2ε apart). If ε > 0 is sufficiently small, then a point p ∈ P incident to k lines in the original arrangement now lies in a tiny cell with 2k vertices in the modified arrangement.

It turns out that K(m, n) has the same order of magnitude as I(m, n), and the upper bound can be obtained by methods similar to those used for I(m, n). In higher-dimensional problems, even determining the maximum possible complexity of a single cell can be quite challenging. For example, the maximum complexity of a single cell in an arrangement of n hyperplanes is described by the so-called upper bound theorem from the 1970s, which will be discussed in Chapter 5.

Bibliography and remarks. This chapter is partially based on a nice presentation of the discussed topics in the book by Pach and Agarwal [PA95], which we recommend as a source of additional information concerning history, bibliographic references, and various related problems. But we also include some newer results and techniques discovered since the publication of that book.
The following neat problem concerning point-line incidences was posed by Sylvester [Syl93] in 1893: Prove that it is impossible to arrange a finite number of points in the plane so that a line through every two of them passes through a third, unless they all lie on the same line. This problem remained unsolved until 1933, when it was asked again by Erdős and solved shortly afterward by Gallai. The solution shows, in particular, that it is impossible to embed the points of a finite projective plane F into R^2 in such a way that the points of each line of F lie on a straight line in R^2. For example, the well-known drawing of the Fano plane (the projective plane of order 2) has to contain a curved line.

Recently Pinchasi [Pin02] proved the following conjecture of Bezdek, resembling Sylvester's problem: For every finite family of at least 5 unit circles in the plane, every two of them intersecting, there exists an intersection point common to exactly 2 of the circles.

The problems of estimating the maximum number of point-line incidences, the maximum number of unit distances, and the minimum number of distinct distances were raised by Erdős [Erd46]. For point-line incidences, he proved the lower bound I(m, n) = Ω(m^{2/3}n^{2/3} + m + n) (see Section 4.2) and conjectured it to be the right order of magnitude. This was first proved by Szemerédi and Trotter [ST83]. Simpler proofs were found later by Clarkson, Edelsbrunner, Guibas, Sharir, and Welzl [CEG+90], by Székely [Sze97], and by Aronov and Sharir [AS01a]; they are quite different from one another, and we discuss them all in this chapter.

Tóth [Tót01a] proved the analogue of the Szemerédi–Trotter theorem for the complex plane; he used the original Szemerédi–Trotter technique, since none of the simpler proofs seems to work there. A beautiful application of techniques of Clarkson et al. [CEG+90] in geometric measure theory can be found in Wolff [Wol97].
This paper deals with a variation of the Kakeya problem: it shows that any Borel set in the plane containing a circle of every radius has Hausdorff dimension 2.

For unit distances in the plane Erdős [Erd46] established the lower bound U(n) = Ω(n^{1+c/log log n}) (Section 4.2) and again conjectured it to be tight, but the best known upper bound remains O(n^{4/3}). This was first shown by Spencer, Szemerédi, and Trotter [SST84], and it can be re-proved by modifying each of the proofs mentioned above for point-line incidences. Further improvement of the upper bound probably needs different, more "algebraic," methods, which would use the "circularity" in a strong way, not just in the form of simple combinatorial axioms (such as that two points determine at most two unit circles).

For the analogous problem of unit distances among n points in R^3, Erdős [Erd60] proved Ω(n^{4/3} log log n) from below and O(n^{5/3}) from above. The example for the lower bound is the grid {1, 2, ..., ⌊n^{1/3}⌋}^3 appropriately scaled; the bound Ω(n^{4/3}) is entirely straightforward, and the extra log log n factor needs further number-theoretic considerations. The upper bound follows by an argument with forbidden K_{3,3}; similar proofs are shown in Section 4.5. The current best bound is close to O(n^{3/2}); more precisely, it is n^{3/2}·2^{O(α(n)^2)} [CEG+90]. Here the function α(n), to be defined in Section 7.2, grows extremely slowly, more slowly than log n, log log n, log log log n, etc. In dimensions 4 and higher, the number of unit distances can be Ω(n^2) (Exercise 2). Here even the constant at the leading term is known; see [PA95].
Among other results related to the unit-distance problem and concerning point sets with various restrictions, we mention a neat construction of Erdős, Hickerson, and Pach [EHP89] showing that, for every a ∈ (0, 2), there is an n-point set on the 2-dimensional unit sphere with the distance a occurring at least Ω(n log* n) times (the special distance √2 can even occur Ω(n^{4/3}) times), and the annoying (and still unsolved) problem of Erdős and Moser, whether the number of unit distances in an n-point planar set in convex position is always bounded by O(n) (see [PA95] for partial results and references).

For distinct distances in the plane, the best known upper bound, due to Erdős, is O(n/√(log n)). This bound is attained for the √n × √n square grid. After a series of increases of the lower bound (Moser [Mos52], Chung [Chu84], Beck [Bec83], Clarkson et al. [CEG+90], Chung, Szemerédi, and Trotter [CST92], Székely [Sze97], Solymosi and Tóth [ST01]), the current record is Ω(n^{4/(5-1/e)-ε}) for every fixed ε > 0 (the exponent is approximately 0.863) by Tardos [Tar01], who improved a number-theoretic lemma in the Solymosi–Tóth proof. Aronov and Sharir [AS01b] obtained a lower bound of approximately n^{0.526} for distinct distances in R^3.

Another challenging quantity is the number I_circ(m, n) of incidences of m points with n arbitrary circles in the plane. The lower bound for point-line incidences can be converted to an example with m points, n circles, and Ω(m^{2/3}n^{2/3} + m + n) incidences, but in the case of I_circ(m, n), this lower bound is not the best possible for all m and n: Consider an example of an n-point set with t = O(n/√(log n)) distinct distances and draw the t circles with these distances as radii around each point; the resulting tn = o(n^2) circles have Ω(n^2) incidences with the n points.
The current record in the upper bound is due to Aronov and Sharir [AS01a], and for m = n it yields I_circ(n, n) = O(n^{15/11+ε}) = O(n^{1.364}). A little more about their approach is mentioned in the notes to Section 4.5, including an outline of a proof of a weaker bound I_circ(n, n) = O(n^{1.4}). Two other methods for obtaining upper bounds are indicated in Exercises 4.4.2 and 4.6.4.

More generally, one can consider I(P, Γ), the number of incidences between an m-point set P ⊂ R^2 and a family Γ of n planar curves. Pach and Sharir [PS98a] proved by Székely's method that if Γ is a family of curves with k degrees of freedom and multiplicity type s, meaning that for any k points there are at most s curves of Γ passing through all of them and no two curves intersect in more than k points, then I(P, Γ) = O(m^{k/(2k-1)} n^{1-1/(2k-1)} + m + n), with the constant of proportionality depending on k and s. Earlier [PS92], they proved the same bound, under some additional technical assumptions on the family Γ, by the technique of Clarkson et al. [CEG+90]. Most likely this bound is not tight for k ≥ 3. Aronov and Sharir [AS01a] improved the bound slightly for Γ a family of graphs of univariate polynomials of degree at most k. The best known lower bound is mentioned in the notes to Section 4.2 below.

Point-plane incidences. Considering n points on a line in R^3 and m planes containing that line, we see that the number of incidences can be mn without further assumptions on the position of the points and/or planes. Agarwal and Aronov [AA92] proved the upper bound O(m^{3/5}n^{4/5} + m + n) for the number of incidences between m planes and n points in R^3 if no 3 of the points are collinear, slightly improving on a result of Edelsbrunner, Guibas, and Sharir [EGS90]. In dimension d, the maximum number of incidences of n hyperplanes with m vertices of their arrangement is O(m^{2/3}n^{d/3} + n^{d-1}) [AA92], and this is tight for m ≥ n^{d-2} (for smaller m, the trivial O(mn) bound is tight).
The complexity of many cells in an arrangement of lines was first studied by Canham [Can69], who proved K(m, n) = O(m^2 + n), using the fact that two cells can have at most 4 lines incident to both of them (essentially a "forbidden K_{2,5}" argument; see Section 4.5). The tight bound O(m^{2/3}n^{2/3} + m + n) was first achieved by Clarkson et al. [CEG+90]. Among results for the complexity of m cells in other types of arrangements we mention the bound O(m^{2/3}n^{2/3} + nα(n) + n log m) for segments by Aronov, Edelsbrunner, Guibas, and Sharir [AEGS92], O(m^{2/3}n^{2/3}α(n)^{1/3} + n) for unit circles [CEG+90] (improved to O(m^{2/3}n^{2/3} + n) by Agarwal, Aronov, and Sharir [AAS01]), O(m^{3/5}n^{4/5}·2^{O(α(n))} + n) for arbitrary circles [CEG+90] (also improved in [AAS01]; see the notes to Section 4.5), O(m^{2/3}n + n^2) for planes in R^3 by Agarwal and Aronov [AA92] (which is tight), and O(m^{1/2}n^{d/2}(log n)^{(⌊d/2⌋-1)/2}) for hyperplanes in R^d by Aronov, Matoušek, and Sharir [AMS94]. If one counts only facets of m cells in an arrangement of n hyperplanes in R^d, then the tight bound is O(m^{2/3}n^{d/3} + n^{d-1}) [AA92]. A few more references on this topic can be found in Agarwal and Sharir [AS00a].

The number of similar copies of a configuration. The problem of unit distances can be rephrased as follows. Let K denote a set consisting of two points in the plane with unit distance. What is the maximum number of congruent copies of K that can occur in an n-point set in the plane? This reformulation opens the way to various interesting generalizations, where one can vary K, or one can consider homothetic or similar copies of K, and so on. Elekes's survey [Ele01] nicely describes these problems, their relation to the incidence bounds, and other connections. Here we sketch some of the main developments.

Beautiful results were obtained by Laczkovich and Ruzsa [LR97], who investigated the maximum number of similar copies of a given finite configuration K that can be contained in an n-point set in the plane.
Earlier, Elekes and Erdős [EE94] proved that this number is Ω(n^{2-(log n)^{-c}}) for all K, where c > 0 depends on K, and it is Ω(n^2) whenever all the coordinates of the points in K are algebraic numbers. Building on these results, Laczkovich and Ruzsa proved that the maximum number of similar copies of K is Ω(n^2) if and only if the cross-ratio of every 4 points of K is algebraic, where the cross-ratio of points a, b, c, d ∈ R^2 equals ((a-c)(b-d))/((a-d)(b-c)), with a, b, c, d interpreted as complex numbers in this formula.

Their proof makes use of very nice results from the additive theory of numbers, most notably a theorem of Freiman [Fre73] (also see Ruzsa [Ruz94]): If A is a set of n integers such that |A + A| ≤ cn, where A + A = {a + b : a, b ∈ A} and c > 0 is a constant, then A is contained in a d-dimensional generalized arithmetic progression of size at most Cn, with C and d depending on c only. Here a d-dimensional generalized arithmetic progression is a set of integers of the form {z_0 + i_1 q_1 + i_2 q_2 + ··· + i_d q_d : i_1 = 0, 1, ..., n_1, i_2 = 0, 1, ..., n_2, ..., i_d = 0, 1, ..., n_d} for some integers z_0 and q_1, q_2, ..., q_d. It is easy to see that |A + A| ≤ C_d |A| for every d-dimensional generalized arithmetic progression A, and Freiman's theorem is a sort of converse statement: if |A + A| = O(|A|), then A is not too far from a generalized arithmetic progression. (Freiman's theorem has also been used for incidence-related problems by Erdős, Füredi, Pach, and Ruzsa [EFPR93], and Gowers's paper [Gow98] is an impressive application of results of this type in combinatorial number theory.)

Polynomials attaining O(n) values on Cartesian products. Interesting results related to those of Freiman, as well as to incidence problems, were obtained in a series of papers by Elekes and his coworkers (they are described in the already mentioned survey [Ele01]).
Perhaps even more significant than the particular results is the direction of research opened by them, combining algebraic and combinatorial tools. Let us begin with a conjecture of Purdy proved by Elekes and Rónyai [ER00] as a consequence of their theorems. Let P be a set of n distinct points lying on a line u ⊂ R^2, let Q be a set of n distinct points lying on a line v ⊂ R^2, and let Dist(P, Q) = {‖p - q‖ : p ∈ P, q ∈ Q}. If, for example, u and v are parallel and both P and Q are placed with equal spacing along their lines, then |Dist(P, Q)| ≤ 2n. Another such case is P = {(√i, 0) : i = 1, 2, ..., n} and Q = {(0, √j) : j = 1, 2, ..., n}: this time u and v are perpendicular, and again |Dist(P, Q)| ≤ 2n. According to Purdy's conjecture, these are the only possible positions of u and v if the number of distances is linear: For every C > 0 there is an n_0 such that if n ≥ n_0 and |Dist(P, Q)| ≤ Cn, then u and v are parallel or perpendicular.

If we parameterize the line u by a real parameter x, and v by y, and denote the cosine of the angle of u and v by λ, then Purdy's conjecture can be reformulated in algebraic terms as follows: Whenever X, Y ⊂ R are n-point sets such that the polynomial F(x, y) = x^2 + y^2 + 2λxy attains at most Cn distinct values on X × Y, i.e., |{F(x, y) : x ∈ X, y ∈ Y}| ≤ Cn, then necessarily λ = 0 or λ = ±1, provided that n ≥ n_0(C).

Elekes and Rónyai [ER00] characterized all bivariate polynomials F(x, y) that attain only O(n) values on Cartesian products X × Y. For every C, d there exists an n_0 such that if F(x, y) is a bivariate polynomial of degree at most d and X, Y ⊂ R are n-point sets, n ≥ n_0, such that F(x, y) attains at most Cn distinct values on X × Y, then F(x, y) has one of the two special forms f(g(x) + h(y)) or f(g(x)h(y)), where f, g, h are univariate polynomials.
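The dichotomy can be watched numerically. In the sketch below (names mine; exact rational arithmetic avoids floating-point collisions), λ = ±1 puts F(x, y) = x^2 + y^2 + 2λxy in the special form f(g(x) + h(y)), and F then attains only O(n) values on {1, ..., n}^2, while a generic λ such as 1/2 yields far more values:

```python
from fractions import Fraction

def num_values(lam, X, Y):
    """Number of distinct values of F(x, y) = x^2 + y^2 + 2*lam*x*y
    on the Cartesian product X x Y."""
    return len({x * x + y * y + 2 * lam * x * y for x in X for y in Y})

n = 30
X = list(range(1, n + 1))
few = num_values(Fraction(1), X, X)      # F = (x+y)^2: one value per sum x+y
many = num_values(Fraction(1, 2), X, X)  # F = x^2 + xy + y^2: near-quadratic
```

Here few equals 2n - 1, and many grows roughly quadratically in n (with some collisions, in line with the O(n^2/√(log n)) count for x^2 + xy + y^2 mentioned later in the text).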
In fact, we need not consider the whole X × Y; it suffices to assume that F attains at most Cn values on an arbitrary subset of δn^2 pairs from X × Y (with n_0 depending on δ, too). A similar result holds for a bivariate rational function F(x, y), with one more special form to consider, namely F(x, y) = f((g(x) + h(y))/(1 - g(x)h(y))).

We indicate a proof only for the special case of the polynomial F(x, y) = x^2 + y^2 + 2λxy from Purdy's conjecture (following Elekes [Ele99]); the basic idea of the general case is similar, but several more tools are needed, especially from elementary algebraic geometry. So let Z = F(X, Y) be the set of values attained by F on X × Y. For each y_i ∈ Y, put f_i(x) = F(x, y_i), and define the family Γ = {γ_ij : i, j = 1, 2, ..., n, i ≠ j} of planar curves by γ_ij = {(f_i(t), f_j(t)) : t ∈ R} (this is the key trick). Each γ_ij contains at least n/2 points of Z × Z, since among the n points (f_i(x_k), f_j(x_k)), x_k ∈ X, no 3 can coincide, because the f_i are quadratic polynomials. Moreover, a straightforward (although lengthy) calculation using resultants verifies that for λ ∉ {0, ±1}, at most 8 distinct curves γ_ij can pass through any two given distinct points a, b ∈ R^2. Consequently, Γ contains Ω(n^2) distinct curves. Using the bound of Pach and Sharir [PS92], [PS98a] on the number of incidences between points and algebraic curves mentioned above, with Z × Z as the points and the Ω(n^2) distinct curves of Γ as the curves, we obtain that |Z| = Ω(n^{5/4}). So there is even a significant gap: either λ ∈ {0, ±1}, and then F(X, Y) can have only 2n distinct elements for suitable X, Y, or λ ∉ {0, ±1}, and then |F(X, Y)| = Ω(n^{5/4}) for all X, Y. Perhaps this latter bound can be improved to Ω(n^{2-ε}) for every ε > 0 (so there would be an almost-dichotomy: either the number of values of F can be linear, or it has to be always near-quadratic).
On the other hand, it is known that the polynomial x^2 + y^2 + xy attains only O(n^2/√(log n)) distinct values for x, y ranging over {1, 2, ..., n}, and so the number of values need not always be linear or quadratic. It seems likely that in the general case of the Elekes–Rónyai theorem the number of values attained by F should be near-quadratic unless F is of one of the special forms. Further generalizations of the Elekes–Rónyai theorem were obtained by Elekes and Szabó; see [Ele01].

Exercises

1. Let I_{1circ}(m, n) be the maximum number of incidences of m points with n unit circles and let U(n) be the maximum number of unit distances for an n-point set.
(a) Prove that I_{1circ}(2n, 2n) = O(I_{1circ}(n, n)).
(b) We have seen that U(n) ≤ (1/2) I_{1circ}(n, n). Prove that I_{1circ}(n, n) = O(U(n)).

2. Show that an n-point set in R^4 may determine Ω(n^2) unit distances.

3. Prove that if X ⊂ R^d is a set in which every two points have distance 1, then |X| ≤ d+1.

4. What can be said about the maximum possible number of incidences of n lines in R^3 with m points?

5. Use the Szemerédi–Trotter theorem to show that n points in the plane determine at most
(a) O(n^{7/3}) triangles of unit area,
(b) O(n^{7/3}) triangles with a given fixed angle α.
The result in (a) was first proved by Erdős and Purdy [EP71]. As for (b), Pach and Sharir [PS92] proved the better bound O(n^2 log n); also see [PA95].

6. (a) Using the Szemerédi–Trotter theorem, show that the maximum possible number of distinct lines such that each of them contains at least k points of a given m-point set P in the plane is O(m^2/k^3 + m/k).
(b) Prove that such lines have at most O(m^2/k^2 + m) incidences with P.

7. (Many points on a line or many lines)
(a) Let P be an m-point set in the plane and let k ≤ √m be an integer parameter. Prove (using Exercise 6, say) that at most O(m^2/k) pairs of points of P lie on lines containing at least k and at most √m points of P.
(b) Similarly, for K ≥ √m, the number of pairs lying on lines with at least √m and at most K points is O(Km).
(c) Prove the following theorem of Beck [Bec83]: There is a constant c > 0 such that for any n-point P ⊂ R^2, at least cn^2 distinct lines are determined by P or there is a line containing at least cn points of P.
(d) Derive that there exists a constant c > 0 such that for every n-point set P in the plane that does not lie on a single line there exists a point p ∈ P lying on at least cn distinct lines determined by points of P.
Part (d) is a weak form of the Dirac–Motzkin conjecture; the full conjecture, still unsolved, is the same assertion with c = 1/2.

8. (Many distinct radii)
(a) Assume that I_circ(m, n) = O(m^α n^β + m + n) for some constants α < 1 and β < 1, where I_circ(m, n) is the maximum number of incidences of m points with n circles in the plane. In analogy with Exercise 7, derive that there is a constant c > 0 such that for any n-point set P ⊂ R^2, there are at least cn^3 distinct circles containing at least 3 points of P each, or there is a circle or line containing at least cn points of P.
(b) Using (a), prove the following result of Elekes (an answer to a question of Balog): For any n-point set P ⊂ R^2 not lying on a common circle or line, the circles determined by P (i.e., those containing 3 or more points of P) have Ω(n) distinct radii.
(c) Find an example of an n-point set with only O(n) distinct radii.

9. (Sums and products cannot both be few) Let A ⊂ R be a set of n distinct real numbers and let S = A + A = {a + b : a, b ∈ A} and P = A · A = {ab : a, b ∈ A}.
(a) Check that each of the n^2 lines {(x, y) ∈ R^2 : y = a(x - b)}, a, b ∈ A, contains at least n distinct points of S × P.
(b) Conclude using Exercise 6 that |S × P| = Ω(n^{5/2}), and consequently max(|S|, |P|) = Ω(n^{5/4}); i.e., the set of sums and the set of products can never both have almost linear size.
(This is a theorem of Elekes [Ele97] improving previous results on a problem raised by Erdős and Szemerédi.)

10. (a) Find n-point sets in the plane that contain Ω(n^2) similar copies of the vertex set of an equilateral triangle.
(b) Verify that the following set P_m has n = O(m^4) points and contains Ω(n^2) similar copies of the vertex set of a regular pentagon: Identify R^2 with the complex plane C, let ω = e^{2πi/5} denote a primitive 5th root of unity, and put P_m = {i_0 + i_1 ω + i_2 ω^2 + i_3 ω^3 : i_0, i_1, i_2, i_3 ∈ Z, |i_j| ≤ m}.
The example in (b) is from Elekes and Erdős [EE94], and the set P_∞ is called a pentagonal pseudolattice. The following picture shows P_2. [figure]

4.2 Lower Bounds: Incidences and Unit Distances

4.2.1 Proposition (Many point-line incidences). We have I(n, n) = Ω(n^{4/3}), and so the upper bound for the maximum number of incidences of n points and n lines in the plane in the Szemerédi–Trotter theorem is asymptotically optimal.

It is not easy to come up with good constructions "by hand." Small cases do not seem to be helpful for discovering a general pattern. Surprisingly, an asymptotically optimal construction is quite simple.
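The construction in the proof that follows can be verified mechanically. The sketch below (names mine) builds the k × 4k^2 grid and the n = 4k^3 lines y = ax + b, and counts incidences by direct lookup:

```python
def grid_lines_incidences(k):
    """n = 4k^3 points {(i, j): 0 <= i < k, 0 <= j < 4k^2} and n lines
    y = a*x + b with 0 <= a < 2k, 0 <= b < 2k^2.  For 0 <= i < k we
    have a*i + b < 2k^2 + 2k^2 = 4k^2, so every line meets the grid in
    exactly k points."""
    points = {(i, j) for i in range(k) for j in range(4 * k * k)}
    lines = [(a, b) for a in range(2 * k) for b in range(2 * k * k)]
    inc = sum(1 for (a, b) in lines
              for i in range(k) if (i, a * i + b) in points)
    return len(points), len(lines), inc
```

For k = 3 this returns (108, 108, 324): n = 108 points, n lines, and k·n = 4k^4 = 4^{-1/3}·n^{4/3} incidences.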
The appropriate lower bound for I(m, n) with n ≠ m is obtained similarly (Exercise 1).

Proof. For simplicity, we suppose that n = 4k^3 for a natural number k. For the point set P, we choose the k × 4k^2 grid; i.e., we set P = {(i, j) : i = 0, 1, ..., k-1, j = 0, 1, ..., 4k^2-1}. The set L consists of all the lines with equations y = ax + b, where a = 0, 1, ..., 2k-1 and b = 0, 1, ..., 2k^2-1. These are n lines, as it should be. For x ∈ [0, k), we have ax + b < ak + b < 2k^2 + 2k^2 = 4k^2. Therefore, for each i = 0, 1, ..., k-1, each line of L contains a point of P with the x-coordinate equal to i, and so I(P, L) ≥ k·|L| = 4^{-1/3} n^{4/3}. □

Next, we consider unit distances, where the construction is equally simple but the analysis uses considerable number-theoretic tools.

4.2.2 Theorem (Many unit distances). For all n ≥ 2, there exist configurations of n points in the plane determining at least n^{1+c_1/log log n} unit distances, with a positive constant c_1.

A configuration with the asymptotically largest known number of unit distances is a √n × √n regular grid with a suitably chosen step. Here unit distances are related to the number of possible representations of an integer as a sum of two squares. We begin with the following claim:

4.2.3 Lemma. Let p_1 < p_2 < ··· < p_r be primes of the form 4k+1, and put M = p_1 p_2 ··· p_r. Then M can be expressed as a sum of two squares of integers in at least 2^r ways.

Proof. As we know from Theorem 2.3.1, each p_j can be written as a sum of two squares: p_j = a_j^2 + b_j^2. In the sequel, we work with the ring Z[i], the so-called Gaussian integers, consisting of all complex numbers u + iv, where u, v ∈ Z. We use the fact that each element of Z[i] can be uniquely factored into primes. From algebra, we recall that a prime in the ring Z[i] is an element γ ∈ Z[i] such that whenever γ = γ_1 γ_2 with γ_1, γ_2 ∈ Z[i], then |γ_1| = 1 or |γ_2| = 1.
Both existence and uniqueness of prime factorization follow from the fact that Z[i] is a Euclidean ring (see an introductory course on algebra for an explanation of these notions).

Let us put α_j = a_j + i b_j, and let ᾱ_j = a_j - i b_j be the complex conjugate of α_j. We have α_j ᾱ_j = (a_j + i b_j)(a_j - i b_j) = a_j^2 + b_j^2 = p_j. Let us choose an arbitrary subset J ⊆ I = {1, 2, ..., r} and define A_J + iB_J = (∏_{j∈J} α_j)(∏_{j∈I\J} ᾱ_j). Then A_J - iB_J = (∏_{j∈J} ᾱ_j)(∏_{j∈I\J} α_j), and hence M = (A_J + iB_J)(A_J - iB_J) = A_J^2 + B_J^2. This gives one expression of the number M as a sum of two squares.

It remains to prove that for two sets J ≠ J', we have A_J + iB_J ≠ A_{J'} + iB_{J'}. To this end, it suffices to show that all the α_j and ᾱ_j are primes in Z[i]. Then the numbers A_J + iB_J and A_{J'} + iB_{J'} are distinct, since they have distinct prime factorizations. (No α_j or ᾱ_j can be obtained from another one by multiplying it by a unit of the ring Z[i]: the units are only the elements 1, -1, i, and -i.) So suppose that α_j = γ_1 γ_2, γ_1, γ_2 ∈ Z[i]. We have p_j = α_j ᾱ_j = γ_1 γ_2 ᾱ_j = |γ_1|^2 |γ_2|^2. Now, |γ_1|^2 and |γ_2|^2 are both integers, and since p_j is a prime, we get that |γ_1| = 1 or |γ_2| = 1. □

Next, we need to know that the primes of the form 4k+1 are sufficiently dense. First we recall the well-known prime number theorem: if π(n) denotes the number of primes not exceeding n, then

π(n) = (1 + o(1)) n/ln n    as n → ∞.
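Lemma 4.2.3, proved above, is easy to confirm numerically for a small product of primes of the form 4k+1; the helper below (name mine) enumerates all signed representations by brute force:

```python
from math import isqrt

def two_square_reps(M):
    """All ordered pairs (a, b) of integers with a*a + b*b == M."""
    reps = []
    for a in range(-isqrt(M), isqrt(M) + 1):
        r = M - a * a
        b = isqrt(r)
        if b * b == r:
            reps.append((a, b))
            if b:                      # avoid duplicating b == 0
                reps.append((a, -b))
    return reps

M = 5 * 13 * 17   # product of r = 3 primes of the form 4k+1
reps = two_square_reps(M)
```

Here len(reps) is 32 ≥ 2^r = 8; up to signs and order these come from 1105 = 4^2 + 33^2 = 9^2 + 32^2 = 12^2 + 31^2 = 23^2 + 24^2.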
Let d and a be relatively prime natural numbers, and let π_{d,a}(n) be the number of primes of the form a + kd (k = 0, 1, 2, . . .) not exceeding n. We have

π_{d,a}(n) = (1 + o(1)) · (1/φ(d)) · n/ln n,

where φ denotes the Euler function. For d ≥ 2, there are φ(d) residue classes modulo d that can possibly contain primes. The theorem shows that the primes are quite uniformly distributed among these residue classes. The proof of the theorem is not simple, and we omit it, but it is very nice, and we can only recommend to the reader to look it up in a textbook on number theory.

Proof of the lower bound for unit distances (Theorem 4.2.2). Let us suppose that n is a square. For the set P we choose the points of the √n × √n grid with step 1/√M, where M is the product of the first r−1 primes of the form 4k+1, and r is chosen as the largest number such that M ≤ n/4. It is easy to see that each point of the grid participates in at least as many unit distances as there are representations of M as a sum of two squares of nonnegative integers. Since one representation by a sum of two squares of nonnegative integers corresponds to at most 4 representations by a sum of two squares of arbitrary integers (the signs can be chosen in 4 ways), we have at least 2^r·n/16 = n·2^{r−4} unit distances by Lemma 4.2.3.

By the choice of r, we have 4p_1 p_2 · · · p_{r−1} ≤ n < 4p_1 p_2 · · · p_r, and hence 2^r ≤ n and p_r ≥ (n/4)^{1/r}. Further, we obtain, by Theorem 4.2.4, r = π_{4,1}(p_r) ≥ (1/2 − o(1)) p_r/ln p_r ≥ √p_r ≥ n^{1/3r} for sufficiently large n, and thus r^{3r} ≥ n. Taking logarithms, we have 3r log r ≥ log n, and hence r ≥ log n/(3 log r) ≥ log n/(3 log log n). The number of unit distances is at least n·2^{r−4} ≥ n^{1 + c_1/log log n}, as Theorem 4.2.2 claims. Let us remark that for sufficiently large n the constant c_1 can be made as close to 1 as desired. □

Bibliography and remarks. Proposition 4.2.1 is due to Erdős [Erd46]. His example is outlined in Exercise 2 (also see [PA95]); the analysis requires a bit of number theory.
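As an aside, the step-1/√M grid of the proof above is easy to probe numerically: in the scaled grid, two points are at distance exactly 1 precisely when their integer difference vector (a, b) satisfies a^2 + b^2 = M. A small sketch (Python; illustrative only, not part of the original text), with M = 5 corresponding to a single prime of the form 4k+1:

```python
from itertools import combinations

def unit_distances(side, M):
    """Count unit distances in the side x side grid with step 1/sqrt(M):
    points p/sqrt(M), q/sqrt(M) are at distance 1 iff |p - q|^2 = M in integers."""
    pts = [(i, j) for i in range(side) for j in range(side)]
    return sum(1 for p, q in combinations(pts, 2)
               if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 == M)

# 10 x 10 grid (n = 100), M = 5: the difference vectors (+-1, +-2) and
# (+-2, +-1) realize the unit distance.
print(unit_distances(10, 5))  # -> 288
```

Already with one prime the grid has almost 3n unit distances; taking M a product of more primes of the form 4k+1 (with a correspondingly larger grid) is what pushes the count superlinear.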
The simpler example in the text is from Elekes [Ele01]. Its extension provides the best known lower bound for the number of incidences between m points and n ≥ m^{(k−1)/2} curves with k degrees of freedom: For a parameter t ≤ m^{1/k}, let P = {(i, j) : 0 ≤ i < t, 0 ≤ j < m/t}, and let Γ consist of the graphs of the polynomials ∑_{ℓ=0}^{k−1} a_ℓ x^ℓ with a_ℓ = 0, 1, . . . , ⌊m/(k t^{ℓ+1})⌋, ℓ = 0, 1, . . . , k−1.

Theorem 4.2.2 is due to Erdős [Erd46], and the proof uses ingredients well known in number theory. The prime number theorem (and also Theorem 4.2.4) was proved in 1896 by de la Vallée Poussin and independently by Hadamard (see Narkiewicz [Nar00]).

Exercises

1. By extending the example in the text, prove that for all m, n with n ≤ m^2 and m ≤ n^2, we have I(m, n) = Ω(n^{2/3} m^{2/3}).

2. (Another example for incidences) Suppose that n = 4t^6 for an integer t ≥ 1, and let P = {(i, j) : 0 ≤ i, j < √n}. Let S = {(a, b) : a, b = 1, 2, . . . , t, gcd(a, b) = 1}, where gcd(a, b) denotes the greatest common divisor of a and b. For each point p ∈ P, consider the lines passing through p with slope a/b, for all pairs (a, b) ∈ S. Let L be the union of all the lines thus obtained for all points p ∈ P.
(a) Check that |L| ≤ n.
(b) Prove that |S| ≥ ct^2 for a suitable positive constant c > 0, and infer that I(P, L) = Ω(nt^2) = Ω(n^{4/3}).

4.3 Point-Line Incidences via Crossing Numbers

Here we present a very simple proof of the Szemerédi–Trotter theorem based on a result concerning graph drawing. We need the notion of the crossing number of a graph G; this is the minimum possible number of edge crossings in a drawing of G. To make this rigorous, let us first recall a formal definition of a drawing.

An arc is the image of a continuous injective map [0, 1] → R^2.
A drawing of a graph G is a mapping that assigns to each vertex of G a point in the plane (distinct vertices being assigned distinct points) and to each edge of G an arc connecting the corresponding two (images of) vertices and not incident to any other vertex. We do not insist that the drawing be planar, so the arcs are allowed to cross. A crossing is a point common to at least two arcs but distinct from all vertices. In this section we will actually deal only with drawings where each edge is represented by a straight segment.

Let G be a graph (or multigraph). The crossing number of a drawing of G in the plane is the number of crossings in the considered drawing, where a crossing incident to k ≥ 2 edges is counted (k choose 2) times. So a drawing is planar if and only if its crossing number is 0. The crossing number of the graph G is the smallest possible crossing number of a drawing of G; we denote it by cr(G). For example, cr(K_5) = 1.

As is well known, for n ≥ 3, a planar graph with n vertices has at most 3n−6 edges. This can be rephrased as follows: If the number of edges is at least 3n−5, then cr(G) > 0. The following theorem can be viewed as a generalization of this fact.

4.3.1 Theorem (Crossing number theorem). Let G = (V, E) be a simple graph (no multiple edges). Then

cr(G) ≥ (1/64) · |E|^3/|V|^2 − |V|

(the constant 1/64 can be improved by a more careful calculation).

The lower bound in this theorem is asymptotically tight; i.e., there exist graphs with n vertices, m edges, and crossing number O(m^3/n^2); see Exercise 1. The assumption that the graph is simple cannot be omitted.

For a proof of this theorem, we need a simple lemma:

4.3.2 Lemma. The crossing number of any simple graph G = (V, E) is at least |E| − 3|V|.

Proof. If |E| ≥ 3|V| and some drawing of the graph had fewer than |E| − 3|V| crossings, then we could delete one edge from each crossing and obtain a planar graph with more than 3|V| edges.
□

Proof of Theorem 4.3.1. Consider some drawing of a graph G = (V, E) with n vertices, m edges, and crossing number x. We may assume m ≥ 4n, for otherwise, the claimed bound is negative. Let p ∈ (0, 1] be a parameter; later on we set it to a suitable value. We choose a random subset V' ⊆ V by including each vertex v ∈ V into V' independently with probability p. Let G' be the subgraph of G induced by the subset V'. Put n' = |V'|, m' = |E(G')|, and let x' be the crossing number of the graph G' in the drawing "inherited" from the considered drawing of G.

The expectation of n' is E[n'] = np. The probability that a given edge appears in E(G') is p^2, and hence E[m'] = mp^2, and similarly we get E[x'] = xp^4. At the same time, by Lemma 4.3.2 we always have x' ≥ m' − 3n', and so this relation holds for the expectations as well: E[x'] ≥ E[m'] − 3E[n']. So we have xp^4 ≥ mp^2 − 3np. Setting p = 4n/m (which is at most 1, since we assume m ≥ 4n), we calculate that

x ≥ (1/64) · m^3/n^2.

The crossing number theorem is proved. □

Proof of the Szemerédi–Trotter theorem (Theorem 4.1.1). We consider a set P of m points and a set L of n lines in the plane realizing the maximum number of incidences I(m, n). We define a certain topological graph G = (V, E), that is, a graph together with its drawing in the plane. Each point p ∈ P becomes a vertex of G, and two points p, q ∈ P are connected by an edge if they lie on a common line ℓ ∈ L next to one another. So we have a drawing of G where the edges are straight segments. This is illustrated below, with G drawn thick.

If a line ℓ ∈ L contains k ≥ 1 points of P, then it contributes k−1 edges to G, and hence I(m, n) = |E| + n. Since the edges are parts of the n lines, at most (n choose 2) pairs of them may cross: cr(G) ≤ (n choose 2). On the other hand, from the crossing number theorem we get cr(G) ≥ (1/64)·|E|^3/m^2 − m. So (1/64)·|E|^3/m^2 − m ≤ cr(G) ≤ (n choose 2), and a calculation gives |E| = O(n^{2/3} m^{2/3} + m).
This proves the Szemerédi–Trotter theorem. □

The best known upper bound on the number of unit distances, U(n) = O(n^{4/3}), can be proved along similar lines; see Exercise 2.

Bibliography and remarks. The presented proof of the Szemerédi–Trotter theorem is due to Székely [Sze97].

The crossing number theorem was proved by Ajtai, Chvátal, Newborn, and Szemerédi [ACNS82] and independently by Leighton [Lei84]. This result belongs to the theory of geometric graphs, which studies the properties of graphs drawn in the plane (most often with edges drawn as straight segments). A nice introduction to this area is given in Pach and Agarwal [PA95], and a newer survey is Pach [Pac99]. In the rest of this section we mention mainly some of the more recent results.

Pach and Tóth [PT97] improved the constant 1/64 in Theorem 4.3.1 to approximately 0.0296, which is already within a factor of 2.01 of the best known upper bound (obtained by connecting all pairs of points of distance at most d in a regular √n × √n grid, for a suitable d). The improvement is achieved by establishing a better version of Lemma 4.3.2, namely, cr(G) ≥ 5|E| − 25|V| for |E| ≥ 7|V| − 14.

Pach, Spencer, and Tóth [PST00] proved that for graphs with certain forbidden subgraphs, the bound can be improved substantially: For example, if G has n vertices, m edges, and contains no cycle of length 4, then cr(G) = Ω(m^4/n^3) for m ≥ 400n, which is asymptotically tight. Generally, let 𝒢 be a class of graphs that is monotone (closed under adding edges) and such that any n-vertex graph in 𝒢 has at most O(n^{1+α}) edges, for some α ∈ (0, 1). Then cr(G) ≥ c·m^{2+1/α}/n^{1+1/α} for any G ∈ 𝒢 with n vertices and m ≥ Cn log^2 n edges, with suitable constants C, c > 0 depending on 𝒢. The proof applies a generally useful lower bound on the crossing number, which we outline next.
Let bw(G) denote the bisection width of G, i.e., the minimum number of edges connecting V_1 and V_2, over all partitions (V_1, V_2) of V(G) with |V_1|, |V_2| ≥ (1/3)|V(G)|. Leighton [Lei83] proved that cr(G) = Ω(bw(G)^2) − |V(G)| for any graph G of maximum degree bounded by a constant. Pach, Shahrokhi, and Szegedy [PSS96], and independently Sýkora and Vrťo [SV94], extended this to graphs with arbitrary degrees:

bw(G) = O( √(cr(G)) + √( ∑_{v∈V(G)} deg_G(v)^2 ) ),    (4.1)

where deg_G(v) is the degree of v in G. The proof uses the following version, due to Gazit and Miller [GM90], of the well-known Lipton–Tarjan separator theorem for planar graphs: For any planar graph H and any nonnegative weight function w: V(H) → [0, 2/3] with ∑_{v∈V(H)} w(v) = 1, one can delete at most 1.58·√(∑_{v∈V(H)} deg_H(v)^2) edges in such a way that the total weight of the vertices in each component of the resulting graph is at most 2/3. To deduce (4.1), consider a drawing of G with the minimum number of crossings, replace each crossing by a vertex of degree 4, assign weight 0 to these new vertices and weight 1/|V(G)| to the original vertices, and apply the separator theorem (see, e.g., [PA95] for a more detailed account). Djidjev and Vrťo [DV02] have recently strengthened (4.1), replacing bw(G) by the cutwidth of G. To define the cutwidth, we consider an injective mapping f: V(G) → R. Each edge corresponds to a closed interval, and we find the maximum number of these intervals with a common interior point. The cutwidth is the minimum of this quantity over all f.

To derive the result of Pach et al. [PST00] on the crossing number of graphs with forbidden subgraphs mentioned above from (4.1), we consider a graph G ∈ 𝒢 with n vertices and m edges. If cr(G) is small, then the bisection width is small, so G can be cut into two parts of almost equal size by removing not too many edges. For each of these parts, we bisect again, and so on, until parts of some suitable size s (depending on n and m) are reached.
By the assumption on 𝒢, each of the resulting parts has O(s^{1+α}) edges, and so there are O(n s^α) edges within the parts. This number of edges plus the number of edges deleted in the bisections add up to m, and this provides an inequality relating cr(G) to n and m; see [PST00] for the calculations.

The notion of crossing number is a subtle one. Actually, one can give several natural definitions; a study of various notions and of their relations was made by Pach and Tóth [PT00]. Besides counting the crossings, as we did in the definition of cr(G), one can count the number of (unordered) pairs of edges that cross; the resulting notion is called the pairwise crossing number in [PT00], and we denote it by pair-cr(G). We always have pair-cr(G) ≤ cr(G), but since two edges (arcs) are allowed to cross several times, it is not clear whether pair-cr(G) = cr(G) for all graphs G, and currently this seems to be a challenging open problem (see Exercise 4 for a typical false attempt at a proof). A simple argument shows that cr(G) ≤ 2 pair-cr(G)^2 (Exercise 4(c)). A stronger claim, proved in [PT00], is cr(G) ≤ 2 odd-cr(G)^2, where odd-cr(G) is the odd-crossing number of G, counting the number of pairs of edges that cross an odd number of times. An inspiration for their proof is a theorem of Hanani and Tutte claiming that a graph G is planar if and only if odd-cr(G) = 0. In a drawing of G, call an edge e even if there is no edge crossed by e an odd number of times. Pach and Tóth show, by a somewhat complicated proof, that if we consider a drawing of G and let E_0 be the set of the even edges, then there is another drawing of G in which the edges of E_0 are involved in no crossings at all. The inequality cr(G) ≤ 2 odd-cr(G)^2 then follows by an argument similar to that in Exercise 4(c).
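For intuition about these counting notions on a concrete drawing: in a straight-line drawing of K_5 with all vertices in convex position, two edges cross exactly when their four endpoints alternate around the polygon, so this drawing has (5 choose 4) = 5 crossings, far above cr(K_5) = 1. A brute-force check (Python; a sketch, not from the text):

```python
from itertools import combinations

def convex_crossings(n):
    """Crossings in the straight-line drawing of K_n with vertices 0..n-1
    in convex position. Chords {a,b}, {c,d} with distinct endpoints cross
    iff exactly one of c, d lies strictly between a and b."""
    edges = list(combinations(range(n), 2))  # each edge (a, b) has a < b
    cnt = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) == 4 and (a < c < b) != (a < d < b):
            cnt += 1
    return cnt

print(convex_crossings(5))  # -> 5: one crossing per 4-point subset
```

In convex position every 4-point subset contributes exactly one crossing pair, so convex_crossings(n) equals (n choose 4); better placements (e.g., one vertex inside the hull for K_5) realize far fewer crossings.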
Finally, let us remark that if we consider rectilinear drawings (where each edge is drawn as a straight segment), then the resulting rectilinear crossing number can be much larger than any of the crossing numbers considered above: Graphs are known with cr(G) = 4 and arbitrarily large rectilinear crossing numbers (Bienstock and Dean [BD93]).

Exercises

1. Show that for any n and m, 5n ≤ m ≤ (n choose 2), there exist graphs with n vertices, m edges, and crossing number O(m^3/n^2).
2. In a manner similar to the above proof for point-line incidences, prove the bound I_ucirc(n, n) = O(n^{4/3}), where I_ucirc(m, n) denotes the maximum possible number of incidences between m points and n unit circles in the plane (be careful in handling possible multiple edges in the considered topological graph!).
3. Let K(n, m) denote the maximum total number of edges of m distinct cells in an arrangement of n lines in the plane. Prove K(n, m) = O(n^{2/3} m^{2/3} + n + m) using the method of the present section (it may be convenient to classify edges into top and bottom ones and bound each type separately).
4. (a) Prove that in a drawing of G with the smallest possible number of crossings, no two edges cross more than once.
(b) Explain why the result in (a) does not imply that pair-cr(G) = cr(G) (where pair-cr(G) is the minimum number of pairs of crossing edges in a drawing of G).
(c) Prove that if G is a graph with pair-cr(G) = k, then cr(G) ≤ 2k^2.

4.4 Distinct Distances via Crossing Numbers

Here we use the methods from the preceding sections to establish a lower bound on the number of distinct distances determined by an n-point set in the plane. We do not go for the best known bound, whose proof is too complicated for our purposes, but in the notes below we indicate how the improvement is achieved.

4.4.1 Proposition (Distinct distances in R^2).
The minimum number g(n) of distinct distances determined by an n-point set in the plane satisfies g(n) = Ω(n^{4/5}).

Proof. Fix an n-point set P, and let t be the number of distinct distances determined by P. This means that for each point p ∈ P, all the other points are contained in t circles centered at p (the radii correspond to the t distances appearing in P). These tn circles obtained for all the n points of P have n(n−1) incidences with the points of P. The first idea is to bound this number of incidences from above in terms of n and t, in a way similar to the proof of the Szemerédi–Trotter theorem in the preceding section, which yields a lower bound for t.

First we delete all circles with at most 2 points on them (the innermost circle and the second outermost circle in the above picture). We have destroyed at most 2nt incidences, and so still almost n^2 incidences remain (we may assume that t is much smaller than n, for otherwise, there is nothing to prove). Now we define a graph G: The vertices are the points of P, and the edges are the arcs of the circles between the points. This graph has n vertices, almost n^2 edges, and there are at most t^2 n^2 crossings, because every two circles intersect in at most 2 points.

Now if we could apply the crossing number theorem to this graph, we would get that with n vertices and n^2 edges there must be at least Ω(n^6/n^2) = Ω(n^4) crossings, and so t = Ω(n) would follow. This, of course, is too good to be true, and indeed we cannot use the crossing number theorem directly, because our graph may have multiple edges: Two points can be connected by several arcs. A multigraph can have arbitrarily many edges even if it is planar. But if we have a bound on the maximum edge multiplicity, we can still infer a lower bound on the crossing number:

4.4.2 Lemma. Let G = (V, E) be a multigraph with maximum edge multiplicity k. Then

cr(G) = Ω( |E|^3/(k|V|^2) ) − O(k^2 |V|).
We defer the proof to the end of this section.

In the graph G defined above, it appears that the maximum edge multiplicity can be as high as t. If we used Lemma 4.4.2 with k = t in the manner indicated above, we would get only the estimate t = Ω(n^{2/3}). The next idea is to deal with the edges of very high multiplicity separately. Namely, we observe that if a pair {u, v} of points is connected by k arcs, then the centers of these arcs lie on the symmetry axis ℓ_uv of the segment uv. So the line ℓ_uv has at least k incidences with the points of P. But the Szemerédi–Trotter theorem tells us that there cannot be too many distinct lines, each incident to many points of P. Let us make this precise.

By a consequence of the Szemerédi–Trotter theorem stated in Exercise 4.1.6(b), lines containing at least k points of P each have altogether no more than O(n^2/k^2 + n) incidences with P. Let M be the set of pairs {u, v} of vertices of G connected by at least k edges in G, and let Ē be the set of edges (arcs) connecting these pairs. Each edge in Ē connecting the pair {u, v} contributes one incidence of the bisecting line ℓ_uv with a point p ∈ P. On the other hand, one incidence of such p with some ℓ_uv can correspond to at most 2t edges of Ē, because at most t circles are centered at p, and so ℓ_uv intersects at most 2t arcs with center p. So we have |Ē| = O(tn^2/k^2 + tn).

Let us set k as large as possible but so that |Ē| ≤ n^2/2; i.e., k = C√t for a sufficiently large constant C. If we delete all edges of Ē, the remaining graph still has Ω(n^2) edges, but the maximum multiplicity is now below k. We can finally apply Lemma 4.4.2: With n vertices, Ω(n^2) edges, and edge multiplicity at most k = O(√t), we have at least Ω(n^4/√t) crossings. This number must be below t^2 n^2, which yields t = Ω(n^{4/5}) as claimed. □

Proof of Lemma 4.4.2. Consider a fixed drawing of G. We choose a subgraph G' of G by the following random experiment.
In the first stage, we consider each edge of G independently, and we delete it with probability 1 − 1/k. In the second stage, we delete all the remaining multiple edges, and this gives G', which has n vertices and m' edges, and whose inherited drawing has x' crossings.

Consider the probability p_e that a fixed edge e ∈ E remains in G'. Clearly, p_e ≤ 1/k. On the other hand, if e was one of k' ≤ k edges connecting the same pair of vertices, then the probability that e survives the first stage while all the other edges connecting its two vertices are deleted is

(1/k)·(1 − 1/k)^{k'−1} ≥ (1/k)·(1 − 1/k)^{k−1} ≥ 1/(3k)

(since (1 − 1/k)^{k−1} ≥ 1/3). We get E[m'] ≥ |E|/(3k) and E[x'] ≤ x/k^2. Applying the crossing number theorem to the graph G' and taking expectations, we have

E[x'] ≥ (1/64)·E[m'^3]/n^2 − n.

By convexity (Jensen's inequality), we have E[m'^3] ≥ (E[m'])^3 = Ω(|E|^3/k^3). Plugging this plus the bound E[x'] ≤ x/k^2 into the above formula, we get

x/k^2 = Ω(|E|^3/(k^3 n^2)) − n,  i.e.,  x = Ω(|E|^3/(k n^2)) − O(k^2 n),

and the lemma follows. □

Bibliography and remarks. The proof presented above is, with minor modifications, that of Székely [Sze97]. The bound has subsequently been improved by Solymosi and Tóth [ST01] to Ω(n^{6/7}) and then by Tardos [Tar01] to (approximately) Ω(n^{0.863}).

The weakest point of the proof shown above seems to be the lower bound on the number of incidences between the points of P and the "rich" bisectors ℓ_uv ({u, v} being the pairs connected by k or more edges). We counted as if each such incidence could be responsible for as many as t edges. While this does not look geometrically very plausible, it seems hard to exclude such a possibility directly. Instead, Solymosi and Tóth prove a better lower bound for the number of incidences of P with the rich bisectors differently; they show that if there are many edges with multiplicity at least k, then each of Ω(n) suitable points is incident to many (namely Ω(n/t^{3/2}) in their proof) rich bisectors. We outline this argument.

We need to modify the definition of the graph G.
The new definition uses an auxiliary parameter r (a constant, with r = 3 in the original Solymosi–Tóth proof). First, we note that by the theorem of Beck mentioned in Exercise 4.1.7, there is a subset P' ⊆ P of Ω(n) points such that each p ∈ P' sees the other points of P in Ω(n) distinct directions. For each p ∈ P', we draw the t circles around p. If several points of P are visible from p in the same direction, we temporarily delete all of them but one. Then, on each circle, we group the remaining points into groups of r consecutive points, and on each circle we delete the at most r−1 leftover points fitting in no such group. This still leaves Ω(n) r-point groups on the circles centered at p.

Next, we consider one such r-point group and all the (r choose 2) bisecting lines determined by pairs of its points. If at least one of these bisectors, call it ℓ_uv, contains fewer than k points of P (k being a suitable threshold), then we add the arc connecting u and v as an edge of G. (This is not quite in agreement with our definition of a graph drawing, since the arc may pass through other vertices of G, but it is easy to check that if we permit arcs through vertices and modify the definition of the crossing number appropriately, Lemma 4.4.2 remains valid.) The groups where every bisector contains at least k points of P (call them rich groups) do not contribute any edges of G.

Setting k = αn^2/t^2 for a small constant α, we argue by Lemma 4.4.2 that G has at most βn^2 edges for a small β = β(α) > 0. It follows that most of the r-point groups must be rich, and so there is a subset P'' ⊆ P' of Ω(n) points, each of them possessing Ω(n) rich groups on its circles.

It remains to prove that each point p ∈ P'' is incident to many rich bisectors. We divide the plane around p into angular sectors such that each sector contains about 3rt points (of the Ω(n) points in the rich groups belonging to p).
Each sector contains at least t complete rich groups (since there are t circles, and the sector's boundaries cut through at most 2t groups), and we claim that it has to contain many rich bisectors. This leads to the following number-theoretic problem: We have tr distinct real numbers (corresponding to the angles of the points in the sector as seen from p), arranged into t groups by r numbers, and we form all the (r choose 2) arithmetic averages of the pairs in each group (corresponding to the rich bisectors of the group). This yields t·(r choose 2) real numbers, and we want to know how many of them must be distinct.

It is not hard to see that for r = 3, there must be at least Ω(t^{1/3}) distinct numbers, because the three averages (a+b)/2, (a+c)/2, and (b+c)/2 determine the numbers a, b, c uniquely. It follows, still for r = 3, that each of the Ω(n/t) sectors has Ω(t^{1/3}) distinct bisectors, and so each point in P'' has Ω(n/t^{2/3}) incidences with the rich lines. Applying Szemerédi–Trotter now yields the Solymosi–Tóth bound of t = Ω(n^{6/7}) distinct distances.

Tardos [Tar01] considered the number-theoretic problem above for larger r, and he proved, by a complicated argument, that for r large but fixed, the number of distinct pairwise averages is Ω(t^{1/e − ε}), with ε → 0 as r → ∞. Plugging this into the proof leads to the current best bound mentioned above. An example by Ruzsa shows that the number of distinct pairwise averages can be O(√t) for any fixed r, and it follows that the Solymosi–Tóth method as is cannot provide a bound better than Ω(n^{8/9}). But surely one can look forward to the further continuation of the adventure of distinct distances.

Exercises

1. Let I_circ(m, n) be the maximum number of incidences between m points and n arbitrary circles in the plane. Fill in the details of the following approach to bounding I_circ(n, n). Let K be a set of n circles, C the set of their centers, and P a set of n points.
(a) First, assume that the centers of the circles are mutually distinct; i.e., |C| = |K|. Proceed as in the proof in the text: Remove circles with at most 2 incidences, and let the others define a drawing of a multigraph G with vertex set P and arcs of the circles as edges. Handle the edges with multiplicity k or larger via Szemerédi–Trotter, using the incidences of the bisectors with the set C, and those with multiplicity below k by Lemma 4.4.2. Balance k suitably. What bound is obtained for the total number of incidences?
(b) Extend the argument to handle concentric circles too.

2. This exercise provides another bound for I_circ(n, n), the maximum possible number of incidences between n arbitrary circles and n points in the plane. Let K be the set of circles and P the set of points. Let P_i be the points with at least d_i = 2^i and fewer than 2^{i+1} incidences; we will argue for each P_i separately. Define the multigraph G on P_i as usual, with arcs of circles of K connecting neighboring points of P_i (the circles with at most 2 incidences with P_i are deleted). Let E be the set of edges of G. For a point u ∈ P_i, let N(u) be the set of its neighboring points, and for a v ∈ N(u), let μ(u, v) be the number of edges connecting u and v. For an edge e, define its partner edge as the edge following after e clockwise around its circle.
(a) Show that for each u ∈ P_i, |{v ∈ N(u) : μ(u, v) ≥ 4√d_i}| ≤ √d_i/2.
(b) Let E_h ⊆ E be the edges of multiplicity at least 4√d_i. Argue that for at least half of the edges in E_h, their partner edges do not belong to E_h, and hence |E \ E_h| = Ω(|E|).
(c) Delete the edges of E_h from the graph, and apply Lemma 4.4.2 to bound |E \ E_h|. What overall bound does all this give for I_circ(n, n)?

A similar proof appears in Pach and Sharir [PS98a] (for the more general case of curves mentioned in the notes to Section 4.1).
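As a numerical aside to Proposition 4.4.1 (a sketch; not part of the original text): counting distinct distances in a small √n × √n grid shows the count sitting comfortably above the n^{4/5} threshold of the proposition, consistent with the grid's true order of about n/√(log n).

```python
def grid_distinct_distances(side):
    """Number of distinct distances determined by a side x side grid
    (n = side^2 points). Squared distances suffice: difference vectors
    can be taken with 0 <= a, b < side, and x -> x^2 is injective on them."""
    sq = {a * a + b * b for a in range(side) for b in range(side)}
    sq.discard(0)  # (0, 0) is not a distance between two distinct points
    return len(sq)

for side in (5, 10, 20):
    n = side * side
    print(n, grid_distinct_distances(side), round(n ** 0.8))
```

For instance, the 5 × 5 grid (n = 25) determines 14 distinct distances, while n^{4/5} ≈ 13.1; of course the proposition only guarantees c·n^{4/5} for some constant c > 0.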
4.5 Point-Line Incidences via Cuttings

Here we explain another proof of the upper bound I(n, n) = O(n^{4/3}) for point-line incidences. The technique is quite different. It leads to an efficient algorithm and seems more generally applicable than the one with the crossing number theorem.

4.5.1 Lemma (A worse but useful bound).

I(m, n) = O(n√m + m),    (4.2)
I(m, n) = O(m√n + n).    (4.3)

Proof. There are at most (n choose 2) crossing pairs of lines in total. On the other hand, a point p_i ∈ P with d_i incidences "consumes" (d_i choose 2) crossing pairs (their intersections all lie at p_i). Therefore, ∑_{i=1}^m (d_i choose 2) ≤ (n choose 2). We want to bound ∑_{i=1}^m d_i from above. Since points with no incidences can be deleted from P in advance, we may assume d_i ≥ 1 for all i, and then we have (d_i choose 2) ≥ (d_i − 1)^2/2. By the Cauchy–Schwarz inequality,

∑_{i=1}^m (d_i − 1) ≤ √( m · ∑_{i=1}^m (d_i − 1)^2 ) ≤ √(m·n^2) = n√m,

and hence ∑ d_i = O(n√m + m). The other inequality in the lemma can be proved similarly by looking at pairs of points on each line. Alternatively, the equality I(n, m) = I(m, n) for all m, n follows using the geometric duality introduced in Section 5.1. □

Forbidden subgraph arguments. For integers r, s ≥ 1, let K_{r,s} denote the complete bipartite graph on r + s vertices (the picture shows K_{3,4}).
The above proof can be expressed using graphs with forbidden K_{2,2} as a subgraph and thus put into the context of extremal graph theory. A typical question in extremal graph theory is the maximum possible number of edges of a (simple) graph on n vertices that does not contain a given forbidden subgraph, such as K_{2,2}. Here the subgraph is understood in a noninduced sense: For example, the complete graph K_4 does contain K_{2,2} as a subgraph. More generally, one can forbid all subgraphs from a finite or infinite family F of graphs, or consider "containment" relations other than being a subgraph, such as "being a minor."

If the forbidden subgraph H is not bipartite, then, for example, the complete bipartite graph K_{n,n} has 2n vertices, n^2 edges, and no subgraph isomorphic to H. This shows that forbidding a nonbipartite H does not reduce the maximum number of edges too significantly, and the order of magnitude remains quadratic. On the other hand, forbidding K_{r,s} with some fixed r and s decreases the exponent of n, and forbidden bipartite subgraphs are the key to many estimates in incidence problems and elsewhere.

4.5.2 Theorem (Kővári–Sós–Turán theorem). Let r ≤ s be fixed natural numbers. Then any graph on n vertices containing no K_{r,s} as a subgraph has at most O(n^{2−1/r}) edges. If G is a bipartite graph with color classes of sizes m and n containing no subgraph K_{r,s} with the r vertices in the class of size m and the s vertices in the class of size n, then

|E(G)| = O( min(m n^{1−1/r} + n, m^{1−1/s} n + m) ).

(In both parts, the constant of proportionality depends on r and s.)

Note that in the second part of the theorem, the situation is not symmetric: By forbidding the "reverse" placement of K_{r,s}, we get a different bound in general. The upper bound in the theorem is suspected to be tight, but a matching lower bound is known only for some special values of r and s, in particular for r ≤ 3 (and all s ≥ r).

To see the relevance of forbidden K_{2,2} to the point-line incidences, we consider a set P of points and a set L of lines, and we define a bipartite graph with vertex set P ∪ L and with edges corresponding to incidences. An edge {p, ℓ} means that the point p lies on the line ℓ. So the number of incidences equals the number of edges. Since two points determine a line, this graph contains no K_{2,2} as a subgraph: Its presence would mean that two distinct lines both contain the same two distinct points.
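Both the double counting in the proof of Lemma 4.5.1 and the K_{2,2}-freeness of the incidence graph can be checked mechanically on any concrete configuration. The sketch below (Python; not part of the original text) reuses the grid-and-lines example of Section 4.2 with k = 3 and verifies ∑_i (d_i choose 2) ≤ (n choose 2), the resulting bound ∑_i d_i ≤ n√m + m, and the absence of K_{2,2}.

```python
from math import comb
from itertools import combinations

# Points: k x 4k^2 grid; lines y = a*x + b, as in Section 4.2 (here k = 3).
k = 3
pts = [(i, j) for i in range(k) for j in range(4 * k * k)]
lines = [(a, b) for a in range(2 * k) for b in range(2 * k * k)]
m, n = len(pts), len(lines)

# lines_through[p] = set of lines incident to p; d[p] = its incidence count
lines_through = {p: {(a, b) for (a, b) in lines if p[1] == a * p[0] + b}
                 for p in pts}
d = {p: len(ls) for p, ls in lines_through.items()}

# Double counting from the proof of Lemma 4.5.1:
assert sum(comb(dp, 2) for dp in d.values()) <= comb(n, 2)
assert sum(d.values()) <= n * m ** 0.5 + m   # i.e., I <= n*sqrt(m) + m

# No K_{2,2}: two distinct points share at most one common line.
assert all(len(lines_through[p] & lines_through[q]) <= 1
           for p, q in combinations(pts, 2))
print(m, n, sum(d.values()))
```

The K_{2,2} check is of course automatic here (two points determine a line), but running it on the explicit incidence sets mirrors exactly the bipartite-graph formulation in the text.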
The Kővári–Sós–Turán theorem thus immediately implies Lemma 4.5.1, and the above proof of this lemma is the usual proof of that theorem, for the special case r = s = 2, rephrased in terms of points and lines.

As was noted above, for arbitrary bipartite graphs with forbidden K_{2,2}, not necessarily being incidence graphs of points and lines in the plane, the bound in the Kővári–Sós–Turán theorem cannot be improved. So, in order to do better for point-line incidences, one has to use some more geometry than just the excluded K_{2,2}. In fact, this was one of the motivations of the problem of point-line incidences: In a finite projective plane of order q, we have n = q^2 + q + 1 points, n lines, and (q+1)n ≈ n^{3/2} incidences, and so the Szemerédi–Trotter theorem strongly distinguishes the Euclidean plane from finite projective planes in a combinatorial sense.

Proof of the Szemerédi–Trotter theorem (Theorem 4.1.1) for m = n. The bound from Lemma 4.5.1 is weaker than the tight Szemerédi–Trotter bound, but it is tight if n^2 ≤ m or m^2 ≤ n. The idea of the present proof is to convert the "balanced" case (n points and n lines) into a collection of "unbalanced" subproblems, for which Lemma 4.5.1 is optimal. We apply the following important result:

4.5.3 Lemma (Cutting lemma). Let L be a set of n lines in the plane, and let r be a parameter, 1 ≤ r ≤ n. Then the plane can be subdivided into t generalized triangles (this means intersections of three half-planes) Δ_1, Δ_2, . . . , Δ_t in such a way that the interior of each Δ_i is intersected by at most n/r lines of L, and we have t ≤ Cr^2 for a certain constant C independent of n and r.

Such a collection Δ_1, . . . , Δ_t may look like this, for example (the lines of L are not shown).

In order to express ourselves more economically, we introduce the following terminology. A cutting is a subdivision of the plane into finitely many generalized triangles.
(We sometimes omit the adjective "generalized" in the sequel.) A given cutting is a $\frac{n}{r}$-cutting for a set L of n lines if the interior of each triangle of the cutting is intersected by at most $\frac{n}{r}$ lines of L.

Proofs of the cutting lemma will be discussed later, and now we continue the proof of the Szemerédi–Trotter theorem.

4.5 Point-Line Incidences via Cuttings 67

Let P be the considered n-point set, L the set of n lines, and $I(P, L)$ the number of their incidences. We fix a "magic" value $r = n^{1/3}$, and we divide the plane into $t = O(r^2) = O(n^{2/3})$ generalized triangles $\Delta_1, \ldots, \Delta_t$ so that the interior of each $\Delta_i$ is intersected by at most $n/r = n^{2/3}$ lines of L, according to the cutting lemma. Let $P_i$ denote the points of P lying inside $\Delta_i$ or on its boundary but not at the vertices of $\Delta_i$, and let $L_i$ be the set of lines of L intersecting the interior of $\Delta_i$. The pairs $(L_i, P_i)$ define the desired "unbalanced" subproblems. We have $|L_i| \le n^{2/3}$, and while the sizes of the $P_i$ may vary, the average $|P_i|$ is about $n/t = O(n^{1/3})$, which is about the square root of the size of $L_i$.

We have to be a little careful, since not all incidences of L and P are necessarily included among the incidences of some $L_i$ and $P_i$. One exceptional case is a point $p \in P$ not appearing in any of the $P_i$. Such a point has to be a vertex of some $\Delta_i$, and so there are no more than 3t such exceptional points. These points have at most $I(n, 3t)$ incidences with the lines of L. Another exceptional case is a line of L containing a side of some $\Delta_i$ but not intersecting its interior and therefore not included in $L_i$, although it may be incident with some points on the boundary of $\Delta_i$. There are at most 3t such exceptional lines, and they have at most $I(3t, n)$ incidences with the points of P. So we have

$$I(L, P) \le I(n, 3t) + I(3t, n) + \sum_{i=1}^{t} I(L_i, P_i).$$

By Lemma 4.5.1, $I(n, 3t)$ and $I(3t, n)$ are both bounded by $O(t\sqrt{n} + n) = O(n^{7/6}) \ll n^{4/3}$, and it remains to estimate the main term.
We have $|L_i| \le n^{2/3}$ and $\sum_{i=1}^{t} |P_i| \le 2n$, since each point of P goes into at most two $P_i$. Using the bound (4.2) for each $I(L_i, P_i)$ we obtain

$$\sum_{i=1}^{t} I(L_i, P_i) \le \sum_{i=1}^{t} I\bigl(n^{2/3}, |P_i|\bigr) = \sum_{i=1}^{t} O\bigl(|P_i|\,n^{1/3} + n^{2/3}\bigr) = O\Bigl(n^{1/3}\sum_i |P_i| + t\,n^{2/3}\Bigr) = O\bigl(n^{4/3}\bigr).$$

This finally shows that $I(n, n) = O(n^{4/3})$.

Bibliography and remarks. The bound in Lemma 4.5.1 using the excluded $K_{2,2}$ is due to Erdős [Erd46]. Determining the maximum possible number of edges in a $K_{r,s}$-free bipartite graph with given sizes of the two color classes is known as the Zarankiewicz problem. The general upper bound given in the text was shown by Kővári, Sós, and Turán [KST54]. For a long time, matching lower bounds (constructions) were known only for $r \le 3$ and all $s \ge r$ (in these cases, even the constant in the leading term is known exactly; see Füredi [Für96] for some of these results and references). In particular, $K_{2,2}$-free graphs on n vertices with $\Omega(n^{3/2})$ edges are provided by incidence graphs of finite projective planes, and $K_{3,3}$-free graphs on n vertices with $\Omega(n^{5/3})$ edges were obtained by Brown [Bro66]. His construction is the "distance-k graph" in the 3-dimensional affine space over finite fields of order $q \equiv -1 \pmod{4}$, for a suitable $k = k(q)$. Recently, Kollár, Rónyai, and Szabó [KRS96] constructed asymptotically optimal $K_{r,s}$-free graphs for s very large compared to r, namely $s \ge r! + 1$, using results of algebraic geometry. This was slightly improved by Alon, Rónyai, and Szabó [ARS99] to $s \ge (r-1)! + 1$. They also obtained an alternative to Brown's construction of $K_{3,3}$-free graphs with a better constant, and asymptotically tight lower bounds for some "asymmetric" instances of the Zarankiewicz problem, where one wants a $K_{r,s}$-free bipartite graph with color classes of sizes n and m (with the "orientation" of the forbidden $K_{r,s}$ fixed).
The approach to incidence problems using cuttings first appeared in a seminal paper of Clarkson, Edelsbrunner, Guibas, Sharir, and Welzl [CEG+90], based on probabilistic methods developed in computational geometry ([Cla87], [HW87], and [CS89] are among the most influential papers in this development). Clarkson et al. did not use cuttings in our sense but certain "cuttings on the average": If $n_i$ is the number of lines intersecting the interior of $\Delta_i$, then their cuttings have $t = O(r^2)$ triangles and satisfy $\sum_{i=1}^{t} n_i^c \le C(c)\,r^2(n/r)^c$, where $c \ge 1$ is an integer constant, which can be selected as needed for each particular application, and $C(c)$ is a constant depending on c. This means that the cth degree average of the $n_i$ is, up to a constant, the same as if all the $n_i$ were $O(n/r)$. Technically, these "cuttings on the average" can replace the optimal $\frac{n}{r}$-cuttings in most applications. Clarkson et al. [CEG+90] proved numerous results on various incidence problems and many-cells problems by this method; see the notes to Section 4.1.

The cutting lemma was first proved by Chazelle and Friedman [CF90] and, independently, by Matoušek [Mat90a]. The former proof yields an optimal cutting lemma in every fixed dimension and will be discussed in Section 6.5, while the latter proof applies only to the planar cutting lemma and is presented in Section 4.7. A third, substantially different, proof was discovered by Chazelle [Cha93a].

Yet another proof of the Szemerédi–Trotter theorem was recently found by Aronov and Sharir (it is a simplification of the techniques in [AS01a]). It is based on the case $d = 2$ of the following partition theorem of Matoušek [Mat92]: For every n-point set $X \subset \mathbf{R}^d$, d fixed, and every r, $1 \le r \le n$, there exists a partition $X = X_1 \cup X_2 \cup \cdots \cup X_t$, $t = O(r)$, such that $\frac{n}{r} \le |X_i| \le \frac{2n}{r}$ for all i and no hyperplane h crosses more than $O(r^{1-1/d})$ of the sets $X_i$.
Here h crossing $X_i$ means that $X_i$ is not completely contained in one of the open half-spaces defined by h or in h itself.¹ This result is proved using the d-dimensional cutting lemma (see Section 4.6). The bound $O(r^{1-1/d})$ is asymptotically the best possible in general.

To use this result for bounding $I(L, P)$, where L is a set of n lines and P a set of n points in the plane, we let $X = \mathcal{D}_0(L)$ be the set of points dual to the lines of L (see Section 5.1). We apply the partition theorem to X with $r = n^{2/3}$ and dualize back, which yields a partition $L = L_1 \cup L_2 \cup \cdots \cup L_t$, $t = O(r)$, with $|L_i| = O(n/r) = O(n^{1/3})$. The crossing condition implies that no point p is incident to lines from more than $O(\sqrt{r})$ of the $L_i$, not counting the pathological $L_i$ where p is common to all the lines of $L_i$.

We consider the incidences of a point $p \in P$ with the lines of $L_i$. The indices i for which p lies on at most one line of $L_i$ contribute at most $O(\sqrt{r})$ incidences, which gives a total of $O(n\sqrt{r}) = O(n^{4/3})$ for all $p \in P$. On the other hand, if p lies on at least two lines of $L_i$, then it is a vertex of the arrangement of $L_i$. As is easy to show, the number of incidences of k lines with the vertices of their arrangement is $O(k^2)$ (Exercise 6.1.6), and so the total contribution from these cases is $O(\sum_i |L_i|^2) = O(n^2/r) = O(n^{4/3})$. This proves the balanced case of Szemerédi–Trotter, and the unbalanced case works in the same way with an appropriate choice of r. Unlike the previous proofs, this one does not directly apply with pseudolines instead of lines.

Improved point-circle incidences. A similar method also proves that $I_{\rm circ}(n, n) = O(n^{1.4})$ (see Exercise 4.4.2 for another proof). Circles are dualized to points and points to surfaces of cones in $\mathbf{R}^3$, and the appropriate partition theorem holds as well, with no surface of a cone crossing more than $O(r^{2/3})$ of the subsets $X_i$.
Aronov and Sharir [AS01a] improved the bound to $I_{\rm circ}(m, n) = O(m^{2/3}n^{2/3} + m)$ for large m, namely $m \ge n^{(5-3\varepsilon)/(4-9\varepsilon)}$, and to $I_{\rm circ}(m, n) = O(m^{(6+3\varepsilon)/11}n^{(9-\varepsilon)/11} + n)$ for the smaller m (here, as usual, $\varepsilon > 0$ can be chosen arbitrarily small, influencing the constants of proportionality). Agarwal et al. [AAS01] obtained almost the same bounds for the maximum complexity of m cells in an arrangement of n circles.

A key ingredient in the Aronov–Sharir proof is a result on the following question of independent interest: Given a family of n curves in the plane, into how many pieces ("pseudosegments") must we cut them, in the worst case, so that no two pieces intersect more than once? This problem, first studied by Tamaki and Tokuyama [TT98], will be briefly discussed in the notes to Section 11.1. For the curves being circles, Aronov and Sharir [AS01a] obtained the estimate $O(n^{3/2+\varepsilon})$, improving on several previous results. To bound the number $I(P, C)$ of incidences of an m-point set P and a set C of n circles, we delete the circles containing at most 2 points, we cut the remaining circles into $O(n^{3/2+\varepsilon})$ pieces as above, and we define a graph with vertex set P and with edges being the circle arcs that connect consecutive points along the pieces. The number of edges is at least $I(P, C) - O(n^{3/2+\varepsilon})$. The crossing number theorem applies (since the graph is simple) and yields $I(P, C) = O(m^{2/3}n^{2/3} + n^{3/2+\varepsilon})$, which is tight for m about $n^{5/4}$ and larger. For smaller m, Aronov and Sharir use the method with partition in the dual space outlined above to divide the original problem into smaller subproblems, and for these they use the bound just mentioned.

¹ A slightly stronger result is proved in [Mat92]: For every $X_i$ we can choose a relatively open simplex $\sigma_i \supseteq X_i$, and no h crosses more than $O(r^{1-1/d})$ of the $\sigma_i$.

Exercises

1. Let $I_{\rm 1circ}(m, n)$ be the maximum number of incidences between m points and n unit circles in the plane.
Prove that $I_{\rm 1circ}(m, n) = O(m\sqrt{n} + n)$ by the method of Lemma 4.5.1.

2. Let $I_{\rm circ}(m, n)$ be the maximum possible number of incidences between m points and n arbitrary circles in the plane. Prove that $I_{\rm circ}(m, n) = O(n\sqrt{m} + n)$ and $I_{\rm circ}(m, n) = O(mn^{2/3} + n)$.

4.6 A Weaker Cutting Lemma

Here we prove a version of the cutting lemma (Lemma 4.5.3) with a slightly worse bound on the number of the triangles $\Delta_i$. The proof uses the probabilistic method, and the argument is very simple and general. We will improve on it later and obtain tight bounds in a more general setting in Section 6.5. In Section 4.7 below we give another, self-contained, elementary geometric proof of the planar cutting lemma.

Here we are going to prove that every set of n lines admits a $\frac{n}{r}$-cutting consisting of $O(r^2 \log^2 n)$ triangles. But first let us see why at least $\Omega(r^2)$ triangles are necessary.

A lower bound. Consider n lines in general position. Their arrangement has, as we know, $\binom{n}{2} + n + 1 \ge n^2/2$ cells. On the other hand, considering a triangle $\Delta_i$ whose interior is intersected by $k \le \frac{n}{r}$ lines ($k \ge 1$), we see that $\Delta_i$ is divided into at most $\binom{k}{2} + k + 1 \le 2k^2$ cells. Since each cell of the arrangement has to show up in the interior of at least one triangle $\Delta_i$, the number of triangles is at least $n^2/4k^2 = \Omega(r^2)$. Hence the cutting lemma is asymptotically optimal for $r \to \infty$.

Proof of a weaker version of the cutting lemma (Lemma 4.5.3). We select a random sample $S \subseteq L$ of the given lines. We make s independent random draws, drawing a random line from L each time. These are draws with replacement: One line can be selected several times, and so S may have fewer than s lines.

Consider the arrangement of S. Partition the cells that are not (generalized) triangles by adding some suitable diagonals, as illustrated below:

[figure: an arrangement of the lines of S with added diagonals; the lines of L \ S are drawn thin]

This creates (generalized) triangles $\Delta_1, \Delta_2, \ldots, \Delta_t$ with $t = O(s^2)$ (since we have a drawing of a planar graph with $\binom{s}{2} + 1$ vertices; also see Exercise 2).

4.6.1 Lemma. For $s = 6r \ln n$, the following holds with a positive probability: The $\Delta_i$ form a $\frac{n}{r}$-cutting for L; that is, the interior of no $\Delta_i$ is intersected by more than $\frac{n}{r}$ lines of L.

This implies the promised weaker version of the cutting lemma: Since the probability of the sample S being good is positive, there exists at least one good S that yields the desired collection of triangles.

Proof of Lemma 4.6.1. Let us say that a triangle T is dangerous if its interior is intersected by at least $k = \frac{n}{r}$ lines of L. We fix some arbitrary dangerous triangle T. What is the probability that no line of the sample S intersects the interior of T? We select a random line s times. The probability that we never hit one of the k lines intersecting the interior of T is at most $(1 - k/n)^s$. Using the well-known inequality $1 + x \le e^x$, we can bound this probability by $e^{-ks/n} = e^{-6\ln n} = n^{-6}$.

Call a triangle T interesting (for L) if it can appear in a triangulation for some sample $S \subseteq L$. Any interesting triangle has vertices at some three vertices of the arrangement of L, and hence there are fewer than $n^6$ interesting triangles.² Therefore, with a positive probability, a random sample S intersects the interiors of all the dangerous interesting triangles simultaneously. In particular, none of the triangles $\Delta_i$ appearing in the triangulation of such a sample S can be dangerous. This proves Lemma 4.6.1.
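The two numerical steps in this proof, the estimate $(1-k/n)^s \le e^{-ks/n}$ and the evaluation $e^{-ks/n} = n^{-6}$ for $k = n/r$ and $s = 6r\ln n$, can be sanity-checked numerically. This is only an illustrative sketch with parameters of my own choosing:

```python
import math

# Check that (1 - k/n)^s <= e^{-ks/n} and that e^{-ks/n} = n^{-6}
# when k = n/r and s = 6 r ln n, for a few illustrative values of n and r.
for n in [100, 1000, 10000]:
    for r in [2, 5, 10]:
        k = n / r
        s = 6 * r * math.log(n)
        miss_prob = (1 - k / n) ** s            # prob. that s draws all miss
        bound = math.exp(-k * s / n)            # the bound from 1 + x <= e^x
        assert miss_prob <= bound
        assert abs(bound - n ** -6) <= 1e-9 * n ** -6   # bound equals n^{-6}
```

Since there are fewer than $n^6$ interesting triangles, the union bound then gives a failure probability strictly below 1, which is exactly what the lemma needs.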
□

More sophisticated probabilistic reasoning shows that it is sufficient to choose $s = \mathrm{const} \cdot r \log r$ in Lemma 4.6.1, instead of $\mathrm{const} \cdot r \log n$, and still, with a positive probability, no interesting dangerous triangle is missed by S (see Section 6.5 and also Exercise 10.3.4). This improvement is important for r small, say constant: It shows that the number of triangles in a $\frac{n}{r}$-cutting can be bounded independent of n. To prove the asymptotically tight bound $O(r^2)$ by a random sampling argument seems considerably more complicated, and we will discuss this in Section 6.5.

Bibliography and remarks. The ideas in the above proof of the weaker cutting lemma can be traced back at least to early papers of Clarkson (such as [Cla87]) on random sampling in computational geometry. The presented proof was constructed ex post facto for didactic purposes; the cutting lemma was first proved, as far as I know, in a stronger form (with $\log r$ instead of $\log n$).

Exercises

1. Calculate the exact expected size of S, a sample drawn from n elements by s independent random draws with replacement.

2. Calculate the number of (generalized) triangles arising by triangulating an arrangement of n lines in the plane in general position. (First, specify how exactly the unbounded cells are triangulated.)

3. (A cutting lemma for circles) Consider a set K of n circles in the plane. Select a sample $S \subseteq K$ by s independent random draws with replacement. Consider the arrangement of S, and construct its vertical decomposition; that is, from each vertex extend vertical segments upwards and downwards until they hit a circle of S (or all the way to infinity). Similarly extend vertical segments from the leftmost and rightmost points of each circle.
(a) Show that this partitions the plane into $O(s^2)$ "circular trapezoids" (shapes bounded by at most two vertical segments and at most two circular arcs).
(b) Show that for $s = Cr \ln n$ with a sufficiently large constant C, there is a positive probability that the sample S intersects all the dangerous interesting circular trapezoids, where "dangerous" and "interesting" are defined analogously to the definitions in the proof of the weaker version of the cutting lemma.

4. Using Exercises 3 and 4.5.1, show that the number of unit distances determined by n points in the plane is $O(n^{4/3}\log^{2/3} n)$.

5. Using Exercises 3 and 4.5.2, show that $I_{\rm circ}(n, n) = O(n^{1.4}\log^c n)$ (for some constant c), where $I_{\rm circ}(m, n)$ is the maximum possible number of incidences between m points and n arbitrary circles in the plane.

² The unbounded triangles have only 1 or 2 vertices, but they are completely determined by their two unbounded rays, and so their number is at most $n^2$.

4.7 The Cutting Lemma: A Tight Bound

Here we prove the cutting lemma in full strength. The proof is simple and elementary, but it does not seem to generalize to higher-dimensional situations. For simplicity, we suppose that the given set L of n lines is in general position. (If not, perturb it slightly to get general position, construct the $\frac{n}{r}$-cutting, and perturb back; this gives a $\frac{n}{r}$-cutting for the original collection of lines; we omit the details.) First we need some definitions and observations concerning levels.

Levels and their simplifications. Let L be a fixed finite set of lines in the plane; we assume that no line of L is vertical. The level of a point $x \in \mathbf{R}^2$ is defined as the number of lines of L lying strictly below x. We note that the level of all points of an (open) cell of the arrangement of L is the same, and similarly for a (relatively open) edge. On the other hand, the level of an edge can differ from the levels of its endpoints, for example. We define the level k of the arrangement of L, where $0 \le k < n$, as the set $E_k$ of all edges of the arrangement of L having level exactly k.
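The definition of the level is easy to make concrete. The following sketch (the line set and the function name are my own) represents a nonvertical line $y = ax + b$ by the pair $(a, b)$ and counts the lines strictly below a query point; note that a line through the point itself does not count:

```python
from fractions import Fraction

def level(point, lines):
    # level of a point = number of lines y = a*x + b lying strictly below it
    x, y = point
    return sum(1 for (a, b) in lines if a * x + b < y)

# Three sample lines: y = 0, y = x, and y = -x + 2.
L = [(Fraction(0), Fraction(0)),
     (Fraction(1), Fraction(0)),
     (Fraction(-1), Fraction(2))]

assert level((Fraction(1), Fraction(2)), L) == 3   # above all three lines
assert level((Fraction(0), Fraction(-1)), L) == 0  # below all three lines
assert level((Fraction(1), Fraction(1)), L) == 1   # a vertex: two lines pass
                                                   # through it, only y = 0 is
                                                   # strictly below
```

The last case illustrates the remark in the text that the level of an edge can differ from the levels of its endpoints: lines through the point contribute nothing.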
These edges plus their endpoints form an x-monotone polygonal line, where x-monotone means that each vertical line intersects it at exactly one point. It is easy to see that the level k makes a turn at each endpoint of its edges.

[figure: the level k is drawn thick; the thin segments are pieces of lines of L and do not belong to the level k]

Let $e_0, e_1, \ldots, e_t$ be the edges of $E_k$ numbered from left to right; $e_0$ and $e_t$ are the unbounded rays. Let us fix a point $p_i$ in the interior of each $e_i$. For an integer parameter $q \ge 2$, we define the q-simplification of the level k as the monotone polygonal line containing the left part of $e_0$ up to the point $p_0$, the segments $p_0p_q, p_qp_{2q}, \ldots, p_{\lfloor (t-1)/q\rfloor q}p_t$, and the part of $e_t$ to the right of $p_t$. Thus, the q-simplification has at most $\frac{t}{q} + 2$ edges.

[figure: an illustration of the q-simplification for t = 9, q = 4]

(We could have defined the q-simplification by connecting every qth vertex of the level, but the present way makes some future considerations a little simpler.)

4.7.1 Lemma.
(i) The portion $\Pi$ of the level k (considered as a polygonal line) between the points $p_j$ and $p_{j+q}$ is intersected by at most $q+1$ lines of L.
(ii) The segment $p_jp_{j+q}$ is intersected by at most $q+1$ lines of L.
(iii) The q-simplification of the level k is contained in the strip between the levels $k - \lceil q/2\rceil$ and $k + \lceil q/2\rceil$.

Proof. Part (i) is obvious: Each line of L intersecting $\Pi$ contains one of the edges $e_j, e_{j+1}, \ldots, e_{j+q}$. As for (ii), $\Pi$ is connected, and hence all lines intersecting its convex hull must intersect $\Pi$ itself as well. The segment $p_jp_{j+q}$ is contained in $\mathrm{conv}(\Pi)$.

Concerning (iii), imagine walking along some segment $p_jp_{j+q}$ of the q-simplification. We start at an endpoint, which has level k. Our current level may change only as we cross lines of L. Moreover, having traversed the whole segment, we must be back to level k.
Thus, to get from level k to level $k+i$ and back to k, we need to cross at least 2i lines on the way. From this and (ii), $2i \le q+1$, and hence $i \le \lfloor (q+1)/2\rfloor = \lceil q/2\rceil$. □

Proof of the cutting lemma for lines in general position. Let r be the given parameter. If $r = \Omega(n)$, then it suffices to produce a 0-cutting of size $O(n^2)$ by simply triangulating the arrangement of L. Hence we may assume that r is much smaller than n.

Set $q = \lceil n/10r\rceil$. Divide the levels $E_0, E_1, \ldots, E_{n-1}$ into q groups: The ith group contains all $E_j$ with j congruent to i modulo q ($i = 0, 1, \ldots, q-1$). Since the total number of edges in the arrangement is $n^2$, there is an i such that the ith group contains at most $n^2/q$ edges. We fix one such i; from now on, we consider only the levels $i, q+i, 2q+i, \ldots$, and we construct the desired $\frac{n}{r}$-cutting from them.

Let $P_j$ be the q-simplification of the level $jq+i$. If $E_{jq+i}$ has $m_j$ edges, then $P_j$ has at most $m_j/q + 3$ edges, and the total number of edges of the $P_j$, $j = 0, 1, \ldots, \lfloor (n-1)/q\rfloor$, can be estimated by $n^2/q^2 + 3(n/q + 1) = O(n^2/q^2)$.

We note that the polygonal chains $P_j$ never intersect properly: If they did, a vertex of some $P_j$, which has level $qj+i$, would be above $P_{j+1}$, and this is ruled out by Lemma 4.7.1(iii). We form the vertical decomposition for the $P_j$; that is, we extend vertical segments from each vertex of $P_j$ upwards and downwards until they hit $P_{j-1}$ and $P_{j+1}$:

[figure: the vertical decomposition of the chains, with vertical segments extended from the vertices of $P_j$]

This subdivides the plane into $O(n^2/q^2) = O(r^2)$ trapezoids. We claim that each such trapezoid is intersected by at most $\frac{n}{r}$ lines of L. We look at a trapezoid in the strip between $P_j$ and $P_{j+1}$. By Lemma 4.7.1(iii), it lies between the levels $qj + i - \lceil q/2\rceil$ and $q(j+1) + i + \lceil q/2\rceil$, and therefore, each of its vertical sides is intersected by no more than 3q lines. The bottom side is a part of an edge of $P_j$, and consequently, it is intersected by no more than $q+1$ lines; similarly for the top side.
Hence the number of lines intersecting the considered trapezoid is certainly at most $10q \le \frac{n}{r}$. (A more careful analysis shows that one trapezoid is in fact intersected by at most $2q + O(1)$ lines; see Exercise 1.) Finally, a $\frac{n}{r}$-cutting can be obtained by subdividing each trapezoid into two triangles by a diagonal. But let us remark that for applications of $\frac{n}{r}$-cuttings, trapezoids are usually as good as triangles. □

Bibliography and remarks. The basic ideas of the presented proof are from [Mat90a], and the presentation basically follows [Mat98]. The latter paper provides some estimates for the number of triangles or trapezoids in a $\frac{n}{r}$-cutting, as $r \to \infty$: For example, at least $2.54(1-o(1))r^2$ trapezoids are sometimes necessary, and $8(1+o(1))r^2$ trapezoids always suffice. The notion of levels and their simplifications, as well as Lemma 4.7.1, are due to Edelsbrunner and Welzl [EW86].

Exercises

1. (a) Verify that each trapezoid arising in the described construction is intersected by at most $2.5q + O(1)$ lines. Setting q appropriately, show that the plane can be subdivided into $12.5r^2 + O(r)$ trapezoids, each intersected by at most $\frac{n}{r}$ lines, assuming $1 \ll r \ll n$.
(b) Improve the bounds from (a) to $2q + O(1)$ and $8r^2 + O(r)$, respectively.

5 Convex Polytopes

Convex polytopes are convex hulls of finite point sets in $\mathbf{R}^d$. They constitute the most important class of convex sets, with an enormous number of applications and connections.

Three-dimensional convex polytopes, especially the regular ones, have been fascinating people since antiquity. Their investigation was one of the main sources of the theory of planar graphs, and thanks to this well-developed theory they are quite well understood. But convex polytopes in dimension 4 and higher are considerably more challenging, and a surprisingly deep theory, mainly of algebraic nature, was developed in attempts to understand their structure.
A strong motivation for the study of convex polytopes comes from practically significant areas such as combinatorial optimization, linear programming, and computational geometry. Let us look at a simple example illustrating how polytopes can be associated with combinatorial objects. The 3-dimensional polytope in the picture

[figure: the permutahedron, with visible vertices labeled by permutations such as 2341, 1342, 3421, 2134]

is called the permutahedron. Although it is 3-dimensional, it is most naturally defined as a subset of $\mathbf{R}^4$, namely, the convex hull of the 24 vectors obtained by permuting the coordinates of the vector (1, 2, 3, 4) in all possible ways. In the picture, the (visible) vertices are labeled by the corresponding permutations. Similarly, the d-dimensional permutahedron is the convex hull of the $(d+1)!$ vectors in $\mathbf{R}^{d+1}$ arising by permuting the coordinates of $(1, 2, \ldots, d+1)$. One can observe that the edges of the polytope connect exactly the pairs of permutations differing by a transposition of two adjacent numbers, and a closer examination reveals other connections between the structure of the permutahedron and properties of permutations.

There are many other, more sophisticated, examples of convex polytopes assigned to combinatorial and geometric objects such as graphs, partially ordered sets, classes of metric spaces, or triangulations of a given point set. In many cases, such convex polytopes are a key tool for proving hard theorems about the original objects or for obtaining efficient algorithms. Two impressive examples are discussed in Chapter 12, and several others are scattered in other chapters.

The present chapter should convey some initial working knowledge of convex polytopes for a nonpolytopist. It is just a small sample of an extensive theory. A more comprehensive modern introduction is the book by Ziegler [Zie94].
5.1 Geometric Duality

First we discuss geometric duality, a simple technical tool indispensable in the study of convex polytopes and handy in many other situations. We begin with a simple motivating question: How can we visualize the set of all lines intersecting a convex pentagon as in the picture?

[figure: a convex pentagon $a_1a_2\ldots a_5$]

A suitable way is provided by line-point duality.

5.1.1 Definition (Duality transform). The (geometric) duality transform is a mapping denoted by $\mathcal{D}_0$. To a point $a \in \mathbf{R}^d \setminus \{0\}$ it assigns the hyperplane

$$\mathcal{D}_0(a) = \{x \in \mathbf{R}^d : \langle a, x\rangle = 1\},$$

and to a hyperplane h not passing through the origin, which can be uniquely written in the form $h = \{x \in \mathbf{R}^d : \langle a, x\rangle = 1\}$, it assigns the point $\mathcal{D}_0(h) = a \in \mathbf{R}^d \setminus \{0\}$.

Here is the geometric meaning of the duality transform. If a is a point at distance $\delta$ from 0, then $\mathcal{D}_0(a)$ is the hyperplane perpendicular to the line 0a and intersecting that line at distance $\frac{1}{\delta}$ from 0, in the direction from 0 towards a.

[figure: the point a at distance $\delta$ from 0 and the hyperplane $\mathcal{D}_0(a)$ at distance $\frac{1}{\delta}$]

A nice interpretation of duality is obtained by working in $\mathbf{R}^{d+1}$ and identifying the "primal" $\mathbf{R}^d$ with the hyperplane $\pi = \{x \in \mathbf{R}^{d+1} : x_{d+1} = 1\}$ and the "dual" $\mathbf{R}^d$ with the hyperplane $\rho = \{x \in \mathbf{R}^{d+1} : x_{d+1} = -1\}$. The hyperplane dual to a point $a \in \pi$ is produced as follows: We construct the hyperplane in $\mathbf{R}^{d+1}$ perpendicular to 0a and containing 0, and we intersect it with $\rho$.

[figure: an illustration of this construction for d = 2]

In this way, the duality $\mathcal{D}_0$ can be naturally extended to k-flats in $\mathbf{R}^d$, whose duals are (d−k−1)-flats. Namely, given a k-flat $f \subseteq \pi$, we consider the (k+1)-flat F through 0 and f, we construct the orthogonal complement of F, and we intersect it with $\rho$, obtaining $\mathcal{D}_0(f)$.

Let us consider the pentagon drawn above and place it so that the origin lies in its interior. Let $v_i = \mathcal{D}_0(\ell_i)$, where $\ell_i$ is the line containing the side $a_ia_{i+1}$. Then the points dual to the lines intersecting the pentagon $a_1a_2\ldots a_5$ fill exactly the exterior of the convex pentagon $v_1v_2\ldots v_5$.
This follows easily from the properties of duality listed below (of course, there is nothing special about a pentagon here). Thus, the considered set of lines can be nicely described in the dual plane. A similar passage from lines to points or back is useful in many geometric or computational problems.

Properties of the duality transform. Let p be a point of $\mathbf{R}^d$ distinct from the origin and let h be a hyperplane in $\mathbf{R}^d$ not containing the origin. Let $h^-$ stand for the closed half-space bounded by h and containing the origin, while $h^+$ denotes the other closed half-space bounded by h. That is, if $h = \{x \in \mathbf{R}^d : \langle a, x\rangle = 1\}$, then $h^- = \{x \in \mathbf{R}^d : \langle a, x\rangle \le 1\}$.

5.1.2 Lemma (Duality preserves incidences).
(i) $p \in h$ if and only if $\mathcal{D}_0(h) \in \mathcal{D}_0(p)$.
(ii) $p \in h^-$ if and only if $\mathcal{D}_0(h) \in \mathcal{D}_0(p)^-$.

Proof. (i) Let $h = \{x \in \mathbf{R}^d : \langle a, x\rangle = 1\}$. Then $p \in h$ means $\langle a, p\rangle = 1$. Now, $\mathcal{D}_0(h)$ is the point a, and $\mathcal{D}_0(p)$ is the hyperplane $\{y \in \mathbf{R}^d : \langle y, p\rangle = 1\}$, and hence $\mathcal{D}_0(h) = a \in \mathcal{D}_0(p)$ also means just $\langle a, p\rangle = 1$. Part (ii) is proved similarly. □

5.1.3 Definition (Dual set). For a set $X \subseteq \mathbf{R}^d$, we define the set dual to X, denoted by $X^*$, as follows:

$$X^* = \{y \in \mathbf{R}^d : \langle x, y\rangle \le 1 \text{ for all } x \in X\}.$$

Another common name used for this duality is polarity; the dual set would then be called the polar set. Sometimes it is denoted by $X^\circ$.

Geometrically, $X^*$ is the intersection of all half-spaces of the form $\mathcal{D}_0(x)^-$ with $x \in X$. Or in other words, $X^*$ consists of the origin plus all points y such that $X \subseteq \mathcal{D}_0(y)^-$. For example, if X is the pentagon $a_1a_2\ldots a_5$ drawn above, then $X^*$ is the pentagon $v_1v_2\ldots v_5$.

For any set X, the set $X^*$ is obviously closed and convex and contains the origin. Using the separation theorem (Theorem 1.2.4), it is easily shown that for any set $X \subseteq \mathbf{R}^d$, the set $(X^*)^*$ is the closure of $\mathrm{conv}(X \cup \{0\})$. In particular, for a closed convex set containing the origin we have $(X^*)^* = X$ (Exercise 3).
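The incidence-preserving property of Lemma 5.1.2(i) boils down to the symmetry of the inner product: $\langle a, p\rangle = \langle p, a\rangle$. A minimal sketch in the plane (the function names and the sample points are my own):

```python
from fractions import Fraction

def dot(u, v):
    # standard inner product in the plane
    return u[0] * v[0] + u[1] * v[1]

def on_dual_hyperplane(a, x):
    # x lies on D0(a) = {x : <a, x> = 1}
    return dot(a, x) == 1

# Take h = D0(a); then D0(h) = a, and D0(p) is the line {y : <p, y> = 1}.
a = (Fraction(1, 2), Fraction(1, 3))
p = (Fraction(2), Fraction(0))        # <a, p> = 1, so p lies on h = D0(a)

assert on_dual_hyperplane(a, p)       # p in h
assert on_dual_hyperplane(p, a)       # D0(h) = a lies in D0(p): same equation,
                                      # by symmetry of the inner product
```

Both assertions test the single equation $\langle a, p\rangle = 1$, which is exactly the content of the proof of part (i).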
For a hyperplane h, the dual set $h^*$ is different from the point $\mathcal{D}_0(h)$.¹

For readers familiar with the duality of planar graphs, let us remark that it is closely related to the geometric duality applied to convex polytopes in $\mathbf{R}^3$. For example, the next drawing illustrates a planar graph and its dual graph (dashed):

[figure: a planar graph and its dual graph, drawn dashed]

Later we will see that these are graphs of the 3-dimensional cube and of the regular octahedron, which are polytopes dual to each other in the sense defined above. A similar relation holds for all 3-dimensional polytopes and their graphs.

¹ In the literature, however, the "star" notation is sometimes also used for the dual point or hyperplane, so for a point p, the hyperplane $\mathcal{D}_0(p)$ would be denoted by $p^*$, and similarly, $h^*$ may stand for $\mathcal{D}_0(h)$.

Other variants of duality. The duality transform $\mathcal{D}_0$ defined above is just one of a class of geometric transforms with similar properties. For some purposes, other such transforms (dualities) are more convenient. A particularly important duality, denoted by $\mathcal{D}$, corresponds to moving the origin to the "minus infinity" of the $x_d$-axis (the $x_d$-axis is considered vertical). A formal definition is as follows.

5.1.4 Definition (Another duality). A nonvertical hyperplane h can be uniquely written in the form $h = \{x \in \mathbf{R}^d : x_d = a_1x_1 + \cdots + a_{d-1}x_{d-1} - a_d\}$. We set $\mathcal{D}(h) = (a_1, \ldots, a_{d-1}, a_d)$. Conversely, the point $a = (a_1, \ldots, a_{d-1}, a_d)$ maps back to h.

The property (i) of Lemma 5.1.2 holds for this $\mathcal{D}$, and an analogue of (ii) is:

(ii') A point p lies above a hyperplane h if and only if the point $\mathcal{D}(h)$ lies above the hyperplane $\mathcal{D}(p)$.

Exercises

1. Let $C = \{x \in \mathbf{R}^d : |x_1| + \cdots + |x_d| \le 1\}$. Show that $C^*$ is the d-dimensional cube $\{x \in \mathbf{R}^d : \max_i |x_i| \le 1\}$. Picture both bodies for $d = 3$.

2. Prove the assertion made in the text about the lines intersecting a convex pentagon.

3.
Show that for any $X \subseteq \mathbf{R}^d$, $(X^*)^*$ equals the closure of $\mathrm{conv}(X \cup \{0\})$, where $X^*$ stands for the dual set to X.

4. Let $C \subseteq \mathbf{R}^d$ be a convex set. Prove that C is bounded if and only if 0 lies in the interior of $C^*$.

5. Show that $C^* = C$ if and only if C is the unit ball centered at the origin.

6. (a) Let $C = \mathrm{conv}(X) \subseteq \mathbf{R}^d$. Prove that $C^* = \bigcap_{x \in X} \mathcal{D}_0(x)^-$.
(b) Show that if $C = \bigcap_{h \in H} h^-$, where H is a collection of hyperplanes not passing through 0, and if C is bounded, then $C^* = \mathrm{conv}\{\mathcal{D}_0(h) : h \in H\}$.
(c) What is the right analogue of (b) if C is unbounded?

7. What is the dual set $h^*$ for a hyperplane h, and what about $(h^*)^*$?

8. Verify the geometric interpretation of the duality $\mathcal{D}_0$ outlined in the text (using the embeddings of $\mathbf{R}^d$ into $\mathbf{R}^{d+1}$).

9. (a) Let s be a segment in the plane. Describe the set of all points dual to the lines intersecting s.
(b) Consider $n \ge 3$ segments in the plane such that none of them contains 0 but they all lie on lines passing through 0. Show that if every 3 among such segments can be intersected by a single line, then all the segments can be simultaneously intersected by a line.
(c) Show that the assumption in (b) that the extensions of the segments pass through 0 is essential: For each $n \ge 1$, construct $n+1$ pairwise disjoint segments in the plane that cannot be simultaneously intersected by a line but every n of them can (such an example was first found by Hadwiger and Debrunner).

5.2 H-Polytopes and V-Polytopes

A convex polytope in the plane is a convex polygon. Famous examples of convex polytopes in $\mathbf{R}^3$ are the Platonic solids: the regular tetrahedron, the cube, the regular octahedron, the regular dodecahedron, and the regular icosahedron. A convex polytope in $\mathbf{R}^3$ is a convex set bounded by finitely many convex polygons. Such a set can be regarded as a convex hull of a finite point set, or as an intersection of finitely many half-spaces. We thus define two types of convex polytopes, based on these two views.
5.2.1 Definition (H-polytope and V-polytope). An H-polyhedron is an intersection of finitely many closed half-spaces in some R^d. An H-polytope is a bounded H-polyhedron.

A V-polytope is the convex hull of a finite point set in R^d.

A basic theorem about convex polytopes claims that from the mathematical point of view, H-polytopes and V-polytopes are equivalent.

5.2.2 Theorem. Each V-polytope is an H-polytope. Each H-polytope is a V-polytope.

This is one of the theorems that may look "obvious" and whose proof needs no particularly clever idea but does require some work. In the present case, we do not intend to avoid it. Actually, we have quite a neat proof in store, but we postpone it to the end of this section.

Although H-polytopes and V-polytopes are mathematically equivalent, there is an enormous difference between them from the computational point of view. That is, it matters a lot whether a convex polytope is given to us as a convex hull of a finite set or as an intersection of half-spaces. For example, given a set of n points specifying a V-polytope, how do we find its representation as an H-polytope? It is not hard to come up with some algorithm, but the problem is to find an efficient algorithm that would allow one to handle large real-world problems. This algorithmic question is not yet satisfactorily solved. Moreover, in some cases the number of required half-spaces may be astronomically large compared to the number n of points, as we will see later in this chapter.

As another illustration of the computational difference between V-polytopes and H-polytopes, we consider the maximization of a given linear function over a given polytope. For V-polytopes it is a trivial problem, since it suffices to substitute all points of V into the given linear function and select the maximum of the resulting values.
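This vertex-scanning observation is easy to make concrete. The following is a minimal sketch (the function name `maximize_linear` and the example are mine, not the text's): given a V-polytope by its vertex list, a linear function attains its maximum at a vertex, so one pass over the vertices suffices.

```python
from itertools import product

def maximize_linear(c, vertices):
    """Maximize x -> <c, x> over conv(vertices).

    The maximum of a linear function over a polytope is attained at a
    vertex, so it suffices to scan the vertex list."""
    best = max(vertices, key=lambda v: sum(ci * vi for ci, vi in zip(c, v)))
    return best, sum(ci * vi for ci, vi in zip(c, best))

# The 3-dimensional cube [-1,1]^3 as a V-polytope: its 2^3 vertices.
cube = list(product((-1, 1), repeat=3))
v, val = maximize_linear((2, -1, 3), cube)
assert v == (1, -1, 1) and val == 6
```

For an H-polytope the same task is a linear program, which is exactly the computational asymmetry the text describes.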
But maximizing a linear function over the intersection of a collection of half-spaces is the basic problem of linear programming, and it is certainly nontrivial.

Terminology. The usual terminology does not distinguish V-polytopes and H-polytopes. A convex polytope means a point set in R^d that is a V-polytope (and thus also an H-polytope). An arbitrary, possibly unbounded, H-polyhedron is called a convex polyhedron. All polytopes and polyhedra considered in this chapter are convex, and so the adjective "convex" is often omitted.

The dimension of a convex polyhedron P is the dimension of its affine hull. It is the smallest dimension of a Euclidean space containing a congruent copy of P.

Basic examples. One of the easiest classes of polytopes is that of cubes. The d-dimensional cube as a point set is the Cartesian product [−1, 1]^d.

(pictures of the cube for d = 1, 2, 3)

As a V-polytope, the d-dimensional cube is the convex hull of the set {−1, 1}^d (2^d points), and as an H-polytope, it can be described by the inequalities −1 ≤ x_i ≤ 1, i = 1, 2, ..., d, i.e., by 2d half-spaces. We note that it is also the unit ball of the maximum norm ‖x‖_∞ = max_i |x_i|.

Another important example is the class of crosspolytopes (or generalized octahedra). The d-dimensional crosspolytope is the convex hull of the "coordinate cross," i.e., conv{e₁, −e₁, e₂, −e₂, ..., e_d, −e_d}, where e₁, ..., e_d are the vectors of the standard orthonormal basis.

(pictures of the crosspolytope for d = 1, 2)

It is also the unit ball of the ℓ₁-norm ‖x‖₁ = Σ_{i=1}^d |x_i|. As an H-polytope, it can be expressed by the 2^d half-spaces of the form ⟨a, x⟩ ≤ 1, where a runs through all vectors in {−1, 1}^d.

The polytopes with the smallest possible number of vertices (for a given dimension) are called simplices.

5.2.3 Definition (Simplex). A simplex is the convex hull of an affinely independent point set in some R^d.
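The two descriptions of the crosspolytope agree because max over the sign vectors a of ⟨a, x⟩ equals Σ|x_i| (choose a_i = sign(x_i)). A small exact-arithmetic check of this equivalence (helper names are mine):

```python
import random
from fractions import Fraction
from itertools import product

def in_cross_H(x):
    # H-description: 2^d half-spaces  <a, x> <= 1,  a in {-1, 1}^d
    return all(sum(a * xi for a, xi in zip(avec, x)) <= 1
               for avec in product((-1, 1), repeat=len(x)))

def in_cross_l1(x):
    # Equivalent description as the unit ball of the l1-norm.
    return sum(abs(xi) for xi in x) <= 1

random.seed(1)
for _ in range(500):
    d = random.randint(1, 4)
    # rational coordinates keep all comparisons exact
    x = [Fraction(random.randint(-20, 20), 10) for _ in range(d)]
    assert in_cross_H(x) == in_cross_l1(x)
```

Note the asymmetry already visible here: the ℓ₁-ball needs 2^d inequalities but only 2d vertices, while the cube needs only 2d inequalities but has 2^d vertices.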
A d-dimensional simplex in R^d can also be represented as an intersection of d+1 half-spaces, as is not difficult to check.

A regular d-dimensional simplex is the convex hull of d+1 points with all pairs of points having equal distances.

(pictures of regular simplices for d = 0, 1, 2, 3)

Unlike cubes and crosspolytopes, d-dimensional regular simplices do not have a very nice coordinate representation in R^d. The simplest and most useful representation lives one dimension higher: The convex hull of the d+1 vectors e₁, ..., e_{d+1} of the standard orthonormal basis in R^{d+1} is a d-dimensional regular simplex with side length √2.

(picture: the triangle conv{e₁, e₂, e₃} in R³, with the vertex (1, 0, 0) marked)

Proof of Theorem 5.2.2 (equivalence of H-polytopes and V-polytopes). We first show that any H-polytope is also a V-polytope. We proceed by induction on d. The case d = 1 being trivial, we suppose that d ≥ 2. So let Γ be a finite collection of closed half-spaces in R^d such that P = ⋂Γ is nonempty and bounded. For each γ ∈ Γ, let F_γ = P ∩ ∂γ be the intersection of P with the bounding hyperplane of γ. Each nonempty F_γ is an H-polytope of dimension at most d−1 (why?), and so it is the convex hull of a finite set V_γ ⊂ F_γ by the inductive hypothesis.

We claim that P = conv(V), where V = ⋃_{γ∈Γ} V_γ. Let x ∈ P and let ℓ be a line passing through x. The intersection ℓ ∩ P is a segment; let y and z be its endpoints. There are α, β ∈ Γ such that y ∈ F_α and z ∈ F_β (if y were not on the boundary of any γ ∈ Γ, we could continue along ℓ a little further within P). We have y ∈ conv(V_α) and z ∈ conv(V_β), and thus x ∈ conv(V_α ∪ V_β) ⊆ conv(V).

We have proved that any H-polytope is a V-polytope, and it remains to show that a V-polytope can be expressed as the intersection of finitely many half-spaces. This follows easily by duality (and implicitly uses the separation theorem). Let P = conv(V) with V finite, and assume that 0 is an interior point of P.
By Exercise 5.1.6(a), the dual body P* equals ⋂_{v∈V} D₀(v)⁻, and by Exercise 5.1.4 it is bounded. By what we have already proved, P* is a V-polytope, and by Exercise 5.1.6(a) again, P = (P*)* is the intersection of finitely many half-spaces. □

Bibliography and remarks. The theory of convex polytopes is a well-developed area covered in numerous books and surveys, such as the already recommended recent monograph [Zie94] (with addenda and updates on the web page of its author), the very influential book by Grünbaum [Grü67], the chapters on polytopes in the handbooks of discrete and computational geometry [GO97], of convex geometry [GW93], and of combinatorics [GGL95], or the books by McMullen and Shephard [MS71] and Brøndsted [Brø83], concentrating on questions about the numbers of faces. Recent progress in combinatorial and computational polytope theory is reflected in the collection [KZ00]. For analyzing examples, one should be aware of (free) software systems for manipulating convex polytopes, such as polymake by Gawrilow and Joswig [GJ00].

Interesting discoveries about 3-dimensional convex polytopes were already made in ancient Greece. The treatise by Schläfli [Sch01] written in 1850–52 is usually mentioned as the beginning of modern theory, and several books were published around the turn of the century. We refer to Grünbaum [Grü67], Schrijver [Sch86], and to the other sources mentioned above for historical accounts. The permutahedron mentioned in the introduction to this chapter was considered by Schoute [Sch11], and it arises by at least two other quite different and natural constructions (see [Zie94]).

There are several ways of proving the equivalence of H-polytopes and V-polytopes. Ours is inspired by a proof by Edmonds, as presented in Fukuda's lecture notes (ETH Zurich).
A classical algorithmic proof is provided by the Fourier–Motzkin elimination procedure, which proceeds by projections on coordinate hyperplanes; see [Zie94] for a detailed exposition. The double-description method is a similar algorithm formulated in the dual setting, and it is still one of the most efficient known computational methods. We will say a little more about the algorithmic problem of expressing the convex hull of a finite set as the intersection of half-spaces in the notes to Section 5.5.

One may ask, What is a "vertex description" of an unbounded H-polyhedron? Of course, it is not the convex hull of a finite set, but it can be expressed as the Minkowski sum P + C, where P is a V-polytope and C is a convex cone described as the convex hull of finitely many rays emanating from 0.

Exercises

1. Verify that a d-dimensional simplex in R^d can be expressed as the intersection of d+1 half-spaces.

2. (a) Show that every convex polytope in R^d is an orthogonal projection of a simplex of a sufficiently large dimension onto the space R^d (which we consider embedded as a d-flat in some R^n).
(b) Prove that every convex polytope P symmetric about 0 (i.e., with P = −P) is the affine image of a crosspolytope of a sufficiently large dimension.

5.3 Faces of a Convex Polytope

The surface of the 3-dimensional cube consists of 8 "corner" points called vertices, 12 edges, and 6 squares called facets. According to the perhaps more usual terminology in 3-dimensional geometry, the facets would be called faces. But in the theory of convex polytopes, the word face has a slightly different meaning, defined below. For the cube, not only the squares but also the vertices and the edges are all called faces of the cube.

5.3.1 Definition (Face). A face of a convex polytope P is defined as
• either P itself, or
• a subset of P of the form P ∩ h, where h is a hyperplane such that P is fully contained in one of the closed half-spaces determined by h.
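For a V-polytope, a face of the second kind can be produced by maximizing a linear function over the vertex set: the vertices attaining the maximum are exactly the vertices of the face cut out by the corresponding supporting hyperplane (this anticipates Proposition 5.3.2 below, which says that a face is the convex hull of the polytope's vertices lying in it). A minimal sketch with the 3-cube; the helper name `face_of` is mine:

```python
from itertools import product

def face_of(vertices, c):
    """Vertices of the face of conv(vertices) on which <c, x> is maximal.

    For c != 0, the hyperplane <c, x> = max is a supporting hyperplane,
    so this set spans a face in the sense of Definition 5.3.1."""
    vals = [sum(ci * vi for ci, vi in zip(c, v)) for v in vertices]
    m = max(vals)
    return {v for v, val in zip(vertices, vals) if val == m}

cube = list(product((-1, 1), repeat=3))
assert len(face_of(cube, (1, 0, 0))) == 4   # a facet (a square)
assert len(face_of(cube, (1, 1, 0))) == 2   # an edge
assert len(face_of(cube, (1, 1, 1))) == 1   # a vertex
assert face_of(cube, (0, 0, 0)) == set(cube)  # c = 0 gives the improper face P
```

Every nonempty face of the cube arises this way for a suitable c; only the empty face needs the supporting-hyperplane formulation directly.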
We observe that each face of P is a convex polytope. This is because P is the intersection of finitely many half-spaces and h is the intersection of two half-spaces, so the face is an H-polyhedron, and moreover, it is bounded.

If P is a polytope of dimension d, then its faces have dimensions −1, 0, 1, ..., d, where −1 is, by definition, the dimension of the empty set. A face of dimension j is also called a j-face.

Names of faces. The 0-faces are called vertices, the 1-faces are called edges, and the (d−1)-faces of a d-dimensional polytope are called facets. The (d−2)-faces of a d-dimensional polytope are ridges; in the familiar 3-dimensional situation, edges = ridges. For example, the 3-dimensional cube has 28 faces in total: the empty face, 8 vertices, 12 edges, 6 facets, and the whole cube.

The following proposition shows that each V-polytope is the convex hull of its vertices, and that the faces can be described combinatorially: They are the convex hulls of certain subsets of vertices. This includes some intuitive facts such as that each edge connects two vertices.

A helpful notion is that of an extremal point of a set: For a set X ⊆ R^d, a point x ∈ X is extremal if x ∉ conv(X \ {x}).

5.3.2 Proposition. Let P ⊂ R^d be a (bounded) convex polytope.
(i) ("Vertices are extremal") The extremal points of P are exactly its vertices, and P is the convex hull of its vertices.
(ii) ("Face of a face is a face") Let F be a face of P. The vertices of F are exactly those vertices of P that lie in F. More generally, the faces of F are exactly those faces of P that are contained in F.

The proof is not essential for our further considerations, and it is given at the end of this section (but Exercise 9 below illustrates that things are not quite as simple as it might perhaps seem). The proposition has an appropriate analogue for polyhedra, but in order to avoid technicalities, we treat the bounded case only.
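In the plane, the extremality test x ∉ conv(X \ {x}) can be carried out exactly for integer points: by Carathéodory's theorem, a point of R² lies in a convex hull if and only if it lies in a triangle spanned by the given points. A brute-force sketch (O(n³) per point, fine for examples; all names are mine):

```python
from itertools import combinations

def sign_area(a, b, c):
    # twice the signed area of triangle abc (exact for integer input)
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_hull_2d(x, pts):
    # x in conv(pts) iff x lies in some triangle spanned by pts
    for a, b, c in combinations(pts, 3):
        d1, d2, d3 = sign_area(a, b, x), sign_area(b, c, x), sign_area(c, a, x)
        if (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0):
            return True
    return False

def extremal(pts):
    # the points x with x not in conv(pts \ {x})
    return [p for p in pts if not in_hull_2d(p, [q for q in pts if q != p])]

# a square plus its center: only the four corners are extremal
assert extremal([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]) \
       == [(0, 0), (2, 0), (2, 2), (0, 2)]
```

By Proposition 5.3.2(i), for a polytope this recovers exactly the vertex set.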
Graphs of polytopes. Each 1-dimensional face, or edge, of a convex polytope has exactly two vertices. We can thus define the graph G(P) of a polytope P in the natural way: The vertices of the polytope are vertices of the graph, and two vertices are connected by an edge in the graph if they are vertices of the same edge of P. (The terms "vertices" and "edges" for graphs actually come from the corresponding notions for 3-dimensional convex polytopes.)

Here is an example of a 3-dimensional polytope, the regular octahedron, with its graph:

For polytopes in R³, the graph is always planar: Project the polytope from its interior point onto a circumscribed sphere, and then make a "cartographic map" of this sphere, say by stereographic projection. Moreover, it can be shown that the graph is vertex 3-connected. (A graph G is called vertex k-connected if |V(G)| ≥ k+1 and deleting any at most k−1 vertices leaves G connected.) Nicely enough, these properties characterize graphs of convex 3-polytopes:

5.3.3 Theorem (Steinitz theorem). A finite graph is isomorphic to the graph of a 3-dimensional convex polytope if and only if it is planar and vertex 3-connected.

We omit a proof of the considerably harder "if" part (exhibiting a polytope for every vertex 3-connected planar graph); all known proofs are quite complicated.

Graphs of higher-dimensional polytopes probably have no nice description comparable to the 3-dimensional case, and it is likely that the problem of deciding whether a given graph is isomorphic to a graph of a 4-dimensional convex polytope is NP-hard. It is known that the graph of every d-dimensional polytope is vertex d-connected (Balinski's theorem), but this is only a necessary condition.

Examples. A d-dimensional simplex has been defined as the convex hull of a (d+1)-point affinely independent set V. It is easy to see that each subset of V determines a face of the simplex.
Thus, there are (d+1 choose k+1) faces of dimension k, for k = −1, 0, ..., d, and 2^{d+1} faces in total.

The d-dimensional crosspolytope has V = {e₁, −e₁, ..., e_d, −e_d} as the vertex set. A proper subset F ⊂ V determines a face if and only if there is no i such that both e_i ∈ F and −e_i ∈ F (Exercise 2). It follows that there are 3^d + 1 faces, including the empty one and the whole crosspolytope.

The nonempty faces of the d-dimensional cube [−1, 1]^d correspond to vectors v ∈ {−1, 1, 0}^d. The face corresponding to such v has the vertex set {u ∈ {−1, 1}^d : u_i = v_i for all i with v_i ≠ 0}. Geometrically, the vector v is the center of gravity of its face.

The face lattice. Let F(P) be the set of all faces of a (bounded) convex polytope P (including the empty face ∅ of dimension −1). We consider the partial ordering of F(P) by inclusion.

5.3.4 Definition (Combinatorial equivalence). Two convex polytopes P and Q are called combinatorially equivalent if F(P) and F(Q) are isomorphic as partially ordered sets.

We are going to state some properties of the partially ordered set F(P) without proofs. These are not difficult and can be found in [Zie94]. It turns out that F(P) is a lattice (a partially ordered set satisfying additional axioms). We recall that this means the following two conditions:

• Meets condition: For any two faces F, G ∈ F(P), there exists a face M ∈ F(P), called the meet of F and G, that is contained in both F and G and contains all other faces contained in both F and G.
• Joins condition: For any two faces F, G ∈ F(P), there exists a face J ∈ F(P), called the join of F and G, that contains both F and G and is contained in all other faces containing both F and G.

The meet of two faces is their geometric intersection F ∩ G.
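The face counts given above for the simplex, the crosspolytope, and the cube can be verified by direct enumeration (requires Python ≥ 3.8 for `math.comb`):

```python
from itertools import product
from math import comb

d = 3

# Simplex: every subset of the d+1 vertices spans a face -> 2^(d+1) faces.
simplex_faces = sum(comb(d + 1, k) for k in range(d + 2))
assert simplex_faces == 2 ** (d + 1)  # 16 for d = 3

# Crosspolytope: a proper subset F of {±e_1, ..., ±e_d} is a face iff it
# contains no antipodal pair; each coordinate independently contributes
# e_i, -e_i, or neither, giving 3^d proper faces (including the empty one),
# plus the whole polytope.
cross_faces = 3 ** d + 1
assert cross_faces == 28  # for d = 3

# Cube: nonempty faces correspond to vectors v in {-1, 0, 1}^d, plus one
# empty face; a v with k zero entries is a k-face.
cube_faces = len(list(product((-1, 0, 1), repeat=d))) + 1
assert cube_faces == 28  # 8 vertices + 12 edges + 6 facets + cube + empty
```

Grouping the cube's label vectors by their number k of zero entries recovers the f-vector (8, 12, 6, 1): there are (d choose k)·2^{d−k} faces of dimension k.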
For verifying the joins and meets conditions, it may be helpful to know that for a finite partially ordered set possessing the minimum element and the maximum element, the meets condition is equivalent to the joins condition, and so it is enough to check only one of the conditions.

Here is the face lattice of a 3-dimensional pyramid; the vertices are numbered 1–5, and the faces are labeled by their vertex sets:

(figure: the face lattice, ordered by inclusion from the empty face ∅ at the bottom through the vertices, edges, and facets up to P itself)

The face lattice is graded, meaning that every maximal chain has the same length (the rank of a face F is dim(F)+1). Quite obviously, it is atomic: Every face is the join of its vertices. A little less obviously, it is coatomic; that is, every face is the meet (intersection) of the facets containing it. An important consequence is that the combinatorial type of a polytope is determined by the vertex-facet incidences. More precisely, if we know the dimension and all subsets of vertices that are vertex sets of facets (but without knowing the coordinates of the vertices, of course), we can uniquely reconstruct the whole face lattice in a simple and purely combinatorial way.

Face lattices of convex polytopes have several other nice properties, but no full algebraic characterization is known, and the problem of deciding whether a given lattice is a face lattice is algorithmically difficult (even for 4-dimensional polytopes).

The face lattice can be a suitable representation of a convex polytope in a computer. Each j-face is connected by pointers to its (j−1)-faces and to the (j+1)-faces containing it. On the other hand, it is a somewhat redundant representation: Recall that the vertex-facet incidences already contain the full information, and for some applications, even less data may be sufficient, say the graph of the polytope.

The dual polytope. Let P be a convex polytope containing the origin in its interior. Then the dual set P* is also a polytope; we have verified this in the proof of Theorem 5.2.2.
5.3.5 Proposition. For each j = −1, 0, ..., d, the j-faces of P are in a bijective correspondence with the (d−j−1)-faces of P*. This correspondence also reverses inclusion; in particular, the face lattice of P* arises by turning the face lattice of P upside down.

Again we refer to the reader's diligence or to [Zie94] for a proof. Let us examine a few examples instead. Among the five regular Platonic solids, the cube and the octahedron are dual to each other, the dodecahedron and the icosahedron are also dual, and the tetrahedron is dual to itself. More generally, if we have a 3-dimensional convex polytope and G is its graph, then the graph of the dual polytope is the dual graph of G, in the usual graph-theoretic sense. The dual of a d-simplex is a d-simplex, and the d-dimensional cube and the d-dimensional crosspolytope are dual to each other.

We conclude with two notions of polytopes "in general position."

5.3.6 Definition (Simple and simplicial polytopes). A polytope P is called simplicial if each of its facets is a simplex (this happens, in particular, if the vertices of P are in general position, but general position is not necessary). A d-dimensional polytope P is called simple if each of its vertices is contained in exactly d facets.

The faces of a simplex are again simplices, and so each proper face of a simplicial polytope is a simplex. Among the five Platonic solids, the tetrahedron, the octahedron, and the icosahedron are simplicial; and the tetrahedron, the cube, and the dodecahedron are simple. Crosspolytopes are simplicial, and cubes are simple. An example of a polytope that is neither simplicial nor simple is the 4-sided pyramid used in the illustration of the face lattice. The dual of a simple polytope is simplicial, and vice versa.

For a simple d-dimensional polytope, a small neighborhood of each vertex looks combinatorially like a neighborhood of a vertex of the d-dimensional cube.
Thus, for each vertex v of a d-dimensional simple polytope, there are d edges emanating from v, and each k-tuple of these edges uniquely determines one k-face incident to v. Consequently, v belongs to (d choose k) k-faces, k = 0, 1, ..., d.

Proof of Proposition 5.3.2. In (i) ("vertices are extremal"), we assume that P is the convex hull of a finite point set. Among all such sets, we fix one that is inclusion-minimal and call it V₀. Let V_v be the vertex set of P, and let V_e be the set of all extremal points of P. We prove that V₀ = V_v = V_e, which gives (i).

We have V_e ⊆ V₀ by the definition of an extremal point. Next, we show that V_v ⊆ V_e. If v ∈ V_v is a vertex of P, then there is a hyperplane h with P ∩ h = {v}, and all of P \ {v} lies in one of the open half-spaces defined by h. Hence P \ {v} is convex, which means that v is an extremal point of P, and so V_v ⊆ V_e.

Finally we verify V₀ ⊆ V_v. Let v ∈ V₀; by the inclusion-minimality of V₀, we get that v ∉ C = conv(V₀ \ {v}). Since C and {v} are disjoint compact convex sets, they can be strictly separated by a hyperplane h. Let h_v be the hyperplane parallel to h and containing v; this h_v has all points of V₀ \ {v} on one side. We want to show that P ∩ h_v = {v} (then v is a vertex of P, and we are done). The set P \ h_v = conv(V₀) \ h_v, being the intersection of a convex set with an open half-space, is convex. Any segment vx, where x ∈ P \ h_v, shares only the point v with the hyperplane h_v, and so (P \ h_v) ∪ {v} is convex as well. Since this set contains V₀ and is convex, it contains P = conv(V₀), and so P ∩ h_v = {v} indeed.

As for (ii) ("face of a face is a face"), it is clear that a face G of P contained in F is a face of F too (use the same witnessing hyperplane). For the reverse direction, we begin with the case of vertices. By a consideration similar to that at the end of the proof of (i), we see that F = conv(V) ∩ h = conv(V ∩ h).
Hence all the extremal points of F, which by (i) are exactly the vertices of F, are in V.

Finally, let F be a face of P defined by a hyperplane h, and let G ⊆ F be a face of F defined by a hyperplane g within h; that is, g is a (d−2)-dimensional affine subspace of h with G = g ∩ F and with all of F on one side. Let γ be the closed half-space bounded by h with P ⊆ γ. We start rotating the boundary h of γ around g in the direction such that the rotated half-space γ' still contains F.

(figure: rotating the bounding hyperplane h around g to a new hyperplane h')

If we rotate by a sufficiently small amount, then all the vertices of P not lying in F are still in the interior of γ'. At the same time, the interior of γ' contains all the vertices of F not lying in G, while all the vertices of G remain on the boundary h' of γ'. So h' defines a face of P (since all of P is on one side), and this face has the same vertex set as G, and so it equals G by the first part of (ii) proved above. □

Bibliography and remarks. Most of the material in this section is quite old, and we restrict ourselves to a few comments and remarks on recent developments.

Graphs of polytopes. The Steinitz theorem was published in [Ste22]. A proof (of the harder implication) can be found in [Zie94]. In this type of proof, one starts with the planar graph K₄, which is obviously realizable as a graph of a 3-dimensional polytope, and creates the given 3-connected planar graph by a sequence of suitable elementary operations, the so-called ΔY transformations, which are shown to preserve the realizability.

Another type of proof first finds a suitable straight edge planar drawing of the given graph G and then shows that the vertices of such a drawing can be lifted to R³ to form the appropriate polytope. The drawings needed here are "rubber band" drawings: Pin down the vertices of an outer face and think of the edges as rubber bands of various strengths, which left alone would contract to points.
Then the equilibrium position, where the forces at every inner vertex add up to 0, specifies the drawing (see, e.g., Richter-Gebert [RG97] for a presentation). These ideas go back to Maxwell; the result about the equilibrium position specifying a straight edge drawing for every 3-connected planar graph was proved by Tutte [Tut60]. Very interesting related results about graphs with higher connectivity are due to Linial, Lovász, and Wigderson [LLW88].

Another way of obtaining suitable drawings is via Koebe's representation theorem (see, e.g., [PA95] for an exposition): Every planar graph G can be represented by touching circles; that is, every vertex v ∈ V(G) can be assigned a circular disk in the plane in such a way that the disks have pairwise disjoint interiors and two of them touch if and only if their two vertices are connected by an edge.

On the other hand, Koebe's theorem follows easily from a stronger version of the Steinitz theorem due to Andreev: Every 3-connected planar graph has a cage representation, i.e., as the graph of a 3-dimensional convex polytope P whose edges are all tangent to the unit sphere (each vertex of P can see a cap of the unit sphere, and a suitable stereographic projection of these caps yields the disks as in Koebe's theorem). These beautiful results, as well as several others along these lines, would certainly deserve to be included in a book like this, but here they are not for space and time reasons.

A result of Blind and Mani-Levitska, with a beautiful simple new proof by Kalai [Kal88], shows that a simple polytope is determined by its dimension and its graph; that is, if two d-dimensional simple polytopes P and Q have isomorphic graphs, then they are combinatorially equivalent.

One of the most challenging problems about graphs of convex polytopes is the Hirsch conjecture.
In its basic form, it states that the graph of any d-dimensional polytope with n facets has diameter at most n−d; i.e., every two vertices can be connected by a path of at most n−d edges. This conjecture is implied by its special case with n = 2d, the so-called d-step conjecture. There are several variants of the Hirsch conjecture. Some of them are known to be false, such as the Hirsch conjecture for d-dimensional polyhedra with n facets; their graph can have diameter at least n − d + ⌊d/5⌋. But even here the conjecture fails just by a little, while the crucial and wide open question is whether the diameter of the graph can be bounded by a fixed polynomial in d and n.

The Hirsch conjecture is motivated by linear programming (and it was published in Dantzig's book [Dan63]), since the running time of all variants of the simplex algorithm is bounded from below by the number of edges that must be traversed in order to get from the starting vertex of the polyhedron of admissible solutions to the optimum vertex.

The best upper bound is due to Kalai. He published several papers on this subject, successively improving and simplifying his arguments, and this sequence is concluded with [Kal92]. He proves the following: Let P be a convex polyhedron in R^d with n facets. Assume that no edge of P is horizontal and that P has a (unique) topmost vertex w. Then from every vertex v of P there is a path to w consisting of at most f(d, n) ≤ 2n·(d + ⌊log₂ n⌋ − 1 choose d − 1) ≤ 2n^{log₂ d + 1} edges and going upward all the time. The proof is quite short and uses only very simple properties of polytopes (also see [Zie94] or [Kal97]).

Kalai [Kal92] also discovered a randomized variant of the simplex algorithm for linear programming for which the expected number of pivot steps, for every linear program with n constraints in R^d, is bounded by a subexponential function of n and d, namely by n^{O(√d)}. All the previous worst-case bounds were exponential.
Interestingly, essentially the same algorithm (in a dual setting) was found by Sharir and Welzl and a little later analyzed in [MSW96], independent of Kalai's work and at almost the same time, but coming from a quite different direction. The Sharir–Welzl algorithm is formulated in an abstract framework, and it can be used for many other optimization problems besides linear programming.

Realizations of polytopes. By a realization of a d-dimensional polytope P we mean any polytope Q ⊂ R^d that is combinatorially equivalent to P. The proof of Steinitz's theorem shows that every 3-dimensional polytope has a realization whose vertices have integer coordinates. For 3-polytopes with n vertices, Richter-Gebert [RG97] proved that the vertex coordinates can be chosen as positive integers no larger than 2^{18n²}, and if the polytope has at least one triangular facet, the upper bound becomes 4^{3n} (a previous, slightly worse, estimate was given by Onn and Sturmfels). No nontrivial lower bounds seem to be known.

Let us remark that for straight edge drawings of planar graphs, the vertices of every n-vertex graph can be placed on a grid with side O(n). This was first proved by de Fraysseix, Pach, and Pollack [dFPP90] with the (2n−4) × (n−2) grid, and re-proved by Schnyder [Sch90] by a different method, with the (n−1) × (n−1) grid; see also Kant [Kan96] for more recent results in this direction.

For higher-dimensional polytopes, the situation is strikingly different. Although all simple polytopes and all simplicial polytopes can be realized with integer vertex coordinates, there are 4-dimensional polytopes for which every realization requires irrational coordinates (we will see an 8-dimensional example in Section 5.6). There are also 4-dimensional n-vertex polytopes for which every realization with integer coordinates uses doubly exponential coordinates, of order 2^{2^{Ω(n)}}.
There are numerous other results indicating that the polytopes of dimension 4 and higher are complicated. For example, the problem of deciding whether a given finite lattice is isomorphic to the face lattice of a 4-dimensional polytope is algorithmically difficult; it is polynomially equivalent to the problem of deciding whether a system of polynomial inequalities with integer coefficients in n variables has a solution. This latter problem is known to be NP-hard, but most likely it is even harder; the best known algorithm needs exponential time and polynomial space. An overview of such results, and references to previous work on which they are built, can be found in Richter-Gebert [RG99], and detailed proofs in [RG97]. Section 6.2 contains a few more remarks on realizability (see, in particular, Exercise 6.2.3).

Exercises

1. Verify that if V ⊂ R^d is affinely independent, then each subset F ⊆ V determines a face of the simplex conv(V).

2. Verify the description of the faces of the cube and of the crosspolytope given in the text.

3. Consider the (n−1)-dimensional permutahedron as defined in the introduction to this chapter.
(a) Verify that it really has n! vertices corresponding to the permutations of {1, 2, ..., n}.
(b) Describe all faces of the permutahedron combinatorially (what sets of permutations are vertex sets of faces?).
(c) Determine the dimensions of the faces found in (b). In particular, show that the facets correspond to ordered partitions (A, B) of {1, 2, ..., n}, A, B ≠ ∅, and count them.

4. Let P ⊂ R⁴ be the polytope conv{±e_i ± e_j : i, j = 1, 2, 3, 4, i ≠ j}, where e₁, ..., e₄ is the standard basis (this P is called the 24-cell). Describe the face lattice of P and prove that P is combinatorially equivalent to P* (in fact, P* can be obtained from P by an isometry and scaling).

5.
Using Proposition 5.3.2, prove the following:
(a) If F is a face of a convex polytope P, then F is the intersection of P with the affine hull of F.
(b) If F and G are faces of a convex polytope P, then F ∩ G is a face, too.

6. Let P be a convex polytope in R³ containing the origin as an interior point, and let F be a j-face of P, j = 0, 1, 2.
(a) Give a precise definition of the face F' of the dual polytope P* corresponding to F (i.e., describe F' as a subset of R³).
(b) Verify that F' is indeed a face of P*.

7. Let V ⊂ R^d be the vertex set of a convex polytope and let U ⊆ V. Prove that U is the vertex set of a face of conv(V) if and only if the affine hull of U is disjoint from conv(V \ U).

8. Prove that the graph of any 3-dimensional convex polytope is 3-connected; i.e., removing any 2 vertices leaves the graph connected.

9. Let C be a convex set. Call a point x ∈ C exposed if there is a hyperplane h with C ∩ h = {x} and all the rest of C on one side. For convex polytopes, exposed points are exactly the vertices, and we have shown that any extremal point is also exposed. Find an example of a compact convex set C ⊂ R² with an extremal point that is not exposed.

10. (On extremal points) For a set X ⊆ R^d, let ex(X) = {x ∈ X : x ∉ conv(X \ {x})} denote the set of extremal points of X.
(a) Find a convex set C ⊆ R^d with C ≠ conv(ex(C)).
(b) Find a compact convex C ⊂ R³ for which ex(C) is not closed.
(c) By modifying the proof of Theorem 5.2.2, prove that C = conv(ex(C)) for every compact convex C ⊂ R^d (this is a finite-dimensional version of the well-known Krein–Milman theorem).

5.4 Many Faces: The Cyclic Polytopes

A convex polytope P can be given to us by the list of vertices. How difficult is it to recover the full face lattice, or, more modestly, a representation of P as an intersection of half-spaces?
The first question to ask is how large the face lattice or the collection of half-spaces can be, compared to the number of vertices. That is, what is the maximum total number of faces, or the maximum number of facets, of a convex polytope in R^d with n vertices? The dual question is, of course, the maximum number of faces or vertices of a bounded intersection of n half-spaces in R^d.

Let f_j = f_j(P) denote the number of j-faces of a polytope P. The vector (f_0, f_1, ..., f_d) is called the f-vector of P. We thus assume f_0 = n, and we are interested in estimating the maximum value of f_{d−1} and of \sum_{k=0}^{d} f_k.

In dimensions 2 and 3, the situation is simple and favorable. For d = 2, our polytope is a convex polygon with n vertices and n edges, and so f_0 = f_1 = n, f_2 = 1. The f-vector is even determined uniquely. A 3-dimensional polytope can be regarded as a drawing of a planar graph, in our case with n vertices. By well-known results for planar graphs, we have f_1 ≤ 3n−6 and f_2 ≤ 2n−4. Equalities hold if and only if the polytope is simplicial (all facets are triangles). In both cases the total number of faces is linear in n.

But as the dimension grows, polytopes become much more complicated. First of all, even the total number of faces of the most innocent convex polytope, the d-dimensional simplex, is exponential in d. But here we consider d fixed and relatively small, and we investigate the dependence on the number of vertices n. Still, as we will see, for every n ≥ 5 there is a 4-dimensional convex polytope with n vertices and with every two vertices connected by an edge, i.e., with \binom{n}{2} edges! This looks counterintuitive, but our intuition is based on the 3-dimensional case. In any fixed dimension d, the number of facets can be of order n^{⌊d/2⌋}, which is rather disappointing for someone wishing to handle convex polytopes efficiently. On the other hand, complete desperation is perhaps not appropriate: Certainly not all polytopes exhibit this very bad behavior.
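The claim about 4-dimensional polytopes in which every two vertices are joined by an edge can be checked experimentally. The sketch below (an added illustration, not from the text) takes n points on the moment curve in R^4, the construction introduced in Section 5.4, and counts the edges of their convex hull with scipy:

```python
# Sketch: the convex hull of n points on the moment curve in R^4 has every
# pair of vertices joined by an edge, i.e. C(n,2) edges in total.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def moment_curve_points(ts, d=4):
    """Points (t, t^2, ..., t^d) for the given parameter values."""
    return np.array([[t**k for k in range(1, d + 1)] for t in ts], dtype=float)

n = 8
pts = moment_curve_points(range(1, n + 1))
hull = ConvexHull(pts)

# Qhull reports each facet of this simplicial 4-polytope as a 3-simplex;
# collecting the vertex pairs of all facet simplices yields the edge set.
edges = set()
for simplex in hull.simplices:
    for u, v in itertools.combinations(simplex, 2):
        edges.add((min(u, v), max(u, v)))

print(len(edges), n * (n - 1) // 2)  # both should equal 28
```

Collecting pairs inside facets suffices here because every edge of a polytope lies in some facet, and every edge of a simplex facet is an edge of the polytope.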
For example, it is known that if we choose n points uniformly at random in the unit ball B^d, then the expected number of faces of their convex hull is only o(n), for every fixed d. It turns out that the number of faces for a given dimension and number of vertices is the largest possible for so-called cyclic polytopes, to be introduced next. First we define a very useful curve in R^d.

5.4.1 Definition (Moment curve). The curve γ = {(t, t², ..., t^d) : t ∈ R} in R^d is called the moment curve.

5.4.2 Lemma. Any hyperplane h intersects the moment curve γ in at most d points. If there are d intersections, then h cannot be tangent to γ, and thus at each intersection, γ passes from one side of h to the other.

Proof. A hyperplane h can be expressed by the equation ⟨a, x⟩ = b, or in coordinates a_1 x_1 + a_2 x_2 + ··· + a_d x_d = b. A point of γ has the form (t, t², ..., t^d), and if it lies in h, we obtain a_1 t + a_2 t² + ··· + a_d t^d − b = 0. This means that t is a root of a nonzero polynomial p_h(t) of degree at most d, and hence the number of intersections of h with γ is at most d. If there are d distinct roots, then they must all be simple. At a simple root, the polynomial p_h(t) changes sign, and this means that the curve γ passes from one side of h to the other. □

As a corollary, we see that every d points of the moment curve are affinely independent, for otherwise, we could pass a hyperplane through them plus one more point of γ. So the moment curve readily supplies explicit examples of point sets in general position.

5.4.3 Definition (Cyclic polytope). The convex hull of finitely many points on the moment curve is called a cyclic polytope.

How many facets does a cyclic polytope have? Each facet is determined by a d-tuple of vertices, and distinct d-tuples determine distinct facets. Here is a criterion telling us exactly which d-tuples determine facets.

5.4.4 Proposition (Gale's evenness criterion).
Let V be the vertex set of a cyclic polytope P, considered with the linear ordering ≤ along the moment curve (larger vertices have larger values of the parameter t). Let F = {v_1, v_2, ..., v_d} ⊆ V be a d-tuple of vertices of P, where v_1 < v_2 < ··· < v_d. Then F determines a facet of P if and only if for any two vertices u, v ∈ V \ F, the number of vertices v_i ∈ F with u < v_i < v is even.

Proof. Let h_F be the hyperplane affinely spanned by F. Then F determines a facet if and only if all the points of V \ F lie on the same side of h_F. Since the moment curve γ intersects h_F in exactly d points, namely at the points of F, it is partitioned into d+1 pieces, say γ_0, ..., γ_d, each lying completely in one of the half-spaces, as is indicated in the drawing:

[figure: the moment curve crossing the hyperplane h_F at the d points of F, the arcs γ_0, ..., γ_d alternating between the two half-spaces]

Hence, if the vertices of V \ F are all contained in the odd-numbered pieces γ_1, γ_3, ..., as in the picture, or if they are all contained in the even-numbered pieces γ_0, γ_2, ..., then F determines a facet. This condition is equivalent to Gale's criterion. □

Now we can count the facets.

5.4.5 Theorem. The number of facets of a d-dimensional cyclic polytope with n vertices (n ≥ d+1) is

\binom{n-\lfloor d/2\rfloor}{\lfloor d/2\rfloor} + \binom{n-\lfloor d/2\rfloor-1}{\lfloor d/2\rfloor-1}  for d even, and  2\binom{n-\lfloor d/2\rfloor-1}{\lfloor d/2\rfloor}  for d odd.

For fixed d, this has the order of magnitude n^{⌊d/2⌋}.

Proof. The number of facets equals the number of ways of placing d black circles and n−d white circles in a row in such a way that we have an even number of black circles between each two white circles. Let us say that an arrangement of black and white circles is paired if any contiguous segment of black circles has an even length (the arrangements permitted by Gale's criterion need not be paired because of the initial and final segments).
The number of paired arrangements of 2k black circles and n−2k white circles is \binom{n-k}{k}, since by deleting every second black circle we get a one-to-one correspondence with selections of the positions of k black circles among n−k possible positions.

Let us return to the original problem, and first consider an odd d = 2k+1. In a valid arrangement of circles, we must have an odd number of consecutive black circles at the beginning or at the end (but not both). In the former case, we delete the initial black circle, and we get a paired arrangement of 2k black and n−1−2k white circles. In the latter case, we similarly delete the black circle at the end and again get a paired arrangement as in the first case. This establishes the formula in the theorem for odd d.

For even d = 2k, the number of initial consecutive black circles is either odd or even. In the even case, we have a paired arrangement, which contributes \binom{n-k}{k} possibilities. In the odd case, we also have an odd number of consecutive black circles at the end, and so by deleting the first and last black circles we obtain a paired arrangement of 2(k−1) black circles and n−2k white circles. This contributes \binom{n-k-1}{k-1} possibilities. □

Bibliography and remarks. The convex hull of the moment curve was studied by Carathéodory [Car07]. In the 1950s, Gale constructed neighborly polytopes by induction. Cyclic polytopes and the evenness criterion appear in Gale [Gal63]. The moment curve is an important object in many other branches besides the theory of convex polytopes. For example, in elementary algebraic topology it is used for proving that every (at most countable) d-dimensional simplicial complex has a geometric realization in R^{2d+1}.

Convex hulls of random sets.
Bárány [Bar89] proved that if n points are chosen uniformly and independently at random from a fixed d-dimensional convex polytope K (for example, the unit cube), then the number of k-dimensional faces of their convex hull has the order (log n)^{d−1} for every fixed d and k, 0 ≤ k ≤ d−1 (the constant of proportionality depending on d, k, and K). If K is a smooth convex body (such as the unit ball), then the order of magnitude is n^{(d−1)/(d+1)}, again with d, k, and K fixed. For more references and wider context see, e.g., Weil and Wieacker [WW93].

Exercises

1. (a) Show that if V is a finite subset of the moment curve, then all the points of V are extremal in conv(V); that is, they are vertices of the corresponding cyclic polytope.
(b) Show that any two cyclic polytopes in R^d with n vertices are combinatorially the same: They have isomorphic face lattices. Thus, we can speak of the cyclic polytope.
2. (Another curve like γ) Let β ⊂ R^d be the curve {(t^{a_1}, t^{a_2}, ..., t^{a_d}) : t ∈ R, t > 0}. Show that any hyperplane intersects β in at most d points (and if there are d intersections, then there is no tangency), and conclude that any n distinct points on β form the vertex set of a polytope combinatorially isomorphic to the cyclic polytope. (Let us remark that many other curves have these properties as well; the moment curve is just the most convenient example.)
3. (Universality of the cyclic polytope)
(a) Let x_1, ..., x_n be points in R^d. Let y_i denote the vector arising by appending 1 as the (d+1)st component of x_i. Show that if the determinants of all matrices with columns y_{i_1}, ..., y_{i_{d+1}}, for all choices of indices i_1 < i_2 < ··· < i_{d+1}, have the same nonzero sign, then x_1, ..., x_n form the vertex set of a convex polytope combinatorially equivalent to the n-vertex cyclic polytope in R^d.
(b) Show that for any integers n and d there exists N such that among any N points in R^d in general position, one can choose n points forming the vertex set of a convex polytope combinatorially equivalent to the n-vertex cyclic polytope. (This can be seen as a d-dimensional generalization of the Erdős–Szekeres theorem.)
4. Prove that if n is sufficiently large in terms of d, then for every set of n points in R^d in general position, one can choose d+1 simplices of dimension d with vertices at some of these points such that any hyperplane avoids at least one of these simplices. Use Exercise 3. This exercise is a special case of a problem raised by Lovász, and it was communicated to me by Bárány. A detailed solution can be found in [BvS+99].
5. Show that for cyclic polytopes in dimensions 4 and higher, every pair of vertices is connected by an edge. For dimension 4 and two arbitrary vertices, write out explicitly the equation of a hyperplane intersecting the cyclic polytope exactly in this edge.
6. Determine the f-vector of a cyclic polytope with n vertices in dimensions 4, 5, and 6.

5.5 The Upper Bound Theorem

The upper bound theorem, one of the earlier major achievements of the theory of convex polytopes, claims that the cyclic polytope has the largest possible number of faces.

5.5.1 Theorem (Upper bound theorem). Among all d-dimensional convex polytopes with n vertices, the cyclic polytope maximizes the number of faces of each dimension.

In this section we prove only an approximate result, which gives the correct order of magnitude for the maximum number of facets.

5.5.2 Proposition (Asymptotic upper bound theorem). A d-dimensional convex polytope with n vertices has at most 2\binom{n}{\lfloor d/2\rfloor} facets and no more than 2^{d+1}\binom{n}{\lfloor d/2\rfloor} faces in total. For d fixed, both quantities thus have the order of magnitude n^{⌊d/2⌋}.

First we establish this proposition for simplicial polytopes, in the following form.

5.5.3 Proposition.
Let P be a d-dimensional simplicial polytope. Then
(a) f_0(P) + f_1(P) + ··· + f_d(P) ≤ 2^d f_{d−1}(P), and
(b) f_{d−1}(P) ≤ 2 f_{⌊d/2⌋−1}(P).

This implies Proposition 5.5.2 for simplicial polytopes, since the number of (⌊d/2⌋−1)-faces is certainly no bigger than \binom{n}{\lfloor d/2\rfloor}, the number of all ⌊d/2⌋-tuples of vertices.

Proof of Proposition 5.5.3. We pass to the dual polytope P*, which is simple. Now we need to prove \sum_{k=0}^{d} f_k(P^*) ≤ 2^d f_0(P^*) and f_0(P^*) ≤ 2 f_{⌈d/2⌉}(P^*). Each face of P* has at least one vertex, and every vertex of a simple d-polytope is incident to 2^d faces, which gives the first inequality.

We now bound the number of vertices in terms of the number of ⌈d/2⌉-faces. This is the heart of the proof, and it shows where the mysterious exponent ⌊d/2⌋ comes from. Let us rotate the polytope P* so that no two vertices share the x_d-coordinate (i.e., no two vertices have the same vertical level). Consider a vertex v with the d edges emanating from it. By the pigeonhole principle, there are at least ⌈d/2⌉ edges directed upwards or at least ⌈d/2⌉ edges directed downwards. In the former case, every ⌈d/2⌉-tuple of edges going up determines a ⌈d/2⌉-face for which v is the lowest vertex. In the latter case, every ⌈d/2⌉-tuple of edges going down determines a ⌈d/2⌉-face for which v is the highest vertex. Here is an illustration, unfortunately for the not too interesting 3-dimensional case, showing a situation with 2 edges going up and the corresponding 2-dimensional face having v as the lowest vertex:

[figure: a vertex v of a 3-polytope with two edges going up, spanning a 2-face that has v as its lowest vertex]

We have exhibited at least one ⌈d/2⌉-face for which v is the lowest vertex or the highest vertex. Since the lowest vertex and the highest vertex are unique for each face, the number of vertices is no more than twice the number of ⌈d/2⌉-faces. □

Warning.
For simple polytopes, the total combinatorial complexity is proportional to the number of vertices, and for simplicial polytopes it is proportional to the number of facets (considering the dimension fixed, that is). For polytopes that are neither simple nor simplicial, the number of faces of intermediate dimensions can have a larger order of magnitude than both the number of facets and the number of vertices; see Exercise 1.

Nonsimplicial polytopes. To prove the asymptotic upper bound theorem, it remains to deal with nonsimplicial polytopes. This is done by a perturbation argument, similar to numerous other results where general position is convenient for the proof but where we want to show that the result holds in degenerate cases as well. In most instances in this book, the details of perturbation arguments are omitted, but here we make an exception, since the proof seems somewhat nontrivial.

5.5.4 Lemma. For any d-dimensional convex polytope P there exists a d-dimensional simplicial polytope Q with f_0(P) = f_0(Q) and f_k(Q) ≥ f_k(P) for all k = 1, 2, ..., d.

Proof. The basic idea is very simple: Move (perturb) every vertex of P by a very small amount, in such a way that the vertices are in general position, and show that each k-face of P gives rise to at least one k-face of the perturbed polytope. There are several ways of doing this proof.

We process the vertices one by one. Let V be the vertex set of P and let v ∈ V. The operation of ε-pushing v is as follows: We choose a point v′ lying in the interior of P, at distance at most ε from v, and on no hyperplane determined by the points of V, and we set V′ = (V \ {v}) ∪ {v′}. If we successively ε_v-push each vertex v of the polytope, the resulting vertex set is in general position and we have a simplicial polytope. It remains to show that for any polytope P with vertex set V and any v ∈ V, there is an ε > 0 such that ε-pushing v does not decrease the number of faces.
Let U ⊆ V be the vertex set of a k-face of P, 0 ≤ k ≤ d−1, and let V′ arise from V by ε-pushing v. If v ∉ U, then no doubt, U determines a face of conv(V′), and so we assume that v ∈ U. First suppose that v lies in the affine hull of U \ {v}; we claim that then U \ {v} determines a k-face of conv(V′). This follows easily from the criterion in Exercise 5.3.7: A subset U ⊆ V is the vertex set of a face of conv(V) if and only if the affine hull of U is disjoint from conv(V \ U). We leave a detailed argument to the reader (one must use the fact that v is pushed inside). If v lies outside of the affine hull of U \ {v}, then we want to show that U′ = (U \ {v}) ∪ {v′} determines a k-face of conv(V′). The affine hull of U is disjoint from the compact set conv(V \ U). If we move v continuously by a sufficiently small amount, the affine hull of U moves continuously, and so there is an ε > 0 such that if we move v within ε of its original position, the considered affine hull and conv(V \ U) remain disjoint. □

The h-vector and such. Here we introduce some notions extremely useful for a deeper study of the f-vectors of convex polytopes. In particular, they are crucial in proofs of the (exact) upper bound theorem. Let us go back to the setting of the proof of Proposition 5.5.3. There we considered a simple polytope that used to be called P*, but now, for simplicity, let us call it P. It is positioned in R^d in such a way that no edge is horizontal, and so for each vertex v, there are some i_v edges going upwards and d − i_v edges going downwards. The central definition is this: The h-vector of P is (h_0, h_1, ..., h_d), where h_i is the number of vertices v with exactly i edges going upwards. So, for example, we have h_0 = h_d = 1. Next, we relate the h-vector to the f-vector. Each vertex v is the lowest vertex for exactly \binom{i_v}{k} faces of dimension k, and each k-face has exactly one lowest vertex, and so

    f_k = \sum_{i=0}^{d} \binom{i}{k} h_i                    (5.1)

(for i < k we have \binom{i}{k} = 0).
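Relation (5.1) and its inversion are easy to experiment with. Here is a small sketch (an added illustration, not the book's) converting between h-vectors and f-vectors of simple polytopes, tested on the 3-cube:

```python
# Sketch of the h-vector / f-vector correspondence for a simple d-polytope:
# f_k = sum_i C(i, k) h_i, i.e. f(x) = h(x+1), and inversely h(x) = f(x-1).
from math import comb

def h_to_f(h):
    """f-vector (f_0, ..., f_d) from h-vector (h_0, ..., h_d), by (5.1)."""
    d = len(h) - 1
    return [sum(comb(i, k) * h[i] for i in range(d + 1)) for k in range(d + 1)]

def f_to_h(f):
    """h-vector from f-vector: h_i = sum_k (-1)^(k-i) C(k, i) f_k."""
    d = len(f) - 1
    return [sum((-1) ** (k - i) * comb(k, i) * f[k] for k in range(d + 1))
            for i in range(d + 1)]

# The 3-cube is simple; with a generic vertical direction its h-vector
# is (1, 3, 3, 1), and its f-vector (f_0, f_1, f_2, f_3) = (8, 12, 6, 1).
print(h_to_f([1, 3, 3, 1]))   # [8, 12, 6, 1]
print(f_to_h([8, 12, 6, 1]))  # [1, 3, 3, 1]
```

Note that `f_to_h` applied to (8, 12, 6, 1) returns the palindromic vector (1, 3, 3, 1), illustrating both the invertibility of (5.1) and the Dehn–Sommerville symmetry h_i = h_{d−i}.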
So the h-vector determines the f-vector. Less obviously, the h-vector can be uniquely reconstructed from the f-vector! A quick way of seeing this is via generating functions: If f(x) is the polynomial \sum_{k=0}^{d} f_k x^k and h(x) = \sum_{i=0}^{d} h_i x^i, then (5.1) translates to f(x) = h(x+1), and therefore h(x) = f(x−1). Explicitly, we have

    h_i = \sum_{k=0}^{d} (-1)^{k-i} \binom{k}{i} f_k.       (5.2)

We have defined the h-vector using one particular choice of the vertical direction, but now we know that it is determined by the f-vector and thus independent of the chosen direction. By turning P upside down, we see that h_i = h_{d−i} for all i = 0, 1, ..., d. These equalities are known as the Dehn–Sommerville relations. They include the usual Euler formula f_0 + f_2 = f_1 + 2 for 3-dimensional polytopes.

Let us stress once again that all we have said about h-vectors concerns only simple polytopes. For a simplicial polytope P, the h-vector can now be defined as the h-vector of the dual simple polytope P*. Explicitly,

    h_j = \sum_{k=0}^{d} (-1)^{d-k-j} \binom{d-k}{j} f_{k-1},

with the convention f_{−1} = 1.

The upper bound theorem has the following neat reformulation in terms of h-vectors: For any d-dimensional simplicial polytope with f_0 = n vertices, we have

    h_i \le \binom{n-d+i-1}{i},    i = 0, 1, ..., ⌊d/2⌋.    (5.3)

Proving the upper bound theorem is not one of our main topics, but an outline of a proof can be found in this book. It starts in the next section and finishes in Exercise 11.3.6, and it is not among the most direct possible proofs. Deriving the upper bound theorem from (5.3) is a pure and direct calculation, verifying that the h-vector of the cyclic polytope satisfies (5.3) with equality. We omit this part.

Bibliography and remarks. The upper bound theorem was conjectured by Motzkin in 1957 and proved by McMullen [McM70]. Many partial results had been obtained in the meantime. Perhaps most notably, Klee [Kle64] found a simple proof for polytopes with not too few vertices (at least about d² vertices in dimension d).
That proof applies to simplicial complexes much more general than the boundary complexes of simplicial polytopes: It works for Eulerian pseudomanifolds and, in particular, for all simplicial spheres, i.e., simplicial complexes homeomorphic to S^{d−1}. Presentations of McMullen's proof and Klee's proof can be found in Ziegler's book [Zie94]. A nice variation was described by Alon and Kalai [AK85]. Another proof, based on linear programming duality and results on hyperplane arrangements, was given by Clarkson [Cla93]. An elegant presentation of similar ideas, using the Gale transform discussed below in Section 5.6, can be found in Welzl [Wel01] and in Exercises 11.3.5 and 11.3.6. Our exposition of the asymptotic upper bound theorem is based on Seidel [Sei95].

The ordering of the vertices of a simple polytope P by their height in the definition of the h-vector corresponds to a linear ordering of the facets of the dual polytope P*. This ordering of the facets is a shelling. Shelling, even in the strictly peaceful mathematical sense, is quite important, also beyond the realm of convex polytopes. Let K be a finite cell complex whose cells are convex polytopes (such as the boundary complex of a convex polytope), and suppose that all maximal cells have the same dimension k. Such a K is called shellable if k = 0, or if k ≥ 1 and K has a shelling. A shelling of K is an enumeration F_1, F_2, ..., F_n of the facets (maximum-dimension cells) of K such that (i) the boundary complex of F_1 is shellable, and (ii) for every i > 1, there is a shelling of the complex F_i ∩ (F_1 ∪ ··· ∪ F_{i−1}) that can be extended to a shelling of the boundary complex of F_i. The boundary complex of a convex polytope is homeomorphic to a sphere, and a shelling builds the sphere in such a way that each new cell is glued by a contractible part of its boundary to the previously built part, except for the last cell, which closes the remaining hole.
McMullen's proof of the upper bound theorem does not generalize to simplicial spheres (i.e., finite simplicial complexes homeomorphic to spheres), for example because they need not be shellable, counterintuitive as this may look. The upper bound theorem for them was proved by Stanley [Sta75] using much heavier algebraic and algebraic-topological tools.

An interesting extension of the upper bound theorem was found by Kalai [Kal91]. Let P be a simplicial d-dimensional polytope. All proper faces of P are simplices, and so the boundary is a simplicial complex. Let K be any subcomplex of the boundary (a subset of the proper faces of P such that if F ∈ K, then all faces of F also lie in K). The strong upper bound theorem, as Kalai's result is called, asserts that if K has at least as many (d−1)-faces as the d-dimensional cyclic polytope on n vertices, then K has at least as many k-faces as that cyclic polytope, for all k = 0, 1, ..., d−1. (Note that we do not assume that P has n vertices!) The proof uses methods developed for the proof of the g-theorem mentioned below as well as Kalai's technique of algebraic shifting.

Another major achievement concerning the f-vectors of polytopes is the so-called g-theorem. The inventive name g-vector of a d-dimensional simple polytope refers to the vector (g_0, g_1, ..., g_{⌊d/2⌋}), where g_0 = h_0 and g_i = h_i − h_{i−1}, i = 1, 2, ..., ⌊d/2⌋. The g-theorem characterizes all possible integer vectors that can appear as the g-vector of a d-dimensional simple (or simplicial) polytope. Since the g-vector uniquely determines the f-vector, we have a complete characterization of f-vectors of simple polytopes. In particular, the g-theorem guarantees that all the components of the g-vector are always nonnegative (this fact is known as the generalized lower bound theorem), and therefore the h-vector is unimodal: We have h_0 ≤ h_1 ≤ ··· ≤ h_{⌊d/2⌋} = h_{⌈d/2⌉} ≥ ··· ≥ h_d.
(On the other hand, the f-vector of a simple polytope need not be unimodal; more exactly, it is unimodal in dimensions up to 19, and there are 20-dimensional nonunimodal examples.) We again refer to [Zie94] for a full statement of the g-theorem. The proof has two independent parts; one of them, due to Billera and Lee [BL81], constructs suitable polytopes, and the other part, first proved by Stanley [Sta80], shows certain inequalities for all simple polytopes. For studying the most elementary proof of the second part currently available, one can start with McMullen [McM96] and continue with [McM93].

For nonsimple (and nonsimplicial) polytopes, a characterization of the possible f-vectors remains elusive. It seems, anyway, that the flag vector might be a more appropriate parameter for nonsimple polytopes. The flag vector counts, for every k = 1, 2, ..., d and for every i_1 < i_2 < ··· < i_k, the number of chains F_1 ⊂ F_2 ⊂ ··· ⊂ F_k, where F_1, ..., F_k are faces with dim(F_j) = i_j (such a chain is called a flag).

No analogue of the upper bound theorem is known for centrally symmetric polytopes. A few results concerning their face counts, obtained by methods quite different from the ones for arbitrary polytopes, will be mentioned in Section 14.5. The proof of Lemma 5.5.4 by pushing vertices inside is similar to an argument in Klee [Kle64], but he proves more and presents the proof in more detail.

Convex hull computation. What does it mean to compute the convex hull of a given n-point set V ⊂ R^d? One possible answer, briefly touched upon in the notes to Section 5.2, is to express conv(V) as the intersection of half-spaces and to compute the vertex sets of all facets. (As we know, the face lattice can be reconstructed from this information purely combinatorially; see Kaibel and Pfetsch [KP01] for an efficient algorithm.)
Of course, for some applications it may be sufficient to know much less about the convex hull, say only the graph of the polytope or only the list of its vertices, but here we will discuss only algorithms for computing all the vertex-facet incidences or the whole face lattice. For a more detailed overview of convex hull algorithms see, e.g., Seidel [Sei97].

For the dimension d considered fixed, there is a quite simple and practical randomized algorithm that computes the convex hull of n points in R^d in expected time O(n^{⌊d/2⌋} + n log n) (Seidel [Sei91], simplifying Clarkson and Shor [CS89]), and also a very complicated but deterministic algorithm with the same asymptotic running time (Chazelle [Cha93b]; somewhat simplified in Brönnimann, Chazelle, and Matoušek [BCM99]). This is worst-case optimal, since an n-vertex polytope may have about n^{⌊d/2⌋} facets. There are also output-sensitive algorithms, whose running time depends on the total number f of faces of the resulting polytope. Recent results in this direction, including an algorithm that computes the convex hull of n points in general position in R^d (d fixed) in time O(n log f + (nf)^{1−1/(⌊d/2⌋+1)} (log n)^{c(d)}), can be found in Chan [Cha00b].

Still, none of the known algorithms is theoretically fully satisfactory, and practical computation of convex hulls even in moderate dimensions, say 10 or 20, can be quite challenging. Some of the algorithms are too complicated and have too large constants hidden in the asymptotic notation to be of practical value. Algorithms requiring general position of the points are problematic for highly degenerate point configurations (which appear in many applications), since small perturbations used to achieve general position often increase the number of faces tremendously. Some of the randomized algorithms compute intermediate polytopes that can have many more faces than the final result.
Often we are interested just in the vertex-facet incidences, but many algorithms compute all faces, whose number can be much larger, or even a triangulation of every face, which may again increase the complexity. Such problems of existing algorithms are discussed in Avis, Bremner, and Seidel [ABS97].

For actual computations, simple and theoretically suboptimal algorithms are often preferable. One of them is the double-description method mentioned earlier, and another algorithm that seems to behave well in many difficult instances is the reverse search of Avis and Fukuda [AF92]. It enumerates the vertices of the intersection of a given set H of half-spaces one by one, using quite small storage. Conceptually, one thinks of optimizing a generic linear function over ⋂H by a simplex algorithm with Bland's rule. This defines a spanning tree in the graph of the polytope, and this tree is searched depth-first starting from the optimum vertex, essentially by running the simplex algorithm "backwards." The main problem of this algorithm is with degenerate vertices of high degree, which may correspond to an enormous number of bases in the simplex algorithm.

Also, it sometimes helps if one knows some special properties of the convex hull in a particular problem, say many symmetries. For example, very extensive computations of convex hulls were performed by Deza, Fukuda, Pasechnik, and Sato [DFPS00], who studied the metric polytope. Before we define this interesting polytope, let us first introduce the metric cone M_n. This is a set in R^{\binom{n}{2}} representing all metrics on {1, 2, ..., n}, where the coordinate x_{\{i,j\}} specifies the distance of i to j, 1 ≤ i < j ≤ n. So M_n is defined by the triangle inequalities x_{\{i,j\}} + x_{\{j,k\}} ≥ x_{\{i,k\}}, where i, j, k are three distinct indices.
The metric polytope m_n is the subset of M_n defined by the additional inequalities saying that the perimeter of each triangle is at most 2, namely x_{\{i,j\}} + x_{\{j,k\}} + x_{\{i,k\}} ≤ 2. Deza et al. were able to enumerate all of the approximately 1.5·10^9 vertices of the 28-dimensional polytope m_8; this may give some idea of the extent of these computational problems. Without using the many symmetries of m_n, a polytope of this size would currently be out of reach. Such computations might provide insight into various conjectures concerning the metric polytope, which are important for combinatorial optimization problems (see, e.g., Deza and Laurent [DL97] for background).

Exercises

1. (a) Let P be a k-dimensional convex polytope in R^k, and Q an ℓ-dimensional convex polytope in R^ℓ. Show that the Cartesian product P × Q ⊂ R^{k+ℓ} is a convex polytope of dimension k + ℓ.
(b) If F is an i-face of P and G is a j-face of Q, i, j ≥ 0, then F × G is an (i+j)-face of P × Q. Moreover, this yields all the nonempty faces of P × Q.
(c) Using the product of suitable polytopes, find an example of a "fat-lattice" polytope, i.e., a polytope for which the total number of faces has a larger order of magnitude than the number of vertices plus the number of facets together (the dimension should be a constant).
(d) Show that the following yields a 5-dimensional fat-lattice polytope: the convex hull of two regular n-gons whose affine hulls are skew 2-flats in R^5.
For recent results on fat-lattice polytopes see Eppstein, Kuperberg, and Ziegler [EKZ01].

5.6 The Gale Transform

On a very general level, the Gale transform resembles the duality transform defined in Section 5.1. Both convert a (finite) geometric configuration into another geometric configuration, and they may help uncover some properties of the original configuration by making them more apparent, or easier to visualize, in the new configuration.
The Gale transform is more complicated to explain and probably more difficult to get used to, but it seems worth the effort. It was invented for studying high-dimensional convex polytopes, and recently it has been used for solving problems about point configurations by relating them to advanced theorems on convex polytopes. It is also closely related to the duality of linear programming (see Section 10.1), but we will not elaborate on this connection here.

The Gale transform assigns to a sequence a = (a_1, a_2, ..., a_n) of n ≥ d+1 points in R^d another sequence ḡ = (ḡ_1, ḡ_2, ..., ḡ_n) of n points. The points ḡ_1, ḡ_2, ..., ḡ_n live in a different dimension, namely in R^{n−d−1}. For example, n points in the plane are transformed to n points in R^{n−3} and vice versa. In the literature one finds many results about k-dimensional polytopes with k+3 or k+4 vertices; this is because their vertex sets have a low-dimensional Gale transform. Let us stress that the Gale transform operates on sequences, not individual points: We cannot say what ḡ_1 is without knowing all of a_1, a_2, ..., a_n. We also require that the affine hull of the a_i be the whole R^d; otherwise, the Gale transform is not defined. (On the other hand, we do not need any sort of general position, and some of the a_i may even coincide.)

The reader might wonder why the points of the Gale transform are written with bars. This is to indicate that they should be interpreted as vectors in a vector space, rather than as points in an affine space. As we will see, "affine" properties of the sequence a, such as affine dependencies, correspond to "linear" properties of the Gale transform, such as linear dependencies.

In order to obtain the Gale transform of a, we first convert the a_i into (d+1)-dimensional vectors: ā_i ∈ R^{d+1} is obtained from a_i by appending a (d+1)st coordinate equal to 1.
This is the embedding R^d → R^{d+1} often used for relating affine notions in R^d to linear notions in R^{d+1}; see Section 1.1. Let Ā be the (d+1) × n matrix with ā_i as the ith column. Since we assume that there are d+1 affinely independent points in a, the matrix Ā has rank d+1, and so the vector space V generated by the rows of Ā is a (d+1)-dimensional subspace of R^n. We let V^⊥ be the orthogonal complement of V in R^n; that is, V^⊥ = {w ∈ R^n: ⟨v, w⟩ = 0 for all v ∈ V}. We have dim(V^⊥) = n−d−1. Let us choose some basis (b_1, b_2, ..., b_{n-d-1}) of V^⊥, and let B be the (n−d−1) × n matrix with b_j as the jth row. Finally, we let ḡ_i ∈ R^{n-d-1} be the ith column of B. The sequence ḡ = (ḡ_1, ḡ_2, ..., ḡ_n) is the Gale transform of a. Here is a pictorial summary:

[Schematic figure: the (d+1) × n matrix Ā, whose columns ā_1, ..., ā_n are the lifted points (last row all 1's), and the (n−d−1) × n matrix B, whose rows form a basis of the orthogonal complement and whose columns ḡ_1, ..., ḡ_n form the Gale transform.]

5.6.1 Observation.
(i) (The Gale transform is determined up to linear isomorphism) In the construction of ḡ, we can choose an arbitrary basis of V^⊥. Choosing a different basis corresponds to multiplying the matrix B from the left by a nonsingular (n−d−1) × (n−d−1) matrix T (Exercise 1), and this means transforming (ḡ_1, ..., ḡ_n) by a linear isomorphism of R^{n-d-1}.
(ii) A sequence ḡ in R^{n-d-1} is the Gale transform of some a if and only if it spans R^{n-d-1} and has 0 as the center of gravity: ∑_{i=1}^n ḡ_i = 0.
(iii) Let us consider a sequence ḡ in R^{n-d-1} satisfying the condition in (ii). If we interpret it as a point sequence (breaking the convention that the result of the Gale transform should be thought of as a sequence of vectors), apply the Gale transform to it, again consider the result as a point sequence, and apply the Gale transform the second time, we recover the original ḡ, up to linear isomorphism (Exercise 5).

Two ways of probing a configuration.
We would like to set up a dictionary for translating between geometric properties of a sequence a and those of its Gale transform. First we discuss how some familiar geometric properties of a configuration of points or vectors are reflected in the values of affine or linear functions on the configuration, and how they manifest themselves in affine or linear dependencies.

For a sequence a = (a_1, ..., a_n) of vectors in R^{d+1}, we define two vector subspaces of R^n:

LinVal(a) = {(f(a_1), f(a_2), ..., f(a_n)): f: R^{d+1} → R is a linear function},
LinDep(a) = {α ∈ R^n: α_1 a_1 + α_2 a_2 + ⋯ + α_n a_n = 0}.

For a point sequence a = (a_1, ..., a_n), we then let AffVal(a) = LinVal(ā) and AffDep(a) = LinDep(ā), where ā is obtained from a as above, by appending 1's. Another description is

AffVal(a) = {(f(a_1), f(a_2), ..., f(a_n)): f: R^d → R is an affine function},
AffDep(a) = {α ∈ R^n: α_1 a_1 + ⋯ + α_n a_n = 0, α_1 + ⋯ + α_n = 0}.

The knowledge of LinVal(a) tells us a lot about a, and we only have to learn to decode the information. As usual, we assume that a linearly spans all of R^{d+1}. Each nonzero linear function f: R^{d+1} → R determines the linear hyperplane h_f = {x ∈ R^{d+1}: f(x) = 0} (by a linear hyperplane we mean a hyperplane passing through 0). This h_f is oriented (one of its half-spaces is positive and the other negative), and the sign of f(a_i) determines whether a_i lies on h_f, on its positive side, or on its negative side.

[Figure: an oriented linear hyperplane h_f: f(x) = 0 through 0, with the positive side f(x) > 0 and the negative side f(x) < 0.]

We begin our decoding of the properties of a with the property "spanning a linear hyperplane." That is, we choose our favorite index set I ⊆ {1, 2, ..., n}, and we ask whether the points of the subsequence a_I = (a_i: i ∈ I) span a linear hyperplane. First, we observe that they lie in a common linear hyperplane if and only if there is a nonzero φ ∈ LinVal(a) such that φ_i = 0 for all i ∈ I.
It could still happen that all of a_I lies in a lower-dimensional linear subspace. Using the assumption that a spans R^{d+1}, it is not difficult to see that a_I spans a linear hyperplane if and only if all φ ∈ LinVal(a) that vanish on a_I have identical zero sets; that is, the set {i: φ_i = 0} is the same for all such φ.

Next we look at dependencies. For a vector α ∈ R^n, define the index sets I_+(α) = {i ∈ {1, 2, ..., n}: α_i > 0} and I_−(α) = {i ∈ {1, 2, ..., n}: α_i < 0}. As we learned in the proof of Radon's lemma (Lemma 1.3.1), the sets I_+ = I_+(α) and I_− = I_−(α) for a nonzero α ∈ AffDep(a) correspond to Radon partitions of a. Namely, ∑_{i∈I_+} α_i a_i = ∑_{i∈I_−} (−α_i) a_i, and dividing by ∑_{i∈I_+} α_i = ∑_{i∈I_−} (−α_i), we have convex combinations on both sides, and so conv(a_{I_+}) ∩ conv(a_{I_−}) ≠ ∅. Conversely, if I_1 and I_2 are disjoint index sets with conv(a_{I_1}) ∩ conv(a_{I_2}) ≠ ∅, then there is a nonzero α ∈ AffDep(a) with I_+(α) ⊆ I_1 and I_−(α) ⊆ I_2. For example, a_i is a vertex of conv(a) if and only if there is no α ∈ AffDep(a) with I_+(α) = {i}.

For a sequence a of vectors, linear dependencies correspond to expressing 0 as a convex combination. Namely, for disjoint index sets I_1 and I_2, we have 0 ∈ conv({a_i: i ∈ I_1} ∪ {−a_i: i ∈ I_2}) if and only if there is a nonzero α ∈ LinDep(a) with I_+(α) ⊆ I_1 and I_−(α) ⊆ I_2.

Together with these geometric interpretations of LinVal(a), AffVal(a), LinDep(a), and AffDep(a), the following lemma (whose proof is left to Exercise 8) allows us to translate properties of point configurations to those of their Gale transforms.

5.6.2 Lemma. Let a be a sequence of n points in R^d whose points affinely span R^d, and let ḡ be its Gale transform. Then LinVal(ḡ) = AffDep(a) and LinDep(ḡ) = AffVal(a).

So a Radon partition of a corresponds to a partition of ḡ by a linear hyperplane, and a partition of a by a hyperplane translates to a linear dependence (i.e., a "linear Radon partition") of ḡ. Let us list several interesting connections, again leaving the simple but instructive proofs to the reader.

5.6.3 Corollary (Dictionary of the Gale transform).
(i) (Lying in a common hyperplane) For every (d+1)-point index set I ⊆ {1, 2, ..., n}, the points a_i with i ∈ I lie in a common hyperplane if and only if all the vectors ḡ_j with j ∉ I lie in a common linear hyperplane.
(ii) (General position) In particular, the points of a are in general position (no d+1 on a common hyperplane) if and only if every n−d−1 vectors among ḡ_1, ..., ḡ_n span R^{n-d-1} (which is a natural condition of general position for vectors).
(iii) (Faces of the convex hull) The points a_i with i ∈ I are contained in a common facet of P = conv(a) if and only if 0 ∈ conv{ḡ_j: j ∉ I}. In particular, if P is a simplicial polytope, then its k-faces exactly correspond to complements of the (n−k−1)-element subsets of ḡ containing 0 in the convex hull.
(iv) (Convex independence) The a_i form a convex independent set if and only if there is no oriented linear hyperplane with exactly one of the ḡ_j on the positive side.

Here is, finally, a picture of a 3-dimensional convex polytope with 6 vertices and the (planar) Gale transform of its vertex set:

[Figure: a 3-dimensional polytope with vertices a_1, ..., a_6, and its planar Gale transform ḡ_1, ..., ḡ_6.]

For example, the facet a_1 a_2 a_5 a_6 is reflected by the complementary pair ḡ_3, ḡ_4 of parallel oppositely oriented vectors, and so on.

Signs suffice. As was noted above, in order to find out whether some a_i is a vertex of conv(a), we ask whether there is an α ∈ AffDep(a) with I_+(α) = {i}. Only the signs of the vectors in AffDep(a) are important here, and this is the case with all the combinatorial-geometric information about point sequences or vector sequences in Corollary 5.6.3. For such purposes, the knowledge of

sgn(AffDep(a)) = {(sgn(α_1), ..., sgn(α_n)): α ∈ AffDep(a)}

is as good as the knowledge of AffDep(a).
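The construction of the Gale transform and the dictionary above can be tried out numerically. The following is a minimal sketch, not part of the text (the function name gale_transform and the numerical tolerance are mine): it lifts the points, takes a basis of the orthogonal complement of the row space of the lifted matrix via SVD, and reads off the columns.

```python
import numpy as np

def gale_transform(points):
    # Lift the points by appending 1's, then take a basis of the orthogonal
    # complement of the row space of the lifted matrix; the columns of that
    # basis matrix are the Gale-transform vectors.
    a = np.asarray(points, dtype=float)          # n points in R^d, as rows
    n, d = a.shape
    A_bar = np.hstack([a, np.ones((n, 1))]).T    # (d+1) x n, columns = lifted points
    _, s, vt = np.linalg.svd(A_bar)
    rank = int(np.sum(s > 1e-9))
    assert rank == d + 1, "the points must affinely span R^d"
    B = vt[rank:]                                # (n-d-1) x n, rows span V-perp
    return B.T                                   # row i is g_i in R^(n-d-1)

# Vertices of the unit square: n = 4, d = 2, so the transform lives in R^1.
g = gale_transform([(0, 0), (1, 0), (0, 1), (1, 1)])
print(np.allclose(g.sum(axis=0), 0))             # True: Observation 5.6.1(ii)
# Up to a scalar, g = (t, -t, -t, t); normalize by the first entry:
v = g.ravel() / g.ravel()[0]
# Corollary 5.6.3(iii): the edge {a_1, a_2} is a facet since conv{g_3, g_4}
# contains 0 (opposite signs); the diagonal {a_1, a_4} is not, since g_2, g_3
# have the same sign.
```

The sign pattern (+, −, −, +) records exactly which pairs of square vertices span edges and which span the diagonals.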
We can thus declare two sequences a and b combinatorially isomorphic if sgn(AffDep(a)) = sgn(AffDep(b)) and sgn(AffVal(a)) = sgn(AffVal(b)).^2 We will hear a little more about this notion of combinatorial isomorphism in Section 9.3 when we discuss order types, and also in the notes to Section 6.2 in connection with oriented matroids.

^2 It is nontrivial but true that either of these equalities implies the other one.

Here we need only one very special case: If g = (g_1, ..., g_n) is a sequence of vectors, t_1, ..., t_n > 0 are positive real numbers, and g' = (t_1 g_1, ..., t_n g_n), then clearly, sgn(LinVal(g)) = sgn(LinVal(g')) and sgn(LinDep(g)) = sgn(LinDep(g')), and so g and g' are combinatorially isomorphic vector configurations.

Affine Gale diagrams. We have seen a certain asymmetry of the Gale transform: While the sequence a is interpreted affinely, as a point sequence, its Gale transform needs to be interpreted linearly, as a sequence of vectors (with 0 playing a special role). Could one reduce the dimension of ḡ by 1 and pass to an "affine version" of the Gale transform? This is indeed possible, but one has to distinguish "positive" and "negative" points in the affine version.

Let ḡ be the Gale transform of some a, with ḡ_1, ..., ḡ_n ∈ R^{n-d-1}. Let us assume for simplicity that all the ḡ_i are nonzero. We choose a hyperplane h not parallel to any of the ḡ_i and not passing through 0, and we project the ḡ_i centrally from 0 into h, obtaining points g_1, ..., g_n ∈ h ≅ R^{n-d-2}. If g_i lies on the same side of 0 as ḡ_i, i.e., if g_i = t_i ḡ_i with t_i > 0, we set σ_i = +1, and call g_i a positive point. For g_i lying on the other side of 0 than ḡ_i we let σ_i = −1, and we call g_i a negative point.
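The central projection step can be sketched in a few lines. This is my own illustration, not the book's (the function name and the choice of hyperplane ⟨u, x⟩ = 1 are assumptions); the hardcoded ḡ below is a Gale transform of the octahedron vertex sequence (e_1, −e_1, e_2, −e_2, e_3, −e_3), which one can verify against Observation 5.6.1(ii): pairs of antipodal vertices give coincident transform vectors.

```python
import numpy as np

def affine_gale_diagram(g_bar, u):
    # Project each g_i centrally from 0 onto the hyperplane <u, x> = 1.
    # sigma_i = +1 iff the projected point lies on the same side of 0 as g_i,
    # i.e., iff <u, g_i> > 0.
    g_bar = np.asarray(g_bar, dtype=float)
    t = g_bar @ u                         # <u, g_i>; must be nonzero for all i
    assert np.all(np.abs(t) > 1e-9), "hyperplane parallel to some g_i"
    return g_bar / t[:, None], np.sign(t).astype(int)

# A Gale transform of the octahedron vertices (e1, -e1, e2, -e2, e3, -e3):
g_bar = [(1, 0), (1, 0), (0, 1), (0, 1), (-1, -1), (-1, -1)]
y, sigma = affine_gale_diagram(g_bar, np.array([1.0, 2.0]))
print(sigma.tolist())             # [1, 1, 1, 1, -1, -1]: two negative points
print(np.allclose(y[0], y[1]))    # True: antipodal vertices project together
```

The coincident projected points are exactly the positions that the text marks with shared symbols in the drawing below.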
Here is an example with the 2-dimensional Gale transform from the previous drawing:

[Figure: the planar Gale transform ḡ_1, ..., ḡ_6 and its affine Gale diagram on a line h, with positive and negative points marked.]

The positive g_i are marked by full circles, the negative ones by empty circles, and we have borrowed the (incomplete) yin-yang symbol for marking the positions shared by one positive and one negative point. This sequence g of positive and negative points in R^{n-d-2}, or more formally the pair (g, σ), is called an affine Gale diagram of a. It conveys the same combinatorial information as ḡ, although we cannot reconstruct a from it up to linear isomorphism, as was the case with ḡ. (For this reason, we speak of Gale diagram rather than Gale transform.) One has to get used to interpreting the positive and negative points properly. If we put

AffVal(g, σ) = {(σ_1 f(g_1), ..., σ_n f(g_n)): f: R^{n-d-2} → R affine},
AffDep(g, σ) = {α ∈ R^n: ∑_{i=1}^n α_i σ_i g_i = 0, ∑_{i=1}^n α_i σ_i = 0},

then, as is easily checked,

sgn(AffDep(g, σ)) = sgn(LinDep(ḡ)) and sgn(AffVal(g, σ)) = sgn(LinVal(ḡ)).

Here is a reinterpretation of Corollary 5.6.3 in terms of the affine Gale diagram.

5.6.4 Proposition (Dictionary of affine Gale diagrams). Let a be a sequence of n points in R^d, let ḡ be the Gale transform of a, and assume that all the ḡ_i are nonzero. Let (g, σ) be an affine Gale diagram of a in R^{n-d-2}.
(i) A subsequence a_I lies in a common facet of conv(a) if and only if conv({g_j: j ∉ I, σ_j = +1}) ∩ conv({g_j: j ∉ I, σ_j = −1}) ≠ ∅.
(ii) The points of a are in convex position if and only if for every oriented hyperplane in R^{n-d-2}, the number of positive points of g on its positive side plus the number of negative points of g on its negative side is at least 2.

So far we have assumed that ḡ_i ≠ 0 for all i.
This need not hold in general, and points with ḡ_i = 0 need a special treatment in the affine Gale diagram: They are called the special points, and for a full specification of the affine Gale diagram, we draw the positive and negative points and give the number of special points. It is easy to find out how the presence of special points influences the conditions in the previous proposition.

A nonrational polytope. Configurations of k+4 points in R^k have planar affine Gale diagrams. This leads to many interesting constructions of k-dimensional convex polytopes with k+4 vertices. Here we give just one example: an 8-dimensional polytope with 12 vertices that cannot be realized with rational coordinates; that is, no polytope with isomorphic face lattice has all vertex coordinates rational.

First one has to become convinced that if 9 distinct points are placed in R^2 so that they are not all collinear and there are collinear triples and 4-tuples as is marked by segments in the left drawing below, then not all coordinates of the points can be rational. We omit the proof, which has little to do with the Gale transform or convex polytopes.

[Figure: left, the 9-point configuration with the collinear triples and 4-tuples marked by segments; right, the same configuration with some points declared positive, some negative, and some both, giving 12 points.]

Next, we declare some points negative, some positive, and some both positive and negative, as in the right drawing, obtaining 12 points. These points have a chance of being an affine Gale diagram of the vertex set of an 8-dimensional convex polytope, since condition (ii) in Proposition 5.6.4 is satisfied. How do we construct such a polytope? For g_i = (x_i, y_i), we put ḡ_i = (t_i x_i, t_i y_i, t_i) ∈ R^3, choosing t_i > 0 for positive g_i and t_i < 0 for negative g_i, in such a way that ∑_{i=1}^{12} ḡ_i = 0. Then the Gale transform of ḡ is the vertex set of the desired convex polytope P (see Observation 5.6.1(ii) and (iii)).

Let P' be some convex polytope with an isomorphic face lattice and let (g', σ') be an affine Gale diagram of its vertex set a'.
We have, for example, g'_7 = g'_10, because {a'_i: i ≠ 7, 10} form a facet of P', and similarly for the other point coincidences. The triple g'_8, g'_12, g'_1 (where g'_1 is positive) is collinear, because {a'_i: i ≠ 1, 8, 12} is a facet. In this way, we see that the point coincidences and collinearities are preserved, and so no affine Gale diagram of P' can have all coordinates rational. At the same time, by checking the definition, we see that a point sequence with rational coordinates has at least one affine Gale diagram with rational coordinates. Thus, P cannot be realized with rational coordinates.

Bibliography and remarks. Gale diagrams and the Gale transform emerged from the work of Gale [Gal56] and were further developed by Perles, as is documented in [Grü67] (also see, e.g., [MS71]). Our exposition essentially follows Ziegler's book [Zie94] (his treatment is combined with an introduction to oriented matroids). We aim at concreteness, and so, for example, the Gale transform is defined using the orthogonal complement, although it might be mathematically more elegant to work with the annihilator in the dual space (R^n)*, and so on.

The construction of an irrational 8-polytope is due to Perles. In Section 11.3 (Exercise 6) we mention an interpretation of the h-vector of a simplicial convex polytope via the Gale transform. Using this correspondence, Wagner and Welzl [WW01] found an interesting continuous analogue of the upper bound theorem, which speaks about probability distributions in R^d. For other recent applications of a similar correspondence see the notes to Section 11.3.

Exercises

1. Let B be a k × n matrix of rank k ≤ n. Check that for any k × n matrix B' whose rows generate the same vector space as the rows of B, there exists a nonsingular k × k matrix T with B' = TB. Infer that if ḡ = (ḡ_1, ..., ḡ_n) is a Gale transform of a, then any other Gale transform of a has the form (Tḡ_1, Tḡ_2, ..., Tḡ_n) for a nonsingular square matrix T.
2.
Let a be a sequence of d+1 affinely independent points in R^d. What is the Gale transform of a, and what are AffVal(a) and AffDep(a)?
3. Let ḡ be a Gale transform of the vertex set of a convex polytope P ⊂ R^d, and let h̄ be obtained from ḡ by appending the zero vector. Check that h̄ is again a Gale transform of a convex independent set. What is the relation of this set to P?
4. Using affine Gale diagrams, count the number of classes of combinatorial equivalence of d-dimensional convex polytopes with d+2 vertices. How many of them are simple, and how many simplicial?
5. Verify the characterization in Observation 5.6.1(ii) of sequences ḡ in R^{n-d-1} that are Gale transforms of some a, and check that if the Gale transform is applied twice to such ḡ, we obtain ḡ up to linear isomorphism.
6. Let a = (a_1, ..., a_n) be a point sequence in R^d whose affine hull is all of R^d, and let P = conv{a_1, ..., a_n}. Given AffVal(a), explain how we can determine which of the a_i are the vertices of P and how we reconstruct the face lattice of P.
7. Let a be a sequence of n vectors in R^{d+1} that spans R^{d+1}.
(a) Find dim LinVal(a) and dim LinDep(a).
(b) Check that LinVal(a) is the orthogonal complement of LinDep(a).
8. Prove Lemma 5.6.2.
9. Verify Corollary 5.6.3.

5.7 Voronoi Diagrams

Consider a finite set P ⊂ R^d. For each point p ∈ P, we define a region reg(p), which is the "sphere of influence" of the point p: It consists of the points x ∈ R^d for which p is the closest point among the points of P. Formally,

reg(p) = {x ∈ R^d: dist(x, p) ≤ dist(x, q) for all q ∈ P},

where dist(x, y) denotes the Euclidean distance of the points x and y. The Voronoi diagram of P is the set of all regions reg(p) for p ∈ P. (More precisely, it is the cell complex induced by these regions; that is, every intersection of a subset of the regions is a face of the Voronoi diagram.)
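Since reg(p) is cut out by finitely many distance comparisons, membership is easy to test directly from the definition. A minimal sketch in Python (the function name is mine); it checks the defining inequalities rather than building the diagram itself:

```python
import math

def in_region(x, p, sites):
    # x lies in reg(p) iff p is a nearest site:
    # dist(x, p) <= dist(x, q) for all q among the sites.
    dp = math.dist(x, p)
    return all(dp <= math.dist(x, q) for q in sites)

sites = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(in_region((1.0, 1.0), sites[0], sites))   # True: (0,0) is the nearest site
print(in_region((3.0, 0.5), sites[0], sites))   # False: (4,0) is nearer
print(in_region((2.0, 0.0), sites[0], sites))   # True: on the common face with (4,0)
```

The last call illustrates that the regions are closed: a point equidistant from two sites belongs to both of their regions, which is why the regions induce a cell complex.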
Here is an example of the Voronoi diagram of a point set in the plane:

[Figure: the Voronoi diagram of a planar point set, clipped by a rectangle so that it fits into a finite page.]

The points of P are traditionally called the sites in the context of Voronoi diagrams.

5.7.1 Observation. Each region reg(p) is a convex polyhedron with at most |P|−1 facets. Indeed,

reg(p) = ⋂_{q ∈ P \ {p}} {x: dist(x, p) ≤ dist(x, q)}

is an intersection of |P|−1 half-spaces.

For d = 2, a Voronoi diagram of n points is a subdivision of the plane into n convex polygons (some of them are unbounded). It can be regarded as a drawing of a planar graph (with one vertex at infinity, say), and hence it has a linear combinatorial complexity: n regions, O(n) vertices, and O(n) edges.

In the literature the Voronoi diagram also appears under various other names, such as the Dirichlet tessellation.

Examples of applications. Voronoi diagrams have been reinvented and used in various branches of science. Sometimes the connections are surprising. For instance, in archaeology, Voronoi diagrams help study cultural influences. Here we mention a few applications, mostly algorithmic.

• ("Post office problem" or nearest neighbor searching) Given a point set P in the plane, we want to construct a data structure that finds the point of P nearest to a given query point x as quickly as possible. This problem arises directly in some practical situations or, more significantly, as a subroutine in more complicated problems. The query can be answered by determining the region of the Voronoi diagram of P containing x. For this problem (point location in a subdivision of the plane), efficient data structures are known; see, e.g., the book [dBvKOS97] or other introductory texts on computational geometry.

• (Robot motion planning) Consider a disk-shaped robot in the plane.
It should pass among a set P of point obstacles, getting from a given start position to a given target position and touching none of the obstacles. If such a passage is possible at all, the robot can always walk along the edges of the Voronoi diagram of P, except for the initial and final segments of the tour. This allows one to reduce the robot motion problem to a graph search problem: We define a subgraph of the Voronoi diagram consisting of the edges that are passable for the robot.

• (A nice triangulation: the Delaunay triangulation) Let P ⊂ R^2 be a finite point set. In many applications one needs to construct a triangulation of P (that is, to subdivide conv(P) into triangles with vertices at the points of P) in such a way that the triangles are not too skinny. Of course, for some sets, some skinny triangles are necessary, but we want to avoid them as much as possible. One particular triangulation that is usually very good, and provably optimal with respect to several natural criteria, is obtained as the dual graph to the Voronoi diagram of P. Two points of P are connected by an edge if and only if their Voronoi regions share an edge. If no 4 points of P lie on a common circle, then this indeed defines a triangulation, called the Delaunay triangulation^3 of P; see Exercise 5. The definition extends to point sets in R^d in a straightforward manner.

• (Interpolation) Suppose that f: R^2 → R is some smooth function whose values are known to us only at the points of a finite set P ⊂ R^2. We would like to interpolate f over the whole polygon conv(P). Of course, we cannot really tell what f looks like outside P, but still we want a reasonable interpolation rule that provides a nice smooth function with the given values at P. Multidimensional interpolation is an extensive semiempirical discipline, which we do not seriously consider here; we explain only one elegant method based on Voronoi diagrams.
To compute the interpolated value at a point x ∈ conv(P), we construct the Voronoi diagram of P, and we overlay it with the Voronoi diagram of P ∪ {x}.

^3 Being a transcription from Russian, the spelling of Delaunay's name varies in the literature. For example, in the crystallography literature he is usually spelled "Delone."

The region of the new point x cuts off portions of the regions of some of the old points. Let w_p be the area of the part of reg(p) in the Voronoi diagram of P that belongs to reg(x) after inserting x. The interpolated value f(x) is

f(x) = ∑_{p∈P} (w_p / ∑_{q∈P} w_q) f(p).

An analogous method can be used in higher dimensions, too.

Relation of Voronoi diagrams to convex polyhedra. We now show that Voronoi diagrams in R^d correspond to certain convex polyhedra in R^{d+1}. First we define the unit paraboloid in R^{d+1}:

U = {x ∈ R^{d+1}: x_{d+1} = x_1^2 + x_2^2 + ⋯ + x_d^2}.

For d = 1, U is a parabola in the plane. In the sequel, let us imagine the space R^d as the hyperplane x_{d+1} = 0 in R^{d+1}. For a point p = (p_1, ..., p_d) ∈ R^d, let e(p) denote the hyperplane in R^{d+1} with equation

x_{d+1} = 2p_1 x_1 + ⋯ + 2p_d x_d − (p_1^2 + ⋯ + p_d^2).

Geometrically, e(p) is the hyperplane tangent to the paraboloid U at the point u(p) = (p_1, p_2, ..., p_d, p_1^2 + ⋯ + p_d^2) lying vertically above p. It is perhaps easier to remember this geometric definition of e(p) and derive its equation by differentiation when needed. On the other hand, in the forthcoming proof we start out from the equation of e(p), and as a by-product, we will see that e(p) is the tangent to U at u(p) as claimed.

5.7.2 Proposition. Let p, x ∈ R^d be points and let u(x) be the point of U vertically above x. Then u(x) lies above the hyperplane e(p) or on it, and the vertical distance of u(x) to e(p) is δ^2, where δ = dist(x, p).

[Figure: the paraboloid U, the tangent hyperplane e(p) at u(p), and the vertical segment of length δ^2 between e(p) and u(x).]

Proof. We just substitute into the equations of U and of e(p). The x_{d+1}-coordinate of u(x) is x_1^2 + ⋯ + x_d^2, while the x_{d+1}-coordinate of the point
of e(p) above x is 2p_1 x_1 + ⋯ + 2p_d x_d − p_1^2 − ⋯ − p_d^2. The difference is (x_1 − p_1)^2 + ⋯ + (x_d − p_d)^2 = δ^2.

Let e(p)⁺ denote the half-space lying above the hyperplane e(p). Consider an n-point set P ⊂ R^d. By Proposition 5.7.2, x ∈ reg(p) holds if and only if e(p) is vertically closest to U at x among all e(q), q ∈ P. Here is what we have derived:

5.7.3 Corollary. The Voronoi diagram of P is the vertical projection of the facets of the polyhedron ⋂_{p∈P} e(p)⁺ onto the hyperplane x_{d+1} = 0.

Here is an illustration for a planar Voronoi diagram:

[Figure: the tangent hyperplanes e(p) for a planar point set P; the facets of their upper envelope project vertically onto the Voronoi diagram of P.]

5.7.4 Corollary. The maximum total number of faces of all regions of the Voronoi diagram of an n-point set in R^d is O(n^⌈d/2⌉).

Proof. We know that the combinatorial complexity of the Voronoi diagram equals the combinatorial complexity of an H-polyhedron with at most n facets in R^{d+1}. By intersecting this H-polyhedron with a large simplex we can obtain a bounded polytope with at most n+d+2 facets, and we have not decreased the number of faces compared to the original H-polyhedron. Then the dual version of the asymptotic upper bound theorem (Theorem 5.5.2) implies that the total number of faces is O(n^⌈d/2⌉), since ⌊(d+1)/2⌋ = ⌈d/2⌉.

The convex polyhedra in R^{d+1} obtained from Voronoi diagrams in R^d by the above construction are rather special, and so a lower bound for the combinatorial complexity of convex polytopes cannot be automatically transferred to Voronoi diagrams. But it turns out that the number of vertices of a Voronoi diagram on n points in R^d can really be of order n^⌈d/2⌉ (Exercise 2).

Let us remark that the trick used for transforming Voronoi diagrams to convex polyhedra is an example of a more general technique, called linearization or Veronese mapping, which will be discussed a little more in
This method sometimes allows us to convert a problem about algebraic curves or surfaces of bounded degree to a problem about k-flats in a suitable higher-dimensional space. The farthest-point Voronoi diagram. The projection of the H-poly­ hedron npEP f(p )0P, where ')'0p denotes the half-space Opposite to !, forms the farthest-neighbor Voronoi diagram, in which each point p E P is assigned the regions of points for which it is the farthest point. It can be shown that all nonempty regions of this diagram are unbounded and they correspond precisely to the points appearing on the surface of conv(P). Bibliography and remarks. The concept of Voronoi diagrams in­ dependently emerged in various fields of science, for example as the medial axis transform in biology and physiology, the Wigner-Seitz zones in chemistry and physics, the domains of action in crystallo­ graphy, and the Thiessen polygons in meteorology and geography. Ap­ parently, the earliest documented reference to Voronoi diagrams is a picture in the famous Principia Philosopiae by Descartes from 1644 (that picture actually seems to show a power diagram, a generalization of the Voronoi diagram to sites with different strengths of influence). Mathematically, Voronoi diagrams were first introduced by Dirichlet [Dir50] and by Voronoi [VorOS] for the investigation of quadratic forms. For more information on the interesting history and a surprising va­ riety of applications we refer to several surveys: Aurenhammer and Klein (AKOO], Aurenhammer [Aur91], and the book Okabe, Boots, and Sugihara [OBS92]. Every computational geometry textbook also has at least a chapter devoted to Voronoi diagrams, and most papers on this subject appear in computational geometry. 
The Delaunay triangulation (or, more correctly, the Delaunay tessellation, since it need not be a triangulation in general) was first considered by Voronoi as the dual to the Voronoi diagram, and later by Delaunay [Del34] with the definition given in Exercise 5(b) below. The Delaunay triangulation of a planar point set P optimizes several quality measures among all triangulations of P: It maximizes the minimum angle occurring in any triangle, minimizes the maximum circumradius of the triangles, maximizes the sum of inradii, and so on (see [AK00] for references). Such optimality properties can usually be proved by local flipping. We consider an arbitrary triangulation T of a given finite P ⊂ R^2 (say with no 4 cocircular points). If there is a 4-point Q ⊂ P such that conv(Q) is a quadrilateral triangulated by two triangles of T, but in such a way that these two triangles are not the Delaunay triangulation of Q, then the diagonal of Q can be flipped:

[Figure: a convex quadrilateral triangulated in the two possible ways; one diagonal is not locally Delaunay, the other is locally Delaunay.]

It can be shown that every sequence of such local flips is finite and finishes with the Delaunay triangulation of P (Exercise 7). This procedure has an analogue in higher dimensions, where it gives a simple and practically successful algorithm for computing Delaunay triangulations (and Voronoi diagrams); see, e.g., Edelsbrunner and Shah [ES96].

Generalizations of Voronoi diagrams. The example in the text with robot motion planning, as well as other applications, motivates various notions of generalized Voronoi diagrams. First, instead of the Euclidean distance, one can take various other distance functions, say the ℓ_p-metrics. Second, instead of the spheres of influence of points, we can consider the spheres of influence of other sites, such as disjoint polygons (this is what we get if we have a circular robot moving amidst polygonal obstacles).
We do not attempt to survey the numerous results concerning such generalizations, again referring to [AK00]. Results on the combinatorial complexity of Voronoi diagrams under non-Euclidean metrics and/or for nonpoint sites will be mentioned in the notes to Section 7.7.

In another, very general, approach to Voronoi diagrams, one takes the Voronoi diagram induced by two objects as a primitive notion. So for every two objects we are given a partition of space into two regions separated by a bisector, and Voronoi diagrams for more than two objects are built using the 2-partitions for all pairs. If one postulates a few geometric properties of the bisectors, one gets a reasonable theory of Voronoi diagrams (the so-called abstract Voronoi diagrams), including efficient algorithms. So, for example, we do not even need a notion of distance at this level of generality. Abstract Voronoi diagrams (in the plane) were suggested by Klein [Kle89].

A geometrically significant generalization of the Euclidean Voronoi diagram is the power diagram: Each point p ∈ P is assigned a real weight w(p), and

reg(p) = {x ∈ R^d: ‖x − p‖^2 − w(p) ≤ ‖x − q‖^2 − w(q) for all q ∈ P}.

While Voronoi diagrams in R^d are projections of certain convex polyhedra in R^{d+1}, the projection into R^d of every intersection of finitely many nonvertical upper half-spaces in R^{d+1} is a power diagram. Moreover, a hyperplane section of a power diagram is again a power diagram. Several other generalized Voronoi diagrams in R^d (for example, with multiplicative weights of the sites) can be obtained by intersecting a suitable power diagram in R^{d+1} with a simple surface and projecting into R^d, which yields fast algorithms; see Aurenhammer and Imai [AI88].

Another generalization is provided by higher-order Voronoi diagrams.
The kth-order Voronoi diagram of a finite point set P assigns to each k-point subset T ⊆ P the region reg(T) consisting of all x ∈ R^d for which the points of T are the k nearest neighbors of x in P. The usual Voronoi diagram arises for k = 1, and the farthest-point Voronoi diagram for k = |P| − 1. The kth-order Voronoi diagram of P ⊂ R^d is the projection of the facets at level k in the arrangement of the hyperplanes e(p), p ∈ P (see Chapter 6 for these notions). Lee [Lee82] proved that the kth-order Voronoi diagram of n points in the plane has combinatorial complexity O(k(n−k)); this is better than the maximum possible complexity of level k in an arrangement of n arbitrary planes in R^3.

Applications of Voronoi diagrams are too numerous to be listed here, and we add only a few remarks to those already mentioned in the text. Using point location in Voronoi diagrams as in the post office problem, several basic computational problems in the plane can be solved efficiently, such as finding the closest pair in a point set or the largest disk contained in a given polygon and not containing any of the given points.

Besides providing good triangulations, the Delaunay triangulation contains several other interesting graphs as subgraphs, such as a minimum spanning tree of a given point set (Exercise 6). In the plane, this leads to an O(n log n) algorithm for the minimum spanning tree. In R^3, subcomplexes of the Delaunay triangulation, the so-called α-complexes, have been successfully used in molecular modeling (see, e.g., Edelsbrunner [Ede98]); they allow one to quickly answer questions such as, "how many tunnels and voids are there in the given molecule?"

Robot motion planning using Voronoi diagrams (or, more generally, the retraction approach, where the whole free space for the robot is replaced by some suitable low-dimensional skeleton) was first considered by Ó'Dúnlaing and Yap [OY85].
Algorithmic motion planning is an extensive discipline with innumerable variants of the problem. For a brief introduction from the computational-geometric point of view see, e.g., [dBvKOS97]; among several monographs we mention Laumond and Overmars [LO96] and Latombe [Lat91].

The spatial interpolation of functions using Voronoi diagrams was considered by Sibson [Sib81].

Exercises

1. Prove that the region reg(p) of a point p in the Voronoi diagram of a finite point set P ⊂ R^d is unbounded if and only if p lies on the surface of conv(P).
2. (a) Show that the Voronoi diagram of the 2n-point set {(i, 0, 0): i = 1, 2, ..., n} ∪ {(0, 1, j): j = 1, 2, ..., n} in R^3 has Ω(n^2) vertices.
(b) Let d = 2k+1 be odd, let e_1, ..., e_d be the vectors of the standard orthonormal basis in R^d, and let e_0 stand for the zero vector. For i = 0, 1, ..., k and j = 1, 2, ..., n, let p_{i,j} = e_{2i} + j·e_{2i+1}. Prove that for every choice of j_0, j_1, ..., j_k ∈ {1, 2, ..., n}, there is a point in R^d for which the nearest points among the p_{i,j} are exactly p_{0,j_0}, p_{1,j_1}, ..., p_{k,j_k}. Conclude that the Voronoi diagram of the p_{i,j} has combinatorial complexity Ω(n^{k+1}) = Ω(n^⌈d/2⌉).
3. (Voronoi diagram of flats) Let ε_1, ..., ε_{d−1} be small distinct positive numbers, and for i = 1, 2, ..., d−1 and j = 1, 2, ..., n, let F_{i,j} be the (d−2)-flat {x ∈ R^d: x_i = j, x_d = ε_i}. For every choice of j_1, j_2, ..., j_{d−1} ∈ {1, 2, ..., n}, find a point in R^d for which the nearest sites (under the Euclidean distance) among the F_{i,j} are exactly F_{1,j_1}, F_{2,j_2}, ..., F_{d−1,j_{d−1}}. Conclude that the Voronoi diagram of the F_{i,j} has combinatorial complexity Ω(n^{d−1}). This example is from Aronov [Aro00].
4. For a finite point set in the plane, define the farthest-point Voronoi diagram as indicated in the text, verify the claimed correspondence with a convex polyhedron in R^3, and prove that all nonempty regions are unbounded.
5.
(Delaunay triangulation) Let P be a finite point set in the plane with no 3 points collinear and no 4 points cocircular.
(a) Prove that the dual graph of the Voronoi diagram of P, where two points p, q ∈ P are connected by a straight edge if and only if the boundaries of reg(p) and reg(q) share a segment, is a plane graph where the outer face is the complement of conv(P) and every inner face is a triangle.
(b) Define a graph on P as follows: Two points p and q are connected by an edge if and only if there exists a circular disk with both p and q on the boundary and with no point of P in its interior. Prove that this graph is the same as in (a), and so we have an alternative definition of the Delaunay triangulation.

6. (Delaunay triangulation and minimum spanning tree) Let P ⊂ R^2 be a finite point set with no 3 points collinear and no 4 cocircular. Let T be a spanning tree of minimum total edge length in the complete graph with the vertex set P, where the length of an edge is just its Euclidean length. Prove that all edges of T are also edges of the Delaunay triangulation of P.

7. (Delaunay triangulation by local flipping) Let P ⊂ R^2 be an n-point set with no 3 points collinear and no 4 cocircular. Let T be an arbitrary triangulation of conv(P). Suppose that triangulations T1, T2, . . . are obtained from T by successive local flips as described in the notes above (in each step, we select a convex quadrilateral in the current triangulation partitioned into two triangles in a way that is not the Delaunay triangulation of the four vertices, and we flip the diagonal of the quadrilateral).
(a) Prove that the sequence of triangulations is always finite (and give as good an estimate for its maximum length as you can).
(b) Show that if no local flipping is possible, then the current triangulation is the Delaunay triangulation of P.

8. Consider a finite set of disjoint segments in the plane.
What types of curves may bound the regions in their Voronoi diagram? The region of a given segment is the set of points for which this segment is a closest one.

9. Let A and B be two finite point sets in the plane. Choose a0 ∈ A arbitrarily. Having defined a0, . . . , ai and b1, . . . , bi, define b_{i+1} as a point of B \ {b1, . . . , bi} nearest to ai, and a_{i+1} as a point of A \ {a0, . . . , ai} nearest to b_{i+1}. Continue until one of the sets becomes empty. Prove that at least one of the pairs (ai, b_{i+1}), (b_{i+1}, a_{i+1}), i = 0, 1, 2, . . . , realizes the shortest distance between a point of A and a point of B. (This was used by Eppstein [Epp95] in some dynamical geometric algorithms.)

10. (a) Let C be any circle in the plane x3 = 0 (in R^3). Show that there exists a half-space h such that C is the vertical projection of the set h ∩ U onto x3 = 0, where U = {x ∈ R^3: x3 = x1^2 + x2^2} is the unit paraboloid.
(b) Consider n arbitrary circular disks K1, . . . , Kn in the plane. Show that there exist only O(n) intersections of their boundaries that lie inside no other Ki (this means that the boundary of the union of the Ki consists of O(n) circular arcs).

11. Define a "spherical polytope" as an intersection of n balls in R^3 (such an object has facets, edges, and vertices similar to an ordinary convex polytope).
(a) Show that any such spherical polytope in R^3 has O(n^2) faces. You may assume that the spheres are in general position.
(b) Find an example of an intersection of n balls having quadratically many vertices.
(c) Show that the intersection of n unit balls has O(n) complexity only.

6 Number of Faces in Arrangements

Arrangements of lines in the plane and their higher-dimensional generalization, arrangements of hyperplanes in R^d, are a basic geometric structure whose significance is comparable to that of convex polytopes.
In fact, arrangements and convex polytopes are quite closely related: A cell in a hyperplane arrangement is a convex polyhedron, and conversely, each hyperplane arrangement in R^d corresponds canonically to a convex polytope in R^{d+1} of a special type, the so-called zonotope. But as is often the case with different representations of the same mathematical structure, convex polytopes and arrangements of hyperplanes emphasize different aspects of the structure and lead to different questions.

Whenever we have a problem involving a finite point set in R^d and partitions of the set by hyperplanes, we can use geometric duality, and we obtain a problem concerning a hyperplane arrangement. Arrangements appear in many other contexts as well; for example, some models of molecules give rise to arrangements of spheres in R^3, and automatic planning of the motion of a robot among obstacles involves, implicitly or explicitly, arrangements of surfaces in higher-dimensional spaces.

Arrangements of hyperplanes have been investigated for a long time from various points of view. In several classical areas of mathematics one is mainly interested in topological and algebraic properties of the whole arrangement. Hyperplane arrangements are related to such marvelous objects as Lie algebras, root systems, and Coxeter groups. In the theory of oriented matroids one studies the systems of sign vectors associated to hyperplane arrangements in an abstract axiomatic setting. We are going to concentrate on estimating the combinatorial complexity (number of faces) in arrangements and neglect all the other directions.

General probabilistic techniques for bounding the complexity of geometric configurations constitute the second main theme of this chapter. These methods have been successful in attacking many more problems than can even be mentioned in this book.
We begin with a simple but powerful sampling argument in Section 6.3 (somewhat resembling the proof of the crossing number theorem), add more tricks in Section 6.4, and finish with quite a sophisticated method, demonstrated on a construction of optimal 1/r-cuttings, in Section 6.5.

6.1 Arrangements of Hyperplanes

We recall from Section 4.1 that for a finite set H of lines in the plane, the arrangement of H is a partition of the plane into relatively open convex subsets, the faces of the arrangement. In this particular case, the faces are the vertices (0-faces), the edges (1-faces), and the cells (2-faces).1 An arrangement of a finite set H of hyperplanes in R^d is again a partition of R^d into relatively open convex faces. Their dimensions are 0 through d. As in the plane, the 0-faces are called vertices, the 1-faces edges, and the d-faces cells. Sometimes the (d−1)-faces are referred to as facets.

The cells are the connected components of R^d \ ∪H. To obtain the facets, we consider the (d−1)-dimensional arrangements induced in the hyperplanes of H by their intersections with the other hyperplanes. That is, for each h ∈ H we take the connected components of h \ ∪{h′: h′ ∈ H, h′ ≠ h}. To obtain k-faces, we consider every possible k-flat L defined as the intersection of some d−k hyperplanes of H. The k-faces of the arrangement lying within L are the connected components of L \ ∪(H \ H_L), where H_L = {h ∈ H: L ⊂ h}.

Remark on sign vectors. A face of the arrangement of H can be described by its sign vector. First we need to fix the orientation of each hyperplane h ∈ H. Each h ∈ H partitions R^d into three regions: h itself and the two open half-spaces determined by it. We choose one of these open half-spaces as positive and denote it by h⊕, and we let the other one be negative, denoted by h⊖. Let F be a face of the arrangement of H.
We define the sign vector of F (with respect to the chosen orientations of the hyperplanes) as σ(F) = (σ_h: h ∈ H), where

σ_h = +1 if F ⊂ h⊕, σ_h = 0 if F ⊂ h, and σ_h = −1 if F ⊂ h⊖.

The sign vector determines the face F, since we have F = ∩_{h ∈ H} h^{σ_h}, where h^0 = h, h^{+1} = h⊕, and h^{−1} = h⊖. The following drawing shows the sign vectors of the marked faces in a line arrangement. Only the signs are shown, and the positive half-planes lie above their lines. Of course, not all possible sign vectors correspond to nonempty faces. For n lines, there are 3^n sign vectors but only O(n^2) faces, as we will derive below.

1 This terminology is not unified in the literature. What we call faces are sometimes referred to as cells (0-cells, 1-cells, and 2-cells).

Counting the cells in a hyperplane arrangement. We want to count the maximum number of faces in an arrangement of n hyperplanes in R^d. As we will see, this is much simpler than the similar task for convex polytopes! If a set H of hyperplanes is in general position, which means that the intersection of every k hyperplanes is (d−k)-dimensional, k = 2, 3, . . . , d+1, the arrangement of H is called simple. For |H| ≥ d+1 it suffices to require that every d hyperplanes intersect at a single point and no d+1 have a common point.

Every d-tuple of hyperplanes in a simple arrangement determines exactly one vertex, and so a simple arrangement of n hyperplanes has exactly (n choose d) vertices. We now calculate the number of cells; it turns out that the order of magnitude is also n^d for d fixed.

6.1.1 Proposition. The number of cells (d-faces) in a simple arrangement of n hyperplanes in R^d equals

Φ_d(n) = (n choose 0) + (n choose 1) + · · · + (n choose d).   (6.1)

First proof. We proceed by induction on the dimension d and the number of hyperplanes n. For d = 1 we have a line and n points in it. These divide the line into n+1 one-dimensional pieces, and formula (6.1) holds.
(The formula is also correct for n = 0 and all d ≥ 1, since the whole space, with no hyperplanes, is a single cell.) Now suppose that we are in dimension d, we have n−1 hyperplanes, and we insert another one. Since we assume general position, the n−1 previous hyperplanes divide the newly inserted hyperplane h into Φ_{d−1}(n−1) cells by the inductive hypothesis. Each such (d−1)-dimensional cell within h partitions one d-dimensional cell into exactly two new cells. The total increase in the number of cells caused by inserting h is thus Φ_{d−1}(n−1), and so

Φ_d(n) = Φ_d(n−1) + Φ_{d−1}(n−1).

Together with the initial conditions (for d = 1 and for n = 0), this recurrence determines all values of Φ_d(n), and so it remains to check that formula (6.1) satisfies the recurrence. We have

Φ_d(n−1) + Φ_{d−1}(n−1) = (n−1 choose 0) + [(n−1 choose 1) + (n−1 choose 0)] + [(n−1 choose 2) + (n−1 choose 1)] + · · · + [(n−1 choose d) + (n−1 choose d−1)]
= (n choose 0) + (n choose 1) + (n choose 2) + · · · + (n choose d) = Φ_d(n). □

Second proof. This proof looks simpler, but a complete rigorous presentation is perhaps somewhat more demanding. We proceed by induction on d, the case d = 0 being trivial. Let H be a set of n hyperplanes in R^d in general position; in particular, we assume that no hyperplane of H is horizontal and no two vertices of the arrangement have the same vertical level (x_d-coordinate). Let g be an auxiliary horizontal hyperplane lying below all the vertices. A cell of the arrangement of H either is bounded from below, and in this case it has a unique lowest vertex, or is not bounded from below, and then it intersects g. The number of cells of the former type is the same as the number of vertices, which is (n choose d). The cells of the latter type correspond to the cells in the (d−1)-dimensional arrangement induced within g by the hyperplanes of H, and their number is thus Φ_{d−1}(n). □

What is the number of faces of the intermediate dimensions 1, 2, . . . , d−1 in a simple arrangement of n hyperplanes?
This is not difficult to calculate using Proposition 6.1.1 (Exercise 1); the main conclusion is that the total number of faces is O(n^d) for a fixed d.

What about nonsimple arrangements? It turns out that a simple arrangement of n hyperplanes maximizes the number of faces of each dimension among arrangements of n hyperplanes. This can be verified by a perturbation argument, which is considerably simpler than the one for convex polytopes (Lemma 5.5.4), and which we omit.

Bibliography and remarks. The paper of Steiner [Ste26] from 1826 gives formulas for the number of faces in arrangements of lines, circles, planes, and spheres. Of course, his results have been extended in many ways since then (see, e.g., Zaslavsky [Zas75]). An early monograph on arrangements is Grünbaum [Grü72]. The questions considered in the subsequent sections, such as the combinatorial complexity of certain parts of arrangements, have been studied mainly in the last twenty years or so. A recent survey discussing a large part of the material of this chapter and providing many more facts and references is Agarwal and Sharir [AS00a].

The algebraic and topological investigation of hyperplane arrangements (both in real and complex spaces) is reflected in the book Orlik and Terao [OT91]. Let us remark that in these areas, one usually considers central arrangements of hyperplanes, where all the hyperplanes pass through the origin (and so they are linear subspaces of the underlying vector space). If such a central arrangement in R^d is intersected with a generic hyperplane not passing through the origin, one obtains a (d−1)-dimensional "affine" arrangement such as those considered by us. The correspondence is bijective, and so these two views of arrangements are not very different, but for many results, the formulation with central arrangements is more elegant.

The correspondence of arrangements to zonotopes is thoroughly explained in Ziegler [Zie94].
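Formula (6.1) and the recurrence Φ_d(n) = Φ_d(n−1) + Φ_{d−1}(n−1) from the first proof are easy to check mechanically; a small sketch (the function name is ours):

```python
from math import comb

def phi(d, n):
    """Number of cells of a simple arrangement of n hyperplanes in R^d,
    formula (6.1): C(n,0) + C(n,1) + ... + C(n,d)."""
    return sum(comb(n, i) for i in range(d + 1))

# Recurrence from the first proof: phi_d(n) = phi_d(n-1) + phi_{d-1}(n-1).
for d in range(1, 6):
    for n in range(1, 30):
        assert phi(d, n) == phi(d, n - 1) + phi(d - 1, n - 1)

print(phi(1, 4))  # 4 points split a line into 5 pieces
print(phi(2, 3))  # 3 general-position lines cut the plane into 7 cells
```

The two printed sanity checks match the base case of the first proof and the familiar picture of three general-position lines in the plane.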
Exercises

1. (a) Count the number of faces of dimensions 1 and 2 for a simple arrangement of n planes in R^3.
(b) Express the number of k-faces in a simple arrangement of n hyperplanes in R^d.

2. Prove that the number of unbounded cells in an arrangement of n hyperplanes in R^d is O(n^{d−1}) (for a fixed d).

3. (a) Check that an arrangement of d or fewer hyperplanes in R^d has no bounded cell.
(b) Prove that an arrangement of d+1 hyperplanes in general position in R^d has exactly one bounded cell.

4. How many d-dimensional cells are there in the arrangement of the (d choose 2) hyperplanes in R^d with equations {x_i = x_j}, where 1 ≤ i < j ≤ d?

5. How many d-dimensional cells are there in the arrangement of the hyperplanes in R^d with the equations {x_i − x_j = 0}, {x_i − x_j = 1}, and {x_i − x_j = −1}, where 1 ≤ i < j ≤ d?

6. (Flags in arrangements)
(a) Let H be a set of n lines in the plane, and let V be the set of vertices of their arrangement. Prove that the number of pairs (v, h) with v ∈ V, h ∈ H, and v ∈ h, i.e., the number of incidences I(V, H), is bounded by O(n^2). (Note that this is trivially true for simple arrangements.)
(b) Prove that the maximum number of (d+1)-tuples (F0, F1, . . . , Fd) in an arrangement of n hyperplanes in R^d, where F_i is an i-dimensional face and F_{i−1} is contained in the closure of F_i, is O(n^d) (d fixed). Such tuples are sometimes called flags of the arrangement.

7. Let P = {p1, . . . , pn} be a point set in the plane. Let us say that points x, y have the same view of P if the points of P are visible in the same cyclic order from them. If rotating light rays emanate from x and from y, the points of P are lit in the same order by these rays. We assume that neither x nor y is in P and that neither of them can see two points of P in occlusion.
(a) Show that the maximum possible number of points with mutually distinct views of P is O(n^4).
(b) Show that the bound in (a) cannot be improved in general.

6.2 Arrangements of Other Geometric Objects

Arrangements can be defined not only for hyperplanes but also for other geometric objects. For example, what is the arrangement of a finite set H of segments in the plane? As in the case of lines, it is a decomposition of the plane into faces of dimension 0, 1, 2: the vertices, the edges, and the cells, respectively. The vertices are the intersections of the segments, the edges are the portions of the segments after removing the vertices, and the cells (2-faces) are the connected components of R^2 \ ∪H. (Note that the endpoints of the segments are not included among the vertices.) While the cells of line arrangements are convex polygons, those in arrangements of segments can be complicated regions, even with holes.

It is almost obvious that the total number of faces of the arrangement of n segments is at most O(n^2). What is the maximum number of edges on the boundary of a single cell in such an arrangement? This seemingly innocuous question is surprisingly difficult, and most of Chapter 7 revolves around it.

Let us now present the definition of the arrangement for arbitrary sets A1, A2, . . . , An ⊂ R^d. The arrangement is a subdivision of space into connected pieces again called the faces. Each face is an inclusion-maximal connected set that "crosses no boundary." More precisely, first we define an equivalence relation ≈ on R^d: We put x ≈ y whenever x and y lie in the same subcollection of the A_i, that is, whenever {i: x ∈ A_i} = {i: y ∈ A_i}. So for each I ⊂ {1, 2, . . . , n}, we have one possible equivalence class, namely {x ∈ R^d: x ∈ A_i ⟺ i ∈ I} (this is like a field in the Venn diagram of the A_i). But in typical geometric situations, most of the classes are empty. The faces of the arrangement of the A_i are the connected components of the equivalence classes.
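The equivalence classes in this definition are easy to explore numerically. The sketch below (our own illustration; all names are ours) samples a grid and collects the distinct index sets {i: x ∈ A_i} attained by three overlapping disks; the faces would then be the connected components of these classes:

```python
def venn_fields(sets_contains, n, xs, ys):
    """Collect the index sets {i : x in A_i} attained on a sample grid.
    `sets_contains[i]` is a predicate deciding membership in A_i."""
    fields = set()
    for x in xs:
        for y in ys:
            fields.add(frozenset(i for i in range(n)
                                 if sets_contains[i]((x, y))))
    return fields

# Three unit-radius open disks with pairwise (and triple) overlaps.
centers = [(0.0, 0.0), (1.2, 0.0), (0.6, 1.0)]
disks = [lambda p, c=c: (p[0] - c[0])**2 + (p[1] - c[1])**2 < 1.0
         for c in centers]
grid = [i / 20.0 for i in range(-40, 60)]   # [-2, 2.95] in steps of 0.05
classes = venn_fields(disks, 3, grid, grid)
print(sorted(map(sorted, classes)))
```

For these three disks all 2^3 = 8 "Venn fields" are nonempty, so all eight index sets appear in the output.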
The reader is invited to check that for both hyperplane arrangements and arrangements of segments this definition coincides with the earlier ones.

Arrangements of algebraic surfaces. Quite often one needs to consider arrangements of the zero sets of polynomials. Let p1(x1, x2, . . . , xd), . . . , pn(x1, x2, . . . , xd) be polynomials with real coefficients in d variables, and let Z_i = {x ∈ R^d: p_i(x) = 0} be the zero set of p_i. Let D denote the maximum of the degrees of the p_i; when speaking of the arrangement of Z1, . . . , Zn, one usually assumes that D is bounded by some (small) constant. Without a bound on D, even a single Z_i can have arbitrarily many connected components.

In many cases, the Z_i are algebraic surfaces, such as ellipsoids, paraboloids, etc., but since we are in the real domain, sometimes they need not look like surfaces at all. For example, the zero set of the polynomial p(x1, x2) = x1^2 + x2^2 consists of the single point (0, 0). Although it is sometimes convenient to think of the Z_i as surfaces, the results stated below apply to zero sets of arbitrary polynomials of bounded degree.

It is known that if both d and D are considered as constants, the maximum number of faces in the arrangement of Z1, Z2, . . . , Zn as above is at most O(n^d). This is one of the most useful results about arrangements, with many surprising applications (a few are outlined below and in the exercises).

In the literature one often finds a (formally weaker) version dealing with sign patterns of the polynomials p_i. A vector σ ∈ {−1, 0, +1}^n is called a sign pattern of p1, p2, . . . , pn if there exists an x ∈ R^d such that the sign of p_i(x) is σ_i, for all i = 1, 2, . . . , n. Trivially, the number of sign patterns for any n polynomials is at most 3^n. For d = 1, it is easy to see that the actual number of sign patterns is much smaller, namely at most 2nD + 1 (Exercise 1).
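The bound for d = 1 reflects the fact that n univariate polynomials of degree at most D have at most nD real roots in total, and a sign pattern can change only at a root. This can be observed by sampling (a sketch with our own names; sampling can miss some of the measure-zero patterns containing 0's, so it only undercounts and the upper bound still applies):

```python
def sign_patterns_1d(polys, lo=-10.0, hi=10.0, samples=100000):
    """Distinct sign patterns of univariate polynomials (coefficient
    lists, constant term first) observed on a dense sample of [lo, hi]."""
    def ev(c, x):
        return sum(a * x**k for k, a in enumerate(c))
    pats = set()
    for j in range(samples + 1):
        x = lo + (hi - lo) * j / samples
        pats.add(tuple((v > 0) - (v < 0) for v in (ev(c, x) for c in polys)))
    return pats

# n = 3 polynomials of degree D = 2 each.
polys = [[-1.0, 0.0, 1.0],   # x^2 - 1
         [0.0, -1.0, 1.0],   # x^2 - x
         [2.0, -3.0, 1.0]]   # x^2 - 3x + 2
n, D = 3, 2
pats = sign_patterns_1d(polys)
assert len(pats) <= 2 * n * D + 1   # the bound from Exercise 1
print(len(pats))
```

Here the polynomials have only 4 distinct roots between them, so far fewer than the 13 patterns permitted by the bound actually occur.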
It is not so easy to prove, but still true, that there are at most C(d, D) · n^d sign patterns in dimension d. This result is generally called the Milnor–Thom theorem (and it was apparently first proved by Oleinik and Petrovskii, which fits the usual pattern in the history of mathematics). Here is a more precise (and more recent) version of this result, where the dependence on D and d is specified quite precisely.

6.2.1 Theorem (Number of sign patterns). Let p1, p2, . . . , pn be d-variate real polynomials of degree at most D. The number of faces in the arrangement of their zero sets Z1, Z2, . . . , Zn ⊂ R^d, and consequently the number of sign patterns of p1, . . . , pn as well, is at most 2(2D)^d Σ_{i=0}^{d} 2^i (4n choose i). For n ≥ d ≥ 2, this expression is bounded by (50Dn/d)^d.

Proofs of these results are not included here because they would require at least one more chapter. They belong to the field of real algebraic geometry. The classical, deep, and extremely extensive field of algebraic geometry mostly studies algebraic varieties over algebraically closed fields, such as the complex numbers (and the questions of combinatorial complexity in our sense are not among its main interests). Real algebraic geometry investigates algebraic varieties and related concepts over the real numbers or other real-closed fields; the presence of ordering and the missing roots of polynomials makes its flavor distinctly different.

Arrangements of pseudolines. An arrangement of pseudolines is a natural generalization of an arrangement of lines. Lines are replaced by curves, but we insist that these curves behave, in a suitable sense, like lines: For example, no two of them intersect more than once. This kind of generalization is quite different from, say, arrangements of planar algebraic curves, and so it perhaps does not quite belong to the present section.
But besides mentioning pseudoline arrangements as a useful and interesting concept, we also need them for a (typical) example of an application of Theorem 6.2.1, and so we kill two birds with one stone by discussing them here.

An (affine) arrangement of pseudolines can be defined as the arrangement of a finite collection of curves in the plane that satisfy the following conditions:
(i) Each curve is x-monotone and unbounded in both directions; in other words, it intersects each vertical line in exactly one point.
(ii) Every two of the curves intersect in exactly one point and they cross at the intersection. (We do not permit "parallel" pseudolines, for they would complicate the definition unnecessarily.)2

The curves are called pseudolines, but while "being a line" is an absolute notion, "being a pseudoline" makes sense only with respect to a given collection of curves. Here is an example of a (simple) arrangement of 5 pseudolines.

Much of what we have proved for arrangements of lines is true for arrangements of pseudolines as well. This holds for the maximum number of vertices, edges, and cells, but also for more sophisticated results like the Szemerédi–Trotter theorem on the maximum number of incidences of m points and n lines; these results have proofs that do not use any properties of straight lines not shared by pseudolines.

One might be tempted to say that pseudolines are curves that behave topologically like lines, but as we will see below, in at least one sense this is

2 This "affine" definition is a little artificial, and we use it only because we do not want to assume the reader's familiarity with the topology of the projective plane.
In the literature one usually considers arrangements of pseudolines in the projective plane, where the definition is very natural: Each pseudoline is a closed curve whose removal does not disconnect the projective plane, and every two pseudolines intersect exactly once (which already implies that they cross at the intersection point). Moreover, one often adds the condition that the curves do not form a single pencil; i.e., not all of them have a common point, since otherwise, one would have to exclude the case of a pencil in the formulation of many theorems. But here we are not going to study pseudoline arrangements in any depth.

profoundly wrong. The correct statement is that every two of them behave topologically like two lines, but arrangements of pseudolines are more general than arrangements of lines.

We should first point out that there is no problem with the "local" structure of the pseudolines, since each pseudoline arrangement can be redrawn equivalently (in a sense defined precisely below) by polygonal lines, as a wiring diagram.

The difference between pseudoline arrangements and line arrangements is of a more global nature. The arrangement of 5 pseudolines drawn above can be realized by straight lines.

What is the meaning of "realization by straight lines"? To this end, we need a suitable notion of equivalence of two arrangements of pseudolines. There are several technically different possibilities; we again use an "affine" notion, one that is very simple to state but not the most common. Let H be a collection of n pseudolines. We number the pseudolines 1, 2, . . . , n in the order in which they appear on the left of the arrangement, say from the bottom to the top. For each i, we write down the numbers of the other pseudolines in the order they are encountered along the pseudoline i from left to right. For a simple arrangement we obtain a permutation π_i of {1, 2, . . .
, n} \ {i} for each i. For the arrangement in the pictures, we have π1 = (2, 3, 5, 4), π2 = (1, 5, 4, 3), π3 = (1, 5, 4, 2), π4 = (5, 1, 3, 2), and π5 = (4, 1, 3, 2). For a nonsimple arrangement, some of the π_i are linear quasiorderings, meaning that several consecutive numbers can be chunked together. We call two arrangements affinely isomorphic if they yield the same π1, . . . , πn, i.e., if each pseudoline meets the others in the same (quasi)order as the corresponding pseudoline in the other arrangement. Two affinely isomorphic pseudoline arrangements can be converted one to another by a suitable homeomorphism of the plane.3

An arrangement of pseudolines is stretchable if it is affinely isomorphic to an arrangement of straight lines.4 It turns out that all arrangements of 8 or fewer pseudolines are stretchable, but there exists a nonstretchable arrangement of 9 pseudolines. The proof of nonstretchability is based on the Pappus theorem in projective geometry, which states that if 8 straight lines intersect as in the drawing, then the points p, q, and r are collinear. By modifying this arrangement suitably, one can obtain a simple nonstretchable arrangement of 9 pseudolines as well.

Next, we show that most of the simple pseudoline arrangements are nonstretchable. The following construction shows that the number of isomorphism classes of simple arrangements of n pseudolines is at least 2^{Ω(n^2)}: the lines h1, . . . , hm and g1, . . . , gm, with m approximately n/3, form a regular grid, and each of the about n/3 pseudolines p_i in the middle passes near Ω(n) vertices of the grid; near each such vertex it can be routed either slightly below or slightly above, all these choices can be made independently, and this yields 2^{Ω(n^2)} nonisomorphic arrangements in total.

3 The more usual notion of isomorphism of pseudoline arrangements is defined for arrangements in the projective plane. The arrangement of H is isomorphic to the arrangement of H′ if there exists a homeomorphism of the projective plane onto itself mapping ∪H onto ∪H′.
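For straight lines, the sequences π_i from the affine-isomorphism definition can be computed directly by ordering the intersections along each line by x-coordinate. A sketch (the function and variable names are ours, not from the text):

```python
def crossing_orders(lines):
    """For lines y = a_i x + b_i (pairwise distinct slopes), return the
    permutations pi_i: the other lines in the order in which line i
    meets them, from left to right."""
    n = len(lines)
    orders = []
    for i, (ai, bi) in enumerate(lines):
        others = [j for j in range(n) if j != i]
        # x-coordinate of the intersection of lines i and j
        others.sort(key=lambda j: (lines[j][1] - bi) / (ai - lines[j][0]))
        orders.append([j + 1 for j in others])  # 1-based, as in the text
    return orders

# A simple arrangement of three lines: y = 0, y = x - 1, y = -x + 2.
lines = [(0.0, 0.0), (1.0, -1.0), (-1.0, 2.0)]
for i, pi in enumerate(crossing_orders(lines), start=1):
    print(i, pi)
```

Two line arrangements are then affinely isomorphic exactly when this function returns the same sequences for both (after the left-to-right numbering of the lines).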
To bound from above the number of nonisomorphic simple arrangements of n straight lines, write the i-th line as the graph of y = a_i·x + b_i; after renumbering we may assume a1 > a2 > · · · > an. The x-coordinate of the intersection ℓ_i ∩ ℓ_j is (b_j − b_i)/(a_i − a_j). To determine the ordering π_i of the intersections along ℓ_i, it suffices to know the ordering of the x-coordinates of these intersections, and this can be inferred from the signs of the polynomials p_ijk(a_i, b_i, a_j, b_j, a_k, b_k) = (b_i − b_j)(a_k − a_i) − (b_i − b_k)(a_j − a_i). So the number of nonisomorphic arrangements of n lines is no larger than the number of possible sign patterns of the O(n^3) polynomials p_ijk in the 2n variables a1, b1, . . . , an, bn, and Theorem 6.2.1 yields the upper bound of 2^{O(n log n)}. For large n, this is a negligible fraction of the total number of simple pseudoline arrangements. (Similar considerations apply to nonsimple arrangements as well.)

The problem of deciding the stretchability of a given pseudoline arrangement has been shown to be algorithmically difficult (at least NP-hard). One can easily encounter this problem when thinking about line arrangements and drawing pictures: What we draw by hand are really pseudolines, not lines, and even with the help of a ruler it may be almost impossible to decide experimentally whether a given arrangement can really be drawn with straight lines. But there are computational methods that can decide stretchability in reasonable time at least for moderate numbers of lines.

Bibliography and remarks. A comprehensive account of real algebraic geometry is Bochnak, Coste, and Roy [BCR98]. Among the many available introductions to the "classical" algebraic geometry we mention the lively book Cox, Little, and O'Shea [CLO92]. The original bounds on the number of sign patterns, less precise than Theorem 6.2.1 but still implying the O(n^d) bound for fixed d, were given independently by Oleinik and Petrovskii [OP49], Milnor [Mil64], and Thom [Tho65].
Warren [War68] proved that the number of d-dimensional cells in the arrangement as in Theorem 6.2.1, and consequently the number of sign patterns consisting of ±1's only, is at most 2(2D)^d Σ_{i=0}^{d} 2^i (n choose i). The extension to faces of all dimensions, and to sign patterns including 0's, was obtained by Pollack and Roy [PR93].

Sometimes we have polynomials in many variables, but we are interested only in sign patterns attained at points that satisfy some additional algebraic conditions. Such a situation is covered by a result of Basu, Pollack, and Roy [BPR96]: The number of sign patterns attained by n polynomials of degree at most D on a k-dimensional algebraic variety V ⊂ R^d, where V can be defined by polynomials of degree at most D, is at most (n choose k) · O(D)^d.

While bounding the number of sign patterns of multivariate polynomials appears complicated, there is a beautiful short proof of an almost tight bound on the number of zero patterns, due to Rónyai, Babai, and Ganapathy [RBG01], which we now sketch (in the simplest form, giving a slightly suboptimal result). A vector ζ ∈ {0, 1}^n is a zero pattern of d-variate polynomials p1, . . . , pn with coefficients in a field F if there exists an x = x(ζ) ∈ F^d with p_i(x) = 0 exactly for the i with ζ_i = 0. We show that if all the p_i have degree at most D, then the number of zero patterns cannot exceed (Dn+d choose d).

For each zero pattern ζ, let q_ζ be the polynomial Π_{i: ζ_i ≠ 0} p_i. We have deg q_ζ ≤ Dn. Let us consider the q_ζ as elements of the vector space L of all d-variate polynomials over F of degree at most Dn. Using the basis of L consisting of all monomials of degree at most Dn, we obtain dim L ≤ (Dn+d choose d). It remains to verify that the q_ζ are linearly independent (assuming that no p_i is identically 0). Suppose that Σ_ζ α_ζ q_ζ = 0 with α_ζ ∈ F not all 0.
Choose a zero pattern ξ with α_ξ ≠ 0 and with the largest possible number of 0's, and substitute x(ξ) into Σ_ζ α_ζ q_ζ. This yields α_ξ = 0, a contradiction.

Pseudoline arrangements. The founding paper is Levi [Lev26], where, among others, the nonstretchable arrangement of 9 lines drawn above was presented. A concise survey was written by Goodman [Goo97]. Pseudoline arrangements, besides being very natural, have also turned out to be a fruitful generalization of line arrangements. Some problems concerning line arrangements or point configurations were first solved only in the more general setting of pseudoline arrangements, and certain algorithms for line arrangements, the so-called topological sweep methods, use an auxiliary pseudoline to speed up the computation; see [Goo97]. Infinite families of pseudolines have been considered as well, and even topological planes, which are analogues of the projective plane but made of pseudolines. It is known that every finite configuration of pseudolines can be extended to a topological plane, and there are uncountably many distinct topological planes; see Goodman, Pollack, Wenger, and Zamfirescu [GPWZ94].

Oriented matroids. The possibility of representing each pseudoline arrangement by a wiring diagram makes it clear that a pseudoline arrangement can also be considered as a purely combinatorial object. The appropriate combinatorial counterpart of a pseudoline arrangement is called an oriented matroid of rank 3. More generally, similar to arrangements of pseudolines, one can define arrangements of pseudohyperplanes in R^d, and these are combinatorially captured by oriented matroids of rank d+1. Here the rank is one higher than the space dimension, because an oriented matroid of rank d is usually viewed as a combinatorial abstraction of a central arrangement of hyperplanes in R^d (with all hyperplanes passing through 0).
There are several different but equivalent definitions of an oriented matroid. We present a definition in the so-called covector form. An oriented matroid is a set V ⊆ {-1, 0, 1}^n that is symmetric (v ∈ V implies -v ∈ V), contains the zero vector, and satisfies the following two more complicated conditions:

• (Closed under composition) If u, v ∈ V, then u∘v ∈ V, where (u∘v)_i = u_i if u_i ≠ 0 and (u∘v)_i = v_i if u_i = 0.

• (Admits elimination) If u, v ∈ V and j ∈ S(u, v) = {i : u_i = -v_i ≠ 0}, then there exists w ∈ V such that w_j = 0 and w_i = (u∘v)_i for all i ∉ S(u, v).

The rank of an oriented matroid V is the largest r such that there is an increasing chain v_1 ≺ v_2 ≺ ··· ≺ v_r of elements v_i ∈ V, where u ≺ v means u ≠ v and u_i ≼ v_i for all i, in the partial order given by 0 ≺ 1 and 0 ≺ -1. At first sight, all this may look quite mysterious, but it becomes much clearer if one thinks of a basic example, where V is the set of sign vectors of all faces of a central arrangement of hyperplanes in R^d.

It turns out that every oriented matroid of rank 3 corresponds to an arrangement of pseudolines. More generally, Lawrence's representation theorem asserts that every oriented matroid of rank d comes from some central arrangement of pseudohyperplanes in R^d, and so the purely combinatorial notion of oriented matroid corresponds, essentially uniquely, to the topological notion of a (central) arrangement of pseudohyperplanes.^5

Oriented matroids are also naturally obtained from configurations of points or vectors. In the notation of Section 5.6 (Gale transform), if a is a sequence of n vectors in R^r, then both the sets sgn(LinVal(a)) and sgn(LinDep(a)) are oriented matroids in the sense of the above definition. The first one has rank r, and the second, rank n-r. We are not going to say much more about oriented matroids, referring to Ziegler [Zie94] for a quick introduction and to Björner, Las Vergnas, Sturmfels, White, and Ziegler [BVS+99] for a comprehensive account.
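The basic example mentioned above can be checked mechanically. The following sketch (a toy illustration, not from the text: the particular arrangement of four lines, the point sampling, and the tolerance EPS are all ad hoc choices) collects the sign vectors of the faces of a central line arrangement in R² and verifies the covector axioms for them:

```python
import itertools
import math

EPS = 1e-9

def sgn(t):
    return 0 if abs(t) < EPS else (1 if t > 0 else -1)

def covectors(normals):
    """Sign vectors (sgn(a.x))_a over sample points x, for the central
    arrangement of the lines {x : a.x = 0} in R^2."""
    pts = [(0.0, 0.0)]                        # the 0-face
    for k in range(200):                      # generic points in the 2-faces
        t = 2 * math.pi * (k + 0.123) / 200
        pts.append((math.cos(t), math.sin(t)))
    for (a, b) in normals:                    # one point on each ray (1-faces)
        pts += [(-b, a), (b, -a)]
    return {tuple(sgn(a * x + b * y) for (a, b) in normals) for (x, y) in pts}

def compose(u, v):
    return tuple(ui if ui != 0 else vi for ui, vi in zip(u, v))

normals = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -2.0)]
n = len(normals)
V = covectors(normals)

assert tuple([0] * n) in V                             # contains the zero vector
assert all(tuple(-c for c in v) in V for v in V)       # symmetric
assert all(compose(u, v) in V for u in V for v in V)   # closed under composition
for u, v in itertools.product(V, V):                   # admits elimination
    S = [i for i in range(n) if u[i] == -v[i] != 0]
    for j in S:
        assert any(w[j] == 0 and
                   all(w[i] == compose(u, v)[i] for i in range(n) if i not in S)
                   for w in V)
print("all covector axioms hold;", len(V), "covectors")
```

For these 4 lines through the origin the set V has 17 elements: the zero vector, 8 rays, and 8 sectors, matching the face count of the arrangement.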
Stretchability. The following results illustrate the surprising difficulty of the stretchability problem for pseudoline arrangements. They are analogous to the statements about realizability of 4-dimensional convex polytopes mentioned in Section 5.3, and they were actually found much earlier.

^5 The correspondence need not really be one-to-one. For example, the oriented matroids of two projectively isomorphic pseudoline arrangements agree only up to reorientation.

Certain (simple) stretchable arrangements of n pseudolines require coefficients with 2^{Ω(n)} digits in the equations of the lines, in every straight-line realization (Goodman, Pollack, and Sturmfels [GPS90]). Deciding the stretchability of a given pseudoline arrangement is NP-hard (Shor [Sho91] has a relatively simple proof), and in fact, it is polynomially equivalent to the problem of solvability of a system of polynomial inequalities with integer coefficients. This follows from results of Mnëv, published in Russian in 1985 (proofs were only sketched; see [Mne89] for an English version). This work went unnoticed in the West for some time, and so some of the results were rediscovered by other authors.

Although detailed proofs of such theorems are technically demanding, the principle is rather simple. Given two real numbers, suitably represented by geometric quantities, one can produce their sum and their product by classical geometric constructions by ruler. (Since ruler constructions are invariant under projective transformations, the numbers are represented as cross-ratios.) By composing such constructions, one can express the solvability of p(x_1, ..., x_n) = 0, for a given n-variate polynomial p with integer coefficients, by the stretchability of a suitable arrangement in the projective plane. Dealing with inequalities and passing to simple arrangements is somewhat more complicated, but the idea is similar.
Practical algorithms for deciding stretchability have been studied extensively by Bokowski and Sturmfels [BS89] and by Richter-Gebert (see, e.g., [RG99]).

Mnëv [Mne89] was mainly interested in the realization spaces of arrangements. Let H be a fixed stretchable arrangement. Each straight-line arrangement H' affinely isomorphic to H can be represented by a point in R^{2n}, with the 2n coordinates specifying the coefficients in the equations of the lines of H'. Considering all possible H' for a given H, we obtain a subset of R^{2n}. For some time it was conjectured that this set, the realization space of H, has to be path-connected, which would mean that one straight-line realization could be converted to any other by a continuous motion while retaining the affine isomorphism type.^6 Not only is this false, but the realization space can have arbitrarily many components. In a suitable sense, it can even have arbitrary topological type: Whenever A ⊆ R^n is a set definable by a formula involving finitely many polynomial inequalities with integer coefficients, Boolean connectives, and quantifiers, there is a line arrangement whose realization space S is homotopy equivalent to A (Mnëv's main result actually talks about the stronger notion of stable equivalence of S and A; see, e.g., [Goo97] or [BVS+99]). Similar theorems were proved by Richter-Gebert for the realization spaces of 4-dimensional polytopes [RG99], [RG97].

^6 In fact, these questions have been studied mainly for the isomorphism of arrangements in the projective plane. There one has to be a little careful, since a mirror reflection can easily make the realization space disconnected, and so the mirror reflection (or the whole action of the general linear group) is factored out first.
These results for arrangements and polytopes can be regarded as instances of a vague but probably quite general principle: "Almost none of the combinatorially imaginable geometric configurations are geometrically realizable, and it is difficult to decide which ones are." Of course, there are exceptions, such as the graphs of 3-dimensional convex polytopes.

Encoding pseudoline arrangements. The lower bound 2^{Ω(n²)} for the number of isomorphism classes of arrangements of n pseudolines is asymptotically tight. Felsner [Fel97] found a nice encoding of such an arrangement by an n×n matrix of 0's and 1's, from which the isomorphism type can be reconstructed: The entry (i, j) of the matrix is 1 iff the jth leftmost crossing along the pseudoline number i is with a pseudoline whose number k is larger than i.

Exercises

1. Let p_1(x), ..., p_n(x) be univariate real polynomials of degree at most D. Check that the number of sign patterns of the p_i is at most 2nD+1.

2. (Intersection graphs) Let S be a set of n line segments in the plane. The intersection graph of S is the graph on n vertices, which correspond to the segments of S, with two vertices connected by an edge if and only if the corresponding two segments intersect.
(a) Prove that the graph obtained from K_5 by subdividing each edge exactly once is not the intersection graph of segments in the plane (and not even the intersection graph of any arcwise connected sets in the plane).
(b) Use Theorem 6.2.1 to prove that most graphs are not intersection graphs of segments: While the total number of graphs on n given vertices is 2^{binom(n,2)} = 2^{n²/2+O(n)}, only 2^{O(n log n)} of them are intersection graphs of segments (be careful about collinear segments!).
(c) Show that the number of (isomorphism classes of) intersection graphs of planar arcwise connected sets, and even of planar convex sets, on n vertices cannot be bounded by 2^{O(n log n)}.
(The right order of magnitude does not seem to be known for either of these classes of intersection graphs.)

3. (Number of combinatorially distinct simplicial convex polytopes) Use Theorem 6.2.1 to prove that for every dimension d ≥ 3 there exists c_d > 0 such that the number of combinatorial types of simplicial polytopes in R^d with n vertices is at most 2^{c_d n log n}. (Combinatorial equivalence means isomorphic face lattices; see Definition 5.3.4.) Such a result was proved by Alon [Alo86b] and by Goodman and Pollack [GP86].

4. (Sign patterns of matrices and rank) Let A be a real n×n matrix. The sign matrix σ(A) is the n×n matrix with entries in {-1, 0, +1} given by the signs of the corresponding entries in A.
(a) Check that A has rank at most q if and only if there exist n×q matrices U and V with A = UV^T.
(b) Estimate the number of distinct sign matrices of matrices of rank at most q using Theorem 6.2.1, and conclude that there exists an n×n matrix S containing only entries +1 and -1 such that any real matrix A with σ(A) = S has rank at least cn, with a suitable constant c > 0.
The result in (b) is from Alon, Frankl, and Rödl [AFR85] (for another application see [Mat96b]).

5. (Extendible pseudosegments) A family of pseudosegments is a finite collection S = {s_1, s_2, ..., s_n} of curves in the plane such that each s_i is x-monotone and its vertical projection on the x-axis is a closed interval, every two curves in the family intersect at most once, and whenever they intersect they cross (tangential contacts are not allowed). Such an S is called extendible if there is a family L = {ℓ_1, ..., ℓ_n} of pseudolines such that s_i ⊆ ℓ_i, i = 1, 2, ..., n.
(a) Find an example of a nonextendible family of 3 pseudosegments.
(b) Define an oriented graph G with vertex set S and with an edge from s_i to s_j if s_i ∩ s_j ≠ ∅ and s_i is below s_j to the left of their intersection.
Check that if S is extendible, then G is acyclic.
(c) Prove that, conversely, if G is acyclic, then S is extendible. Extend the pseudosegments one by one, maintaining the acyclicity of G.
(d) Let I_i be the projection of s_i on the x-axis. Show that if for every i < j, I_i ∩ I_j = ∅ or I_i ⊆ I_j or I_j ⊆ I_i, then G is acyclic, and hence S is extendible.
(e) Given a family of closed intervals I_1, ..., I_n ⊆ R, show that each interval in the family can be partitioned into at most O(log n) subintervals in such a way that the resulting family of subintervals has the property as in (d). This implies that an arbitrary family of n pseudosegments can be cut into a family of O(n log n) extendible pseudosegments.
These notions and results are from Chan [Cha00a].

6.3 Number of Vertices of Level at Most k

In this section and the next one we investigate the maximum number of faces in certain naturally defined portions of hyperplane arrangements. We consider only simple arrangements, and we omit the (usually routine) perturbation arguments showing that simple arrangements maximize the investigated quantity.

Let H be a finite set of hyperplanes in R^d, and assume that none of them is vertical, i.e., parallel to the x_d-axis. The level of a point x ∈ R^d is the number of hyperplanes of H lying strictly below x (the hyperplanes passing through x, if any, are not counted). This extends the definition for lines from Section 4.7. We are interested in the maximum possible number of vertices of level at most k in a simple arrangement of n hyperplanes. The following drawing shows the region of all points of level at most 2 in an arrangement of lines; we want to count the vertices lying in the region or on its boundary.
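In the plane the definition is easy to experiment with. A small sketch (a hypothetical example: five lines tangent to the parabola y = x²/2, which guarantees a simple arrangement), counting the vertices of level at most k with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

# Lines y = a*x + b, tangent to y = x^2/2 at x = 0, ..., 4:
# all slopes distinct, no three lines concurrent.
lines = [(Fraction(i), Fraction(-i * i, 2)) for i in range(5)]

def vertex(l1, l2):
    (a1, b1), (a2, b2) = l1, l2
    x = Fraction(b2 - b1, a1 - a2)      # x-coordinate of the crossing
    return x, a1 * x + b1

def level(pt):
    x, y = pt
    return sum(1 for (a, b) in lines if a * x + b < y)   # lines strictly below

verts = [vertex(l1, l2) for l1, l2 in combinations(lines, 2)]
counts = [sum(1 for v in verts if level(v) <= k) for k in range(4)]
print("vertices of level <= k, k = 0..3:", counts)   # -> [1, 3, 6, 10]
```

For these 5 lines the counts 1, 3, 6, 10 are well within the O(n(k+1)) planar bound proved in this section.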
The vertices of level 0 are the vertices of the cell lying below all the hyperplanes, and since this cell is the intersection of at most n half-spaces, it has at most O(n^⌊d/2⌋) vertices, by the asymptotic upper bound theorem (Theorem 5.5.2). From this result we derive a bound on the maximum number of vertices of level at most k. The elegant probabilistic technique used in the proof is generally applicable and probably more important than the particular result itself.

6.3.1 Theorem (Clarkson's theorem on levels). The total number of vertices of level at most k in an arrangement of n hyperplanes in R^d is at most O(n^⌊d/2⌋ (k+1)^⌈d/2⌉), with the constant of proportionality depending on d.

We are going to prove the theorem for simple arrangements only. The general case can be derived from the result for simple arrangements by a standard perturbation argument. But let us stress that the simplicity of the arrangement is essential for the forthcoming proof.

For all k (0 ≤ k ≤ n-d), the bound is tight in the worst case. To see this for k ≥ 1, consider a set of n/k hyperplanes such that the lower unbounded cell in their arrangement is a convex polyhedron with Ω((n/k)^⌊d/2⌋) vertices, and replace each of the hyperplanes by k very close parallel hyperplanes. Then each vertex of level 0 in the original arrangement gives rise to Ω(k^d) vertices of level O(k) in the new arrangement.

A much more challenging problem is to estimate the maximum possible number of vertices of level exactly k. This will be discussed in Chapter 11.

One of the main motivations that led to Clarkson's theorem on levels was an algorithmic problem. Given an n-point set P ⊆ R^d, we want to construct a data structure for fast answering of queries of the following type: For a query point x ∈ R^d and an integer t, report the t points of P that lie nearest to x.
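In its trivial brute-force form the query reads as follows (a toy sketch; the whole point of the data structures discussed here is to answer such queries much faster than by scanning all of P):

```python
def t_nearest(P, x, t):
    """Return the t points of P nearest to the query point x (brute force)."""
    return sorted(P, key=lambda p: sum((pi - xi) ** 2 for pi, xi in zip(p, x)))[:t]

P = [(0, 0), (3, 1), (1, 2), (5, 5), (2, 2)]
print(t_nearest(P, (1, 1), 2))   # the two points of P nearest to (1, 1)
```

This costs O(n log n) per query; the sophisticated solution trades preprocessing and memory (bounded via Clarkson's theorem) for much faster queries.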
Clarkson's theorem on levels is needed for bounding the maximum amount of memory used by a certain efficient algorithm. The connection is not entirely simple. It uses the lifting transform described in Section 5.7, relating the algorithmic problem in R^d to the complexity of levels in R^{d+1}, and we do not discuss it here.

Proof of Theorem 6.3.1 for d = 2. First we demonstrate this special case, for which the calculations are somewhat simpler. Let H be a set of n lines in general position in the plane. Let p denote a certain suitable number in the interval (0,1) whose value will be determined at the end of the proof. Let us imagine the following random experiment. We choose a subset R ⊆ H at random, by including each line h ∈ H into R with probability p, the choices being independent for distinct lines h. Let us consider the arrangement of R, temporarily discarding all the other lines, and let f(R) denote the number of vertices of level 0 in the arrangement of R. Since R is random, f is a random variable. We estimate the expectation of f, denoted by E[f], in two ways.

First, we have f(R) ≤ |R| for any specific set R, and hence E[f] ≤ E[|R|] = pn.

Now we estimate E[f] differently: We bound it from below using the number of vertices of the arrangement of H of level at most k. For each vertex v of the arrangement of H, we define an event A_v meaning "v becomes one of the vertices of level 0 in the arrangement of R." That is, A_v occurs if v contributes 1 to the value of f. The event A_v occurs if and only if the following two conditions are satisfied:

• Both lines determining the vertex v lie in R.
• None of the lines of H lying below v falls into R.

We deduce that Prob[A_v] = p²(1-p)^ℓ(v), where ℓ(v) denotes the level of the vertex v.
Let V be the set of all vertices of the arrangement of H, and let V_{≤k} ⊆ V be the set of vertices of level at most k. Then

E[f] = ∑_{v∈V} Prob[A_v] ≥ ∑_{v∈V_{≤k}} Prob[A_v] ≥ |V_{≤k}| · p²(1-p)^k.

Let us now set p = 1/(k+1); then (1-p)^k = (1 - 1/(k+1))^k ≥ e^{-1} for all k ≥ 1. Altogether we have derived

pn ≥ E[f] ≥ |V_{≤k}| · p²(1-p)^k ≥ |V_{≤k}| · e^{-1}/(k+1)².

This leads to |V_{≤k}| ≤ e·(k+1)²·pn = e·(k+1)·n ≤ 3(k+1)n. □

Proof for an arbitrary dimension. The idea of the proof is the same as above. As for the technical realization, there are at least two possible routes. The first is to retain the same probability distribution for selecting the sample R (picking each hyperplane of the given set H independently with probability p); in this case, most of the proof remains as before, but we need a lemma showing that E[|R|^⌊d/2⌋] = O((pn)^⌊d/2⌋). This is not difficult to prove, either from a Chernoff-type inequality or by elementary calculations (see Exercises 6.5.2 and 6.5.3). The second possibility, which we use here, is to change the probability distribution. Namely, we define an integer parameter r and choose a random r-element subset R ⊆ H, with all the binom(n, r) subsets being equally probable.

With this new way of choosing R, we proceed as in the proof for d = 2. We define f(R) as the number of vertices of level 0 in the arrangement of R and estimate E[f] in two ways. On the one hand, we have f(R) = O(r^⌊d/2⌋) for all R, and so E[f] = O(r^⌊d/2⌋). On the other hand, retaining the notation V for the set of all vertices of the arrangement of H and V_{≤k} for the vertices of level at most k, and writing P(ℓ) for the probability that a fixed vertex of level ℓ becomes a vertex of level 0 in the arrangement of R, we have

E[f] = ∑_{v∈V} Prob[A_v] ≥ |V_{≤k}| · P(k),

since P(ℓ) is nonincreasing in ℓ. Combining with E[f] = O(r^⌊d/2⌋) derived earlier, we obtain

|V_{≤k}| ≤ O(r^⌊d/2⌋) / P(k).   (6.2)

An appropriate value for the parameter r is r = ⌊n/(k+1)⌋. (This is not surprising, since in the previous proof, the size of R was concentrated around pn = n/(k+1).) Then we have the following estimate:

6.3.2 Lemma. Suppose that 1 ≤ k ≤ n/(2d) - 1, which implies 2d ≤ r ≤ n/2. Then P(k) ≥ c_d(k+1)^{-d} for a suitable c_d > 0 depending only on d.

We postpone the proof of the lemma a little and finish the proof of Theorem 6.3.1. We want to substitute the bound from the lemma into (6.2).
In order to meet the assumptions of the lemma, we must restrict the range of k somewhat. But if, say, k ≥ n/(2d), then the bound claimed by the theorem is of order n^d and thus trivial, and for k = 0 we already know that the theorem holds. So we may assume 1 ≤ k ≤ n/(2d) - 1, and we have

|V_{≤k}| ≤ O(r^⌊d/2⌋)/P(k) = O((n/(k+1))^⌊d/2⌋ · (k+1)^d) = O(n^⌊d/2⌋ (k+1)^⌈d/2⌉).

This establishes the theorem. □

Proof of Lemma 6.3.2. A fixed vertex v of level k becomes a vertex of level 0 of the arrangement of R if and only if R contains all d hyperplanes defining v and none of the k hyperplanes below v; hence

P(k) = binom(n-d-k, r-d) / binom(n, r)
     = [r(r-1)···(r-d+1) / n(n-1)···(n-d+1)] · [(n-d-k)(n-d-k-1)···(n-k-r+1) / (n-d)(n-d-1)···(n-r+1)]
     ≥ (r/2n)^d · (1 - k/(n-d))(1 - k/(n-d-1)) ··· (1 - k/(n-r+1))
     ≥ (r/2n)^d · (1 - k/(n-r+1))^r.

Now, r/n ≥ (n/(k+1) - 1)/n ≥ 1/(2(k+1)) (since k ≤ n/2, say), and 1 - k/(n-r+1) ≥ 1 - 2k/n (a somewhat finer calculation actually gives 1 - (k+1)/n here). Since 2k/n ≤ 1/2, we can use the inequality 1-x ≥ e^{-2x}, valid for x ∈ [0, 1/2], and we arrive at

P(k) ≥ (1/(4(k+1)))^d · e^{-4kr/n} ≥ 4^{-d} e^{-4} (k+1)^{-d},

since kr/n ≤ k/(k+1) ≤ 1. Lemma 6.3.2 is proved. □

Levels in arrangements. Besides vertices, we can consider all faces of level at most k, where the level of a face is the (common) level of all of its points. Using Theorem 6.3.1, it is not hard to prove that the number of all faces of level at most k in an arrangement of n hyperplanes is O(n^⌊d/2⌋ (k+1)^⌈d/2⌉).

In the literature one often speaks about the level k in an arrangement of hyperplanes, meaning the boundary of the region of all points of level at most k. This is a polyhedral surface, and each vertical line intersects it in exactly one point. It is a subcomplex of the arrangement; note that it may also contain faces of level different from k. In Section 4.7 we considered such levels in arrangements of lines.

Bibliography and remarks. Clarkson's theorem on levels was first proved in Clarkson [Cla88a] (see Clarkson and Shor [CS89] for the journal version). The elegant proof technique has many other applications, and we will meet it several more times, combined with additional tricks into sophisticated arguments.
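The two estimates of E[f] in the planar proof above are easy to watch numerically. A sketch (toy input: eight lines tangent to a parabola; the value of p, the seed, and the number of trials are arbitrary choices), comparing the sampled average of f with the exact sum ∑_v p²(1-p)^ℓ(v) and with the upper bound pn:

```python
import random
from fractions import Fraction
from itertools import combinations

# Tangents to y = x^2/2 at x = 0, ..., 7: a simple arrangement of 8 lines.
lines = [(Fraction(i), Fraction(-i * i, 2)) for i in range(8)]

def vertex(l1, l2):
    (a1, b1), (a2, b2) = l1, l2
    x = Fraction(b2 - b1, a1 - a2)
    return x, a1 * x + b1

def level(pt, L):
    x, y = pt
    return sum(1 for (a, b) in L if a * x + b < y)

def f(R):
    """Number of vertices of level 0 in the arrangement of R."""
    return sum(1 for l1, l2 in combinations(R, 2)
               if level(vertex(l1, l2), R) == 0)

p, trials = 0.25, 2000
random.seed(1)
avg = sum(f([l for l in lines if random.random() < p])   # Bernoulli sample R
          for _ in range(trials)) / trials
exact = sum(p ** 2 * (1 - p) ** level(vertex(l1, l2), lines)
            for l1, l2 in combinations(lines, 2))        # sum of Prob[A_v]
print("sampled E[f]:", round(avg, 3),
      "exact sum:", round(exact, 3),
      "upper bound pn:", p * len(lines))
```

The sampled average and the exact sum agree up to Monte Carlo noise, and both stay below pn, exactly as the proof requires.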
The theorem can be formulated in an abstract framework outlined in the notes to Section 6.5. New variations on the basic method were noted by Sharir [Sha01] (see Exercises 4 and 5). In the planar case, the O(nk) bound on the complexity of levels 0 through k was known before Clarkson's paper, apparently first proved by Goodman and Pollack [GP84]. Alon and Győri [AG86] determined the exact constant of proportionality (which Clarkson's proof in the present form cannot provide). Welzl [Wel01] proved an exact upper bound in R³; see the notes to Section 11.3 for a little more about his method. Several other related references can be found, e.g., in Agarwal and Sharir [AS00a].

Exercises

1. Show that for n hyperplanes in R^d in general position, the total number of vertices of levels k, k+1, ..., n-d is at most O(n^⌊d/2⌋ (n-k)^⌈d/2⌉).

2. (a) Consider n lines in the plane in general position (their arrangement is simple). Call a vertex v of their arrangement an extreme if one of its defining lines has a positive slope and the other one has a negative slope. Prove that there are at most O((k+1)²) extremes of level at most k. Imitate the proof of Clarkson's theorem on levels.
(b) Show that the bound in (a) cannot be improved in general.

3. Let K_1, ..., K_n be circular disks in the plane. Show that the number of intersections of their boundary circles that are contained in at most k disks is bounded by O(nk). Use the result of Exercise 5.7.10 and assume general position if convenient.

4. Let L be a set of n nonvertical lines in the plane in general position.
(a) Let W be an arbitrary subset of vertices of the arrangement of L, and let X_W be the number of pairs (v, ℓ), where v ∈ W, ℓ ∈ L, and ℓ goes (strictly) below v. For every real number p ∈ (0,1), prove that X_W ≥ p^{-1}|W| - p^{-2}n.
(b) Let W be a set of vertices in the arrangement of L such that no line of L lies strictly below more than k vertices of W, where k ≥ 1. Use (a) to prove |W| = O(n√k).
(c) Check that the bound in (b) is tight for all k ≤ n.
This exercise and the next one are from Sharir [Sha01].

5. Let P be an n-point set in the plane in general position (no 4 points on a common circle). Let C be a set of circles such that each circle in C passes through 3 points of P and contains no more than k points of P in its interior. Prove that |C| = O(nk^{2/3}), by an approach analogous to that of Exercise 4.

6.4 The Zone Theorem

Let H be a set of n hyperplanes in R^d, and let g be a hyperplane that may or may not lie in H. The zone of g is the set of the faces of the arrangement of H that can see g. Here we imagine that the hyperplanes of H are opaque, and so we say that a face F can see the hyperplane g if there are points x ∈ F and y ∈ g such that the open segment xy is not intersected by any hyperplane of H (the face F is considered relatively open). Let us note that it does not matter which point x ∈ F we choose: Either all of them can see g or none can. The picture shows the zone of a line g in a line arrangement.

The following result bounds the maximum complexity of the zone. In the proof we will meet another interesting random sampling technique.

6.4.1 Theorem (Zone theorem). The number of faces in the zone of any hyperplane in an arrangement of n hyperplanes in R^d is O(n^{d-1}), with the constant of proportionality depending on d.

We prove the result only for simple arrangements; the general case follows, as usual, by a perturbation argument. Let us also assume that g ∉ H and that H ∪ {g} is in general position.

It is clear that the zone has O(n^{d-1}) cells, because each (d-1)-dimensional cell of the (d-1)-dimensional arrangement within g intersects only one d-dimensional cell of the zone.
On the other hand, this information is not sufficient to conclude that the total number of vertices of these cells is O(n^{d-1}): For example, as we know from Chapter 4, n arbitrarily chosen cells in an arrangement of n lines in the plane can together have as many as Ω(n^{4/3}) vertices.

Proof. We proceed by induction on the dimension d. The base case is d = 2; it requires a separate treatment and does not follow from the trivial case d = 1 by the inductive argument shown below.

The case d = 2. (For another proof see Exercise 7.1.5.) Let H be a set of n lines in the plane in general position. We consider the zone of a line g. Since a convex polygon has the same number of vertices and edges, it suffices to bound the total number of 1-faces (edges) visible from the line g.

Imagine g drawn horizontally. We count the number of visible edges lying above g. Among those, at most n intersect the line g, since each line of H gives rise to at most one such edge. The others are disjoint from g.

Consider an edge uv disjoint from g and visible from a point of g. Let h ∈ H be the line containing uv, and let a be the intersection of h with g. Let the notation be chosen in such a way that u is closer to a than v, and let f ∈ H be the second line (besides h) defining the vertex u. Let b denote the intersection f ∩ g. Let us call the edge uv a right edge of the line f if the point b lies to the right of a, and a left edge of the line f if b lies to the left of a.

We show that for each line f there exists at most one right edge. If it were not the case, there would exist two edges, uv and xy, where u lies lower than x, that would both be right edges of the line f. The edge xy should see some point of the line g, but the part of g lying to the right of a is obscured by the line h, and the part left of a is obscured by the line f. This contradiction shows that the total number of right edges is at most n.
Symmetrically, we see that the number of left edges in the zone is at most n. The same bounds are obtained for edges of the zone lying below g. Altogether we have at most O(n) edges in the zone, and the 2-dimensional case of the zone theorem is proved.

The case d > 2. Here we make the inductive step from d-1 to d. We assume that the total number of faces of a zone in R^{d-1} is O(n^{d-2}), and we want to bound the total number of zone faces in R^d. The first idea is to proceed by induction on n, bounding the maximum possible number of new faces created by adding a new hyperplane to n-1 given ones. However, it is easy to find examples showing that the number of faces can increase roughly by n^{d-1}, and so this straightforward approach fails. In the actual proof, we use a clever averaging argument.

First, we demonstrate the method for the slightly simpler case of counting only the facets (i.e., (d-1)-faces) of the zone. Let f(n) denote the maximum possible number of (d-1)-faces in the zone in an arrangement of n hyperplanes in R^d (the dimension d is not shown in the notation in order to keep it simple). Let H be an arrangement and g a base hyperplane such that f(n) is attained for them.

We consider the following random experiment. Color a randomly chosen hyperplane h ∈ H red and the other hyperplanes of H blue. We investigate the expected number of blue facets of the zone, where a facet is blue if it lies in a blue hyperplane.

On the one hand, any facet has probability (n-1)/n of becoming blue, and hence the expected number of blue facets is ((n-1)/n)·f(n).

We bound the expected number of blue facets in a different way. First, we consider the arrangement of blue hyperplanes only; it has at most f(n-1) blue facets in the zone by the inductive hypothesis. Next, we add the red hyperplane, and we look at how much the number of blue facets in the zone can increase.
A new blue facet can arise by adding the red hyperplane only if the red hyperplane slices some existing blue facet F into two parts F_1 and F_2. This increases the number of blue facets in the zone only if both F_1 and F_2 are visible from g. In such a case we look at the situation within the hyperplane h; we claim that F ∩ h is visible from g ∩ h. Let C be a cell of the zone in the arrangement of the blue hyperplanes having F on the boundary. We want to exhibit a segment connecting F ∩ h to g ∩ h within C. If x_1 ∈ F_1 sees a point y_1 ∈ g and x_2 ∈ F_2 sees y_2 ∈ g, then the whole interior of the tetrahedron x_1x_2y_1y_2 is contained in C. The intersection of this tetrahedron with the hyperplane h contains a segment witnessing the visibility of g ∩ h from F ∩ h.

If we intersect all the blue hyperplanes and the hyperplane g with the red hyperplane h, we get a (d-1)-dimensional arrangement, in which F ∩ h is a facet in the zone of the (d-2)-dimensional hyperplane g ∩ h. By the inductive hypothesis, this zone has O(n^{d-2}) facets. Hence adding h increases the number of blue facets of the zone by O(n^{d-2}), and so the total number of blue facets after h has been added is never more than f(n-1) + O(n^{d-2}). We have derived the following inequality:

((n-1)/n)·f(n) ≤ f(n-1) + O(n^{d-2}).

It implies f(n) = O(n^{d-1}), as we will demonstrate later for a slightly more general recurrence.

The previous considerations can be generalized for (d-k)-faces, where 1 ≤ k ≤ d-2. Let f_j(n) denote the maximum possible number of j-faces in the zone for n hyperplanes in dimension d. Let H be a collection of n hyperplanes where f_{d-k}(n) is attained. As before, we color one randomly chosen hyperplane h ∈ H red and the others blue. A (d-k)-face is blue if its relative interior is disjoint from the red hyperplane.
Then the probability of a fixed (d-k)-face being blue is (n-k)/n, and the expected number of blue (d-k)-faces in the zone is ((n-k)/n)·f_{d-k}(n).

On the other hand, we find that by adding the red hyperplane, the number of blue (d-k)-faces can increase by at most O(n^{d-2}), by the inductive hypothesis and by an argument similar to the case of facets. This yields the recurrence

((n-k)/n)·f_{d-k}(n) ≤ f_{d-k}(n-1) + O(n^{d-2}).

We use the substitution φ(n) = f_{d-k}(n)/(n(n-1)···(n-k+1)), which transforms this recurrence to φ(n) ≤ φ(n-1) + O(n^{d-k-2}). We assume k ≤ d-2 (so the considered faces are not edges or vertices). Then the last recurrence yields φ(n) = O(n^{d-k-1}), and hence f_{d-k}(n) = O(n^{d-1}). For the case k = d-1 (edges), this method would give only the bound f_1(n) = O(n^{d-1} log n). So the number of edges and vertices must be bounded by a separate argument, and we also have to argue separately for the planar case.

We are going to show that the number of vertices of the zone is at most proportional to the number of the 2-faces of the zone. Every vertex is contained in some 3-face of the zone. Within each such 3-face, the number of vertices is at most 3 times the number of 2-faces, because the 3-face is a 3-dimensional convex polyhedron. Since our arrangement is simple, each 2-face is contained in a bounded number of 3-faces. It follows that the total number of vertices is at most proportional to f_2(n) = O(n^{d-1}). The analogous bound for edges follows immediately from the bound for vertices. □

Zones in other arrangements. The maximum complexity of a zone can be investigated for objects other than hyperplanes. We can consider two classes Z and A of geometric objects in R^d and ask for the maximum complexity of the zone of a ζ ∈ Z in the arrangement of n objects a_1, a_2, ..., a_n ∈ A. This leads to a wide variety of problems. For some of them, interesting results have been obtained by extending the technique shown above.
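Returning once more to the recurrence ((n-1)/n)·f(n) ≤ f(n-1) + O(n^{d-2}) derived in the proof: iterating it with equality (a sketch with ad hoc constants, C = 1 and d = 3) shows the ratio f(n)/n^{d-1} staying bounded, which is the claimed f(n) = O(n^{d-1}):

```python
# Iterate ((n-1)/n) f(n) = f(n-1) + C * n**(d-2), starting from f(1) = C.
d, C = 3, 1.0
f = C
N = 5000
for n in range(2, N + 1):
    f = (f + C * n ** (d - 2)) * n / (n - 1)
ratio = f / N ** (d - 1)
print("f(n)/n^(d-1) at n =", N, "is", round(ratio, 4))
```

The ratio tends to C, in line with the substitution φ(n) = f(n)/n turning the recurrence into a simple summation.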
Most notably, if ζ is a k-flat in R^d, 0 ≤ k ≤ d, or more generally, a k-dimensional algebraic variety in R^d of degree bounded by a constant, then the zone of ζ in an arrangement of n hyperplanes has complexity at most O(n^⌊(d+k)/2⌋ (log n)^β), where β = 1 for d+k odd and β = 0 for d+k even. (The logarithmic factor seems likely to be superfluous in this bound; perhaps a more sophisticated proof could eliminate it.) With ζ being a k-flat, this result can be viewed as an interpolation between the asymptotic upper bound theorem and the zone theorem: For k = 0, with ζ being a single point, we consider the complexity of a single cell, while for k = d-1, we have the zone of a hyperplane. The key ideas of the proof are outlined in the notes below; for a full proof we refer to the literature.

A simple trick relates the zone problem to another question, the maximum complexity of a single cell in an arrangement. For example, what is the complexity of the zone of a segment ζ in an arrangement of n line segments? On the one hand, ζ can be chosen as a single point, and so the maximum zone complexity is at least the maximum possible complexity of a cell in an arrangement of n segments. On the other hand, the complexity of the zone of ζ is no more than the maximum cell complexity in an arrangement of 2n segments, since we can split each segment by making a tiny hole near its intersection with ζ.

A similar reduction works for the zone of a triangle in an arrangement of triangles in R³ and in many other cases. Results presented in Section 7.6 will show that under quite general assumptions, the zone complexity in dimension d is no more than O(n^{d-1+ε}), for an arbitrarily small (but fixed) ε > 0.

Bibliography and remarks. The two-dimensional zone theorem was established by Chazelle, Guibas, and Lee [CGL85], with the proof shown above, and independently by Edelsbrunner, O'Rourke, and Seidel [EOS86] by a different method.
The first correct proof of the general d-dimensional case, essentially the one presented here, is due to Edelsbrunner, Seidel, and Sharir [ESS93]. The main ingredients of the technique were previously developed by Sharir and his coauthors in several papers.

Bern, Eppstein, Plassman, and Yao [BEPY91] determined the best constant in the planar zone theorem: The zone of a line in an arrangement of n lines has at most 5.5n edges. They also showed that the zone of a convex k-gon has complexity O(n + k^2).

The extension of the zone theorem to the zone of a k-dimensional algebraic variety in a hyperplane arrangement, as mentioned in the text, was proved by Aronov, Pellegrini, and Sharir [APS93]. They also obtained the same bound with γ being the relative boundary of a (k+1)-dimensional convex set in R^d. The problem with the zone of a curved surface that did not exist for the zone of a hyperplane is that a face F of the zone of γ can be split by a newly inserted hyperplane h into two subfaces F_1 and F_2, both of them lying in the zone, without h ∩ F being in the zone of γ ∩ h, as is illustrated below:

[figure: a face of the zone split by the hyperplane h]

It turns out that each face F split by h in this way is adjacent to a facet in h that can be seen from γ from both sides; such a facet is called a popular facet of the zone. In order to set up a suitable recurrence for the number of faces in the zone, one needs to bound the total complexity of all popular facets. This is again done by a technique similar to the proof of the zone theorem in the text. The concept of popular facet needs to be generalized to a popular j-face, which is a j-dimensional face F that can be seen from γ in all the 2^{d−j} "sectors" determined by the d−j hyperplanes defining F.
The key observation is that if a blue popular j-face is split into two new popular j-faces by the new red hyperplane, then this can be charged to a popular (j−1)-face within h, as the following picture illustrates for j = 1:

[figure: a popular 1-face split by h, charged to a popular 0-face within h]

This is used to set up recurrences for the numbers of popular j-faces.

Exercises

1. (Sum of squares of cell complexities)
(a) Let C be the set of all cells of an arrangement of a set H of n hyperplanes in R^d. For d = 2, 3, prove that Σ_{C∈C} f_0(C)^2 = O(n^d), where f_0(C) is the number of vertices of the cell C.
(b) Use the technique explained in this section to prove Σ_{C∈C} f_0(C)^2 = O(n^d (log n)^{⌊d/2⌋−1}) for every fixed d > 3 (or a similar bound with a larger constant in the exponent of log n if it helps).
The result in (b) is from Aronov, Matousek, and Sharir [AMS94].

2. Define the …

6.5 The Cutting Lemma Revisited

In this section we prove the cutting lemma (Lemma 4.5.3): For every set H of n lines in the plane and every r > 1, there exists a (1/r)-cutting for H of size O(r^2), i.e., a subdivision of the plane into O(r^2) generalized triangles Δ_1, …, Δ_t such that the interior of each Δ_i is intersected by at most n/r lines of H. The proof uses random sampling, and unlike the elementary proof in Section 4.7, it can be generalized to higher dimensions without much trouble. We first give a complete proof for the planar case and then we discuss the generalizations. Throughout this section we assume that H is in general position. A perturbation argument mentioned in Section 4.7 can be used to derive the cutting lemma for an arbitrary H.

The first idea is as in the proof of a weaker cutting lemma by random sampling in Section 4.6: We pick a random sample S of a suitable size and triangulate its arrangement. The subsequent calculations become simpler and more elegant if we choose S by independent Bernoulli trials.
That is, instead of picking s random lines with repetitions as in Section 4.6, we fix a probability p = r/n, and we include each line h ∈ H into S with probability p, the decisions being mutually independent (this is as in the proof of the planar case of Clarkson's theorem on levels). These two ways of random sampling (by s random draws with repetitions and by independent trials with success probability s/n) can usually be thought of as nearly the same; although the actual calculations differ significantly, their results tend to be similar.

Sampling and triangulation alone do not work. Considerations similar to those in Section 4.6 show that with probability close to 1, none of the triangles in the triangulation for the random sample S as above is intersected by more than C·(n/r)·log n lines of H, for a suitable constant C. Later we will see that a similar statement is true with C·(n/r)·log r instead of C·(n/r)·log n. But it is not generally true with C·(n/r), for any C independent of r and n. So the most direct road to an optimal (1/r)-cutting, namely choosing const·r random lines and triangulating their arrangement, is impassable.

To see this, consider a 1-dimensional situation, where H = {h_1, …, h_n} is a set of n points in R (or, if you prefer, look at the part of a 2-dimensional arrangement along one of the lines). For simplicity, let us set r = n/2; then p = 1/2, and we can imagine that we toss a fair coin n times and include h_i into S if the ith toss is heads. The picture illustrates the result of 30 tosses, with black dots indicating heads:

oeoeeooeoeooeooeoeeeoeeeoooeeo

We are interested in the length of the longest consecutive run of tails (empty circles). If k is fixed, it is very likely that k consecutive tails show up in a sequence of n tosses for n sufficiently large.
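The growth of the longest all-tails run, roughly log_2 n, is easy to watch empirically. The sketch below is my own illustration (the function names are mine, not the book's): it simulates fair coin tosses and averages the longest run of tails over many trials.

```python
import random

def longest_tails_run(tosses):
    """Length of the longest run of consecutive tails (False) in a toss list."""
    best = cur = 0
    for heads in tosses:
        cur = 0 if heads else cur + 1
        best = max(best, cur)
    return best

def average_longest_run(n, trials=200, seed=0):
    """Average longest tails run over `trials` sequences of n fair coin tosses."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        tosses = [rng.random() < 0.5 for _ in range(n)]
        total += longest_tails_run(tosses)
    return total / trials

# For n = 100, 1000, 10000 the averages come out close to log2(n):
# for n in (100, 1000, 10000): print(n, average_longest_run(n))
```

Doubling n by a factor of 10 adds roughly log_2 10 ≈ 3.3 to the average, in line with the blocks argument that follows.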
Indeed, if we divide the tosses into blocks of length k (suppose for simplicity that n is divisible by k),

[figure: the row of tosses divided into blocks of length k]

then in each block we have probability 2^{−k} of receiving all tails. The blocks are mutually independent, and so the probability of not obtaining all tails in any of the n/k blocks is (1 − 2^{−k})^{n/k}. For k fixed and n → ∞ this goes to 0, and a more careful calculation shows that for k = ⌊(1/2) log_2 n⌋ we have exponentially small probability of not receiving any block of k consecutive tails (Exercise 1). So a sequence of n tosses is very likely to contain about log n consecutive tails. (Sequences produced by humans that are intended to look random usually do not have this property; they tend to be "too uniform.") Similarly, for a smaller s, if we make a circle black with probability s/n, then the longest run typically has about (n/s)·log s consecutive empty circles.

Of course, in the one-dimensional situation one can define much more uniform samples, say by making every (n/s)th circle black. But it is not clear how one could produce such "more uniform" samples for lines in the plane or for hyperplanes in R^d.

The strategy: a two-level decomposition. Instead of trying to select better samples, we construct a (1/r)-cutting for H in two stages. First we take a sample S with probability p = r/n and triangulate the arrangement, obtaining a collection T of triangles. (The expected number of triangles is O(r^2), as we will verify later.) Typically, T is not yet a (1/r)-cutting. Let I(Δ) denote the set of lines of H intersecting the interior of a triangle Δ ∈ T, and let n_Δ = |I(Δ)|. We define the excess of a triangle Δ ∈ T as t_Δ = n_Δ·(r/n). If t_Δ ≤ 1, then n_Δ ≤ n/r, and Δ is a good citizen: It can be included into the final (1/r)-cutting as is. On the other hand, if t_Δ
> 1, then Δ needs further treatment: We subdivide it into a collection of finer triangles such that each of them is intersected by at most n/r lines of H. We do it in a seemingly naive way: We consider the whole arrangement of I(Δ), temporarily ignoring Δ, and we construct a (1/t_Δ)-cutting for it. Then we intersect the triangles of this (1/t_Δ)-cutting with Δ, which can produce triangles but also quadrilaterals, pentagons, and hexagons. Each of these convex polygons is further subdivided into triangles, as is illustrated below:

[figure: I(Δ) — a (1/t_Δ)-cutting — restrict to Δ and triangulate]

Note that each triangle in the (1/t_Δ)-cutting is intersected by at most n_Δ/t_Δ = n/r lines of I(Δ). Therefore, the triangles obtained within Δ are valid triangles of a (1/r)-cutting for H. The final (1/r)-cutting for H is constructed by subdividing each Δ ∈ T with excess greater than 1 in the indicated manner and taking all the resulting triangles together.

How do we make the required (1/t_Δ)-cuttings for the I(Δ)? We do not yet have any suitable way of doing this unless we use the cutting lemma itself, which we do not want, of course. Fortunately, as a by-product of the subsequent considerations, we obtain a method for directly constructing slightly suboptimal cuttings:

6.5.1 Lemma (A suboptimal cutting lemma). For every finite collection of lines and any u > 1, there exists a (1/u)-cutting consisting of at most K(u log(u+1))^2 triangles, where K is a suitable constant.

If we employ this lemma for producing the (1/t_Δ)-cuttings, we can estimate the number of triangles in the resulting (1/r)-cutting in terms of the excesses of the triangles in T: The total number of triangles is bounded by

Σ_{Δ∈T} max{1, 4K(t_Δ log(t_Δ+1))^2}.    (6.3)

The key insight for the proof of the cutting lemma is that although we typically do have triangles Δ ∈ T with excess as large as about log r, they are very few.
More precisely, we show that under suitable assumptions, the expected number of triangles in T with excess t or larger decreases exponentially as a function of t. This will take care of both estimating (6.3) by O(r^2) and establishing Lemma 6.5.1.

Good and bad triangulations. Our collection T of triangles is obtained by triangulating the cells in the arrangement of the random sample S. Now is the time to specify how exactly the cells are triangulated, since not every triangulation works. To see this, consider a set H of n lines, each of them touching the unit circle, and let S be a random sample, again for simplicity with probability p = 1/2. We have learned that such a sample is very likely to leave a gap of about log n unselected lines (as we go along the unit circle). If we maliciously triangulate the central cell in the arrangement of S by the diagonals from the vertex near such a large gap,

[figure: lines tangent to the unit circle; the central cell triangulated from one vertex]

all these roughly n/2 triangles have excess about log n; this is way too large for our purposes.

The triangulation thus cannot be quite arbitrary. For the subsequent proof, it has to satisfy simple axioms. In the planar case, it is actually technically easier not to triangulate but to construct the vertical decomposition of the arrangement of S. We erect vertical segments upwards and downwards from each vertex in the arrangement of S and extend them until they meet another line (or all the way to infinity):

[figure: the vertical decomposition of an arrangement of lines]

So far we have been speaking of triangles, and now we have trapezoids, but the difference is immaterial, since we can always split each trapezoid into two triangles if we wish. Let T(S) denote the set of (generalized) trapezoids in the vertical decomposition of S. As before, I(Δ) is the set of lines of H intersecting the interior of a trapezoid Δ.

6.5.2 Proposition (Trapezoids with large excess are rare).
Let H be a fixed set of n lines in general position, let p = r/n, where 1 ≤ r ≤ n, let S be a random sample drawn from H by independent Bernoulli trials with success probability p, and let t ≥ 0 be a real parameter. Let T(S)_{≥t} denote the set of trapezoids Δ ∈ T(S) with excess at least t, i.e., with |I(Δ)| ≥ t·(n/r). Then the expected number of trapezoids in T(S)_{≥t} is bounded as follows:

E[|T(S)_{≥t}|] ≤ C·2^{−t}·r^2

for a suitable absolute constant C.

First let us see how this result can be applied.

Proof of the suboptimal cutting lemma 6.5.1. To obtain a (1/u)-cutting for H, we set r = Au log(u+1) for a sufficiently large constant A and choose a sample S as in Proposition 6.5.2. By that proposition with t = 0, we have E[|T(S)|] ≤ Br^2 for a suitable constant B. By the same proposition with t = A log(u+1), we have E[|T(S)_{≥t}|] < 1/2 if A is sufficiently large. By linearity of expectation, we obtain

E[ |T(S)|/(2Br^2) + |T(S)_{≥t}| ] < 1.

So there exists a sample S with both |T(S)| < 2Br^2 and |T(S)_{≥t}| = 0; the latter means that every trapezoid is intersected by fewer than t·(n/r) = n/u lines. This means that we have a (1/u)-cutting into O(r^2) = O((u log(u+1))^2) trapezoids. □

For an alternative proof of Lemma 6.5.1 see Exercise 10.3.4.

Proof of the cutting lemma (Lemma 4.5.3). Most of the proof has already been described. To produce a (1/r)-cutting, we pick a random sample S with probability p = r/n, we let T = T(S) be its vertical decomposition, and we refine each trapezoid Δ ∈ T with excess t_Δ > 1 using an auxiliary (1/t_Δ)-cutting. The size of the resulting (1/r)-cutting is bounded by (6.3). So it suffices to estimate the expected value of that expression using Proposition 6.5.2, grouping the trapezoids with 2^i ≤ t_Δ < 2^{i+1}:

E[ Σ_{Δ∈T(S)} max{1, 4K(t_Δ log(t_Δ+1))^2} ]
  ≤ E[ Σ_{Δ∈T(S)} max{1, 4K·t_Δ^4} ]    (as log(t_Δ+1) ≤ t_Δ)
  ≤ E[|T(S)|] + Σ_{i=0}^∞ E[|T(S)_{≥2^i}|]·O(2^{4(i+1)})
  ≤ O(r^2) + Σ_{i=0}^∞ C·2^{−2^i}·r^2·O(2^{4(i+1)}) = O(r^2).

The cutting lemma is proved. □

Note that it was not important that the suboptimal cutting lemma is near-optimal: Any bound subexponential in u for the size of a (1/u)-cutting would do.
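The exponential decay in Proposition 6.5.2 can be watched in the 1-dimensional toy setting from earlier in the section. This is my own illustration, not the book's: the "cells" are the maximal gaps between consecutive sampled points, and a gap's excess is the number of unsampled points inside it times r/n.

```python
import random

def gap_excess_counts(n, r, t_values, trials=40, seed=0):
    """1-D analogue of Proposition 6.5.2: sample each of n points on a line
    with probability p = r/n; a 'cell' is a maximal gap between sampled
    points, and its excess is (#unsampled points inside) * r/n.  Returns,
    for each t in t_values, the average number of gaps with excess >= t."""
    rng = random.Random(seed)
    p = r / n
    totals = [0] * len(t_values)
    for _ in range(trials):
        sampled = [rng.random() < p for _ in range(n)]
        gaps, cur = [], 0              # run lengths of unsampled points
        for s in sampled:
            if s:
                gaps.append(cur)
                cur = 0
            else:
                cur += 1
        gaps.append(cur)
        for i, t in enumerate(t_values):
            totals[i] += sum(1 for g in gaps if g * p >= t)
    return [tot / trials for tot in totals]

# counts = gap_excess_counts(10000, 100, [0, 1, 2, 4, 8])
# successive counts shrink roughly geometrically in t
```

In one dimension the number of cells is only O(r) rather than O(r^2), but the geometric drop-off in t is the same phenomenon the proposition captures.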
In particular, for any fixed c ≥ 1, the expected cth-degree average of the excess is only a constant.

For the proof of Proposition 6.5.2, we need several definitions and some simple properties of the vertical decomposition. Let H be a fixed set of lines in general position, and let Reg = ∪_{S⊆H} T(S) be the set of all trapezoids that can ever appear in the vertical decomposition for some S ⊆ H (including S = ∅). For a trapezoid Δ ∈ Reg, let D(Δ) be the set of the lines of H incident to at least one vertex of Δ. By the general-position assumption, we have |D(Δ)| ≤ 4 for all Δ. The various possible cases, up to symmetry, are drawn below; the picture shows the lines of D(Δ) with Δ marked in gray:

[figure: the possible defining sets of a trapezoid]

The set D(Δ) is called the defining set of Δ. Note that the same defining set can belong to several trapezoids.

Now we list the properties required for the proof; some of them are obvious or have already been noted.

(C0) We have |D(Δ)| ≤ 4 for all Δ ∈ Reg. Moreover, any set S_0 ⊆ H is the defining set for at most a constant number of Δ ∈ Reg (certainly no more than the maximum of |T(S_0)| for |S_0| ≤ 4).

(C1) For any Δ ∈ T(S), we have D(Δ) ⊆ S (the defining set must be present) and S ∩ I(Δ) = ∅ (no intersecting line may be present).

(C2) For any Δ ∈ Reg and any S ⊆ H such that D(Δ) ⊆ S and I(Δ) ∩ S = ∅, we have Δ ∈ T(S).

(C3) For every S ⊆ H, we have |T(S)| = O(|S|^2 + 1). To see this, think of adding the vertical segments to the arrangement of S one by one. Each of them splits an existing region in two.

The most interesting condition is (C2), which says that the vertical decomposition is defined "locally." It implies, in particular, that Δ is one of the trapezoids in the vertical decomposition of its defining set. More generally, it says that Δ ∈ Reg is present in T(S) whenever it is not excluded for simple local reasons (which can be checked by looking only at Δ).
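Conditions (C1) and (C2) together pin down exactly when a cell appears in the decomposition. In the 1-dimensional toy model (again my own illustration), a gap between two given points with m points strictly between them appears if and only if both endpoints are sampled and none of the m interior points is, which has probability p^2·(1−p)^m; a quick simulation confirms this.

```python
import random

def gap_appearance_probability(n, p, left, right, trials=100000, seed=0):
    """Empirical probability that, sampling each of points 0..n-1 with
    probability p, points `left` and `right` are sampled and nothing
    strictly between them is -- the 1-D analogue of 'D(gap) present,
    I(gap) absent'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sampled = [rng.random() < p for _ in range(n)]
        if sampled[left] and sampled[right] and not any(sampled[left + 1:right]):
            hits += 1
    return hits / trials

# With m = right - left - 1 interior points, this approaches p**2 * (1-p)**m,
# the 1-D counterpart of the formula p(Delta) derived in the proof below.
```

The same product structure, p^{|D(Δ)|}(1−p)^{|I(Δ)|}, is what (C1) and (C2) buy us for trapezoids in the plane.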
Checking (C2) in our situation is easy, and we leave it to the reader. Also note that it is (C2) that is generally violated by the mischievous triangulation considered earlier.

Proof of Proposition 6.5.2. First we prove that if S ⊆ H is a random sample drawn with probability p = r/n, 0 ≤ r ≤ n, then

E[|T(S)|] = O(r^2 + 1).    (6.4)

This takes care of the case t ≤ 1 in the proposition. By (C3), we have |T(S)| = O(|S|^2 + 1) for every fixed S, and so it suffices to show that E[|S|^2] = O(r^2 + 1). Now, |S| is the sum of independent random variables, each of them attaining value 1 with probability p and value 0 with probability 1 − p, and it is easy to check that E[|S|^2] ≤ r^2 + r (Exercise 2(a)).

Next, we assume t ≥ 1. Let S ⊆ H be a random sample drawn with probability p. We observe that the conditions (C1) and (C2) allow us to express the probability p(Δ) that a certain trapezoid Δ ∈ Reg appears in the vertical decomposition T(S): Since Δ appears if and only if all lines of D(Δ) are selected into S and none of I(Δ) is selected, we have

p(Δ) = p^{|D(Δ)|}·(1 − p)^{|I(Δ)|}.

(An analogous formula appeared in the proof of the planar Clarkson's theorem on levels, and one can say that the technique of that proof is developed one step further in the present proof.) If we write Reg_{≥t} = {Δ ∈ Reg: |I(Δ)| ≥ t·(n/r)} for the set of all potential trapezoids with excess at least t, the expected number of trapezoids in T(S)_{≥t} can be written as

E[|T(S)_{≥t}|] = Σ_{Δ∈Reg_{≥t}} p(Δ).    (6.5)

It seems difficult to estimate this sum directly; the trick is to compare it with a similar sum obtained for the expected number of trapezoids for another sample. We define another probability p̃ = p/t, and we let S̃ be a sample drawn from H by Bernoulli trials with success probability p̃. On the one hand, we have E[|T(S̃)|] = O(r^2/t^2 + 1) by (6.4).
On the other hand, setting

p̃(Δ) = p̃^{|D(Δ)|}·(1 − p̃)^{|I(Δ)|},

we have, in analogy to (6.5),

E[|T(S̃)|] = Σ_{Δ∈Reg} p̃(Δ) ≥ Σ_{Δ∈Reg_{≥t}} p̃(Δ) = Σ_{Δ∈Reg_{≥t}} p(Δ)·(p̃(Δ)/p(Δ)) ≥ E[|T(S)_{≥t}|]·R,    (6.6)

where

R = min{ p̃(Δ)/p(Δ) : Δ ∈ Reg_{≥t} }.

Now R can be bounded from below. For every Δ ∈ Reg_{≥t}, we have |I(Δ)| ≥ t·(n/r) and |D(Δ)| ≤ 4, and so

p̃(Δ)/p(Δ) = (p̃/p)^{|D(Δ)|} · ((1−p̃)/(1−p))^{|I(Δ)|} ≥ t^{−4} · ((1−p̃)/(1−p))^{tn/r}.

We use 1 − p ≤ e^{−p} (this holds for all real p) and 1 − p̃ ≥ e^{−2p̃} (this is true for all p̃ ∈ (0, 1/2], and we have p̃ ≤ p ≤ 1/2). Since p·(tn/r) = t and p̃·(tn/r) = 1, this gives ((1−p̃)/(1−p))^{tn/r} ≥ e^{t−2}, and therefore R ≥ t^{−4}·e^{t−2}. Substituting into (6.6), we finally derive

E[|T(S)_{≥t}|] ≤ E[|T(S̃)|]/R ≤ t^4·e^{−(t−2)}·O(r^2/t^2 + 1) ≤ C·2^{−t}·r^2

for a sufficiently large constant C (the proposition assumes r ≥ 1). Proposition 6.5.2 is proved. □

The following can be proved by the same technique:

6.5.3 Theorem (Cutting lemma for arbitrary dimension). Let d ≥ 1 be a fixed integer, let H be a set of n hyperplanes in R^d, and let r be a parameter, 1 ≤ r ≤ n. Then there exists a (1/r)-cutting for H of size O(r^d); that is, a subdivision of R^d into O(r^d) generalized simplices such that the interior of each simplex is intersected by at most n/r hyperplanes of H.

The only new part of the proof is the construction of a suitable triangulation scheme that plays the role of T(S). A vertical decomposition does not work. More precisely, it is not known whether the vertical decomposition of an arrangement of n hyperplanes in R^d always has at most O(n^d) cells (prisms); this would be needed as the analogue of condition (C3). Instead one can use the bottom-vertex triangulation, which we define next.

First we specify the bottom-vertex triangulation of a k-dimensional convex polytope P ⊆ R^d, 1 ≤ k ≤ d, by induction on k. For k = 1, P is a line segment, and the triangulation consists of P itself.
For k > 1, we let v be the vertex of P with the smallest last coordinate (the "bottom vertex"); ties can be broken by lexicographic ordering of the coordinate vectors. We triangulate all proper faces of P inductively, and we add the simplices obtained by erecting the cone with apex v over all simplices in the triangulations of the faces not containing v.

[figure: bottom-vertex triangulations for d = 2 and d = 3, with the bottom vertex v marked]

It is not difficult to check that this yields a triangulation of P (even a simplicial complex, although this is not needed in the present proof), and that if P is a simple polytope, then the total number of simplices in this triangulation is at most proportional to the number of vertices of P (with the constant of proportionality depending on d); see Exercise 4.

All the bounded cells of the arrangement of S are triangulated in this way. Some care is needed for the unbounded cells, and several ways are available. One of the simplest is to intersect the arrangement with a sufficiently large box containing all the vertices and construct the (1/r)-cutting only inside that box. This is sufficient for most applications of (1/r)-cuttings. Alternatively (and almost equivalently), we can consider the whole arrangement in the projective d-space instead of R^d. We omit a detailed discussion of this aspect.

In this way we obtain a triangulation T(S) for every subset S of the given set of hyperplanes. The analogue of (C3) is |T(S)| = O(|S|^d + 1), which follows (assuming H in general position) because the number of simplices in each cell is proportional to the number of its vertices, and the total number of vertices is O(|S|^d).

The set I(Δ) consists of all hyperplanes intersecting the interior of a simplex Δ, and D(Δ) consists of all the hyperplanes incident to at least one vertex of Δ. We again need to assume that our hyperplanes are in general position. Then, obviously, |D(Δ)| ≤ d(d+1), and a more careful argument shows that |D(Δ)| ≤ d(d+3)/2.
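In the planar case (k = 2) the inductive definition collapses to a fan: the proper faces are edges, which are their own triangulations, and coning the non-v edges to the bottom vertex v fans the polygon into f_0 − 2 triangles. A minimal sketch of this special case (the helper name is mine):

```python
def bottom_vertex_triangulation(vertices):
    """Bottom-vertex triangulation of a convex polygon, the 2-D case of the
    inductive definition.  `vertices` lists the polygon's vertices in
    counterclockwise order as (x, y) tuples.  The bottom vertex v has the
    smallest y-coordinate (ties broken by x); every edge not containing v
    is coned to v, producing len(vertices) - 2 triangles."""
    v = min(vertices, key=lambda q: (q[1], q[0]))
    i = vertices.index(v)
    ordered = vertices[i:] + vertices[:i]      # rotate so v comes first
    # edges not containing v are (ordered[j], ordered[j+1]), j = 1..m-2
    return [(v, ordered[j], ordered[j + 1]) for j in range(1, len(ordered) - 1)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert len(bottom_vertex_triangulation(square)) == 2   # an m-gon gives m-2 triangles
```

For a simple polygon cell this count is linear in the number of vertices, which is the 2-dimensional instance of the proportionality claim above.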
The important thing is that an analogue of (C0) holds, namely, that both |D(Δ)| and the number of Δ with a given D(Δ) are bounded by constants. The condition (C1) holds trivially. The "locality" condition (C2) does need some work, although it is not too difficult, and we refer to Chazelle and Friedman [CF90] for a detailed argument. With (C0)-(C3) in place, the whole proof proceeds exactly as in the planar case. To get the analogue of (6.4), namely E[|T(S)|] = O(r^d + 1), we need the fact that E[|S|^d] = O(r^d) (this is what we avoided in the proof of the higher-dimensional Clarkson's theorem on levels by passing to another way of sampling); see Exercise 2(b) or 3.

Further generalizations. An analogue of Proposition 6.5.2 can be derived from conditions (C0)-(C3) in a general abstract framework. It provides optimal (1/r)-cuttings not only for arrangements of hyperplanes but also in other situations, whenever one can define a suitable decomposition scheme satisfying (C0)-(C3) and bound the maximum number of cells in the decomposition (the latter is a challenging open problem for arrangements of bounded-degree algebraic surfaces). The significance of Proposition 6.5.2 reaches beyond the construction of cuttings; its variations have been used extensively, mainly in the analysis of geometric algorithms. We are going to encounter a combinatorial application in Chapter 11.

Bibliography and remarks. The proof of the cutting lemma as in this section (with a different way of sampling) is due to Chazelle and Friedman [CF90]. Analogues of Proposition 6.5.2, or more precisely the consequence stating that the expectation of the cth-degree average of the excess is bounded by a constant, were first proved and applied by Clarkson [Cla88a] (see Clarkson and Shor [CS89] for the journal version).
Since then, they have become one of the indispensable tools in the analysis of randomized geometric algorithms, as is illustrated by the book by Mulmuley [Mul93a], for example, as well as by many newer papers. The bottom-vertex triangulation (also called the canonical triangulation in some papers) was defined in Clarkson [Cla88b].

Proposition 6.5.2 can be formulated and proved in an abstract framework, where H and Reg are some finite sets and T: 2^H → 2^Reg, I: Reg → 2^H, and D: Reg → 2^H are mappings satisfying (C0) (with some constants), (C1), (C2), and an analogue of (C3) that bounds the expected size of T(S) for a random S ⊆ H by a suitable function of r, typically by O(r^k) for some real constant k ≥ 1. The conclusion is E[|T(S)_{≥t}|] = O(2^{−t}·r^k). Very similar abstract frameworks are discussed in Mulmuley [Mul93a] and in De Berg, Van Kreveld, Overmars, and Schwarzkopf [dBvKOS97].

The axiom (C2) can be weakened to the following:

(C2') If Δ ∈ T(S) and S' ⊆ S satisfies D(Δ) ⊆ S', then Δ ∈ T(S').

That is, Δ cannot be destroyed by deleting elements of S unless we delete an element of D(Δ). A typical situation where (C2') holds although (C2) fails is that in which H is a set of lines in the plane and T(S) are the trapezoids in the vertical decomposition of the cell in the arrangement of S that contains some fixed point, say 0. Then Δ can be made to disappear by adding a line to S even if that line does not intersect Δ, as is illustrated below:

[figure: a trapezoid destroyed by a line not intersecting it]

This weaker axiom was first used instead of (C2) by Chazelle, Edelsbrunner, Guibas, Sharir, and Snoeyink [CEG+93]. For a proof of a counterpart of Proposition 6.5.2 under (C2') see Agarwal, Matousek, and Schwarzkopf [AMS98]. Yet another proof of the cutting lemma in arbitrary dimension was invented by Chazelle [Cha93a]. An outline of the argument can also be found in Chazelle's book [Cha00c] or in the chapter by Matousek in [SU00].
Both proofs of the higher-dimensional cutting lemma depend crucially on the fact that the arrangement of n hyperplanes in R^d, d fixed, can be triangulated using O(n^d) simplices. As was explained in Section 6.2, the arrangement of n bounded-degree algebraic surfaces in R^d has O(n^d) faces in total, but the faces can be arbitrarily complicated. A challenging open problem is whether each face can be further decomposed into "simple" pieces (each of them defined by a constant-bounded number of bounded-degree algebraic inequalities) such that the total number of pieces for the whole arrangement is O(n^d), or not much larger. This is easy for d = 2 (the vertical decomposition will do), but dimension 3 is already quite challenging. Chazelle, Edelsbrunner, Guibas, and Sharir [CEGS89] found a general argument that provides an O(n^{2d−2}) bound in dimension d using a suitable vertical decomposition. By proving a near-optimal bound in the 3-dimensional case and using it as a basis of the induction, they obtained the bound of O(n^{2d−3}β(n)), where β is a very slowly growing function (much smaller than log n). Recently Koltun [Kol01] established a near-tight bound in the 4-dimensional situation, which pushed the general bound to O(n^{2d−4+ε}) for every fixed d ≥ 4. This decomposition problem is the main obstacle to proving an optimal or near-optimal cutting lemma for arrangements of algebraic surfaces. For some special cases, say for an arrangement of spheres in R^d, optimal decompositions are known and an optimal cutting lemma can be obtained. In general, if one can prove a bound of O(n^a) for the number of pieces in the decomposition, then the techniques of Chapter 10 yield (1/r)-cuttings of size O(r^a log^a r), and if, moreover, the locality condition (C2) can be guaranteed, then the method of the present section leads to (1/r)-cuttings of size O(r^a).

Exercises
1. Estimate the largest k = k(n) such that in a row of n tosses of a fair coin we obtain k consecutive tails with probability at least 1/2. In particular, using the trick with blocks in the text, check that for k = ⌊(1/2) log_2 n⌋, the probability of not getting all tails in any of the blocks is exponentially small (as a function of n).

2. Let X = X_1 + X_2 + ··· + X_n, where the X_i are independent random variables, each attaining the value 1 with probability p and the value 0 with probability 1 − p.
(a) Calculate E[X^2].
(b) Prove that for every integer d ≥ 1 there exists c_d such that E[X^d] ≤ (np + c_d)^d. (You can use a Chernoff-type inequality, or prove by induction that E[(X + a)^d] ≤ (np + d + a)^d for all nonnegative integers n, d, and a.)
(c) Use (b) to prove that E[X^a] ≤ (np + c_a)^a also holds for nonintegral a > 1.

3. Let X = X_1 + X_2 + ··· + X_n be as in the previous exercise. Show that E[C(X,d)] = p^d·C(n,d) (where d ≥ 0 is an integer, and C(·,·) denotes the binomial coefficient), and conclude that E[X^d] ≤ c_d(np)^d for np ≥ d and a suitable c_d > 0.

4. Let P be a d-dimensional simple convex polytope. Prove that the bottom-vertex triangulation of P has at most C_d·f_0(P) simplices, where C_d depends only on d and f_0(P) denotes the number of vertices of P.

7 Lower Envelopes

This is a continuation of the chapter on arrangements. We again study the number of vertices in a certain part of the arrangement: the lower envelope. Already for segments in the plane, this problem has an unexpectedly subtle and difficult answer. The closely related combinatorial notion of Davenport-Schinzel sequences has proved to be a useful general tool, since the surprising phenomena encountered in the analysis of the lower envelope of segments are by no means rare in combinatorics and discrete geometry.

The chapter has two rather independent parts. After a common introduction in Section 7.1, lower envelopes in the plane are discussed in Sections 7.2 through 7.4 using Davenport-Schinzel sequences.
Sections 7.5 and 7.6 gently introduce the reader to geometric methods for analyzing higher-dimensional lower envelopes, finishing with a quick overview of known results in Section 7.7.

7.1 Segments and Davenport-Schinzel Sequences

The following question is extremely natural: What is the maximum possible combinatorial complexity of a single cell in an arrangement of n segments? (The arrangement of segments was defined in Section 6.2.) The complexity of a cell can be measured as the number of vertices and edges on its boundary. It is immediate that the number of edges is at most proportional to the number of vertices plus 2n, the total number of endpoints of the segments, and so it suffices to count the vertices.

Here we mainly consider a slightly simpler question: the maximum complexity of the lower envelope of n segments. Informally, the lower envelope of an arrangement is the part that can be seen by an observer sitting at (0, −∞) and looking upward. In the picture below, the lower envelope of 4 segments is drawn thick:

166 Chapter 7: Lower Envelopes

[figure: the lower envelope of 4 segments]

If we think of the segments as graphs of (partially defined) functions, the lower envelope is the graph of the pointwise minimum. It consists of pieces of the segments, and we are interested in the maximum possible number of these pieces (in the drawing, we have 7 pieces). Let us denote this maximum by σ(n).

Davenport-Schinzel sequences. A tight upper bound for σ(n) has been obtained via a combinatorial abstraction of lower envelopes, the so-called Davenport-Schinzel sequences. These are closely related to segments, but the most natural way of introducing them is by starting from curves. Let us consider a finite set of curves in the plane, such as in the following picture:

[figure: a set of curves, each crossing every vertical line exactly once]

We suppose that each curve is the graph of a continuous function R → R; in other words, each vertical line intersects it exactly once.
Most significantly, we assume that every two of the curves intersect in at most s points, for some constant s. This condition holds, for example, if the curves are the graphs of polynomials of degree at most s. Let us number the curves 1 through n, and let us write down the sequence of the numbers of the curves along the lower envelope from left to right:

[figure: a lower envelope with the label sequence 1 2 3 1 2]

We obtain a sequence a_1 a_2 a_3 … a_l with the following properties:

(i) For all i, a_i ∈ {1, 2, …, n}.
(ii) No two adjacent terms coincide; i.e., a_i ≠ a_{i+1}.
(iii) There is no (not necessarily contiguous) subsequence of the form a…b…a…b…, with s+2 letters alternating between a and b, where a ≠ b. In other words, there are no indices i_1 < i_2 < ··· < i_{s+2} with a_{i_1} ≠ a_{i_2}, a_{i_1} = a_{i_3} = a_{i_5} = ···, and a_{i_2} = a_{i_4} = a_{i_6} = ···.

Only (iii) needs a little thought: It suffices to note that between an occurrence of a curve a and an occurrence of a curve b on the lower envelope, a and b have to intersect.

Any finite sequence satisfying (i)-(iii) is called a Davenport-Schinzel sequence of order s over the symbols 1, 2, …, n. It is not important that the terms of the sequence are the numbers 1, 2, …, n; often it is convenient to use some other set of n distinct symbols. Let us remark that every Davenport-Schinzel sequence of order s over n symbols corresponds to the lower envelope of a suitable set of n curves with at most s intersections for each pair of curves (Exercise 1). On the other hand, very little is known about the realizability of Davenport-Schinzel sequences by graphs of polynomials of degree s, say.

We will mostly consider Davenport-Schinzel sequences of order 3. This is the simplest nontrivial case and also the one closely related to lower envelopes of segments.
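The labeling process just described can be imitated numerically. The sketch below (my own illustration; function names are mine) samples x on a fine grid, records which curve attains the pointwise minimum, compresses adjacent repeats, and then brute-force checks condition (iii). Crossings falling between grid points can be missed, so this is an approximation of the definition, not an exact algorithm.

```python
def envelope_sequence(curves, x_lo, x_hi, steps=10000):
    """Approximate left-to-right sequence of curve labels (1-based) along
    the lower envelope of totally defined functions, by sampling x on a
    grid and compressing adjacent repeated labels."""
    seq = []
    for k in range(steps + 1):
        x = x_lo + (x_hi - x_lo) * k / steps
        label = min(range(len(curves)), key=lambda i: curves[i](x)) + 1
        if not seq or seq[-1] != label:
            seq.append(label)
    return seq

def has_alternation(seq, length):
    """True if seq contains a (not necessarily contiguous) subsequence
    a b a b ... of the given length with a != b.  Brute force over pairs;
    greedy matching of the alternating pattern is optimal."""
    symbols = set(seq)
    for a in symbols:
        for b in symbols:
            if a == b:
                continue
            need, got = a, 0
            for x in seq:
                if x == need:
                    got += 1
                    if got >= length:
                        return True
                    need = b if need == a else a
    return False

# Two parabolas and a line: each pair meets in at most s = 2 points,
# so by (iii) the envelope sequence contains no abab (length s + 2 = 4).
curves = [lambda x: x * x, lambda x: (x - 2) ** 2, lambda x: x + 1]
seq = envelope_sequence(curves, -3.0, 5.0)
assert not has_alternation(seq, 4)
```

For these three curves the envelope reads 3, 1, 2, 3: the line wins on the far left, then the first parabola, then the second, then the line again.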
Every two segments intersect at most once, and so it might seem that their lower envelope gives rise to a Davenport-Schinzel sequence of order 1, but this is not the case! The segments are graphs of partially defined functions, while the discussion above concerns graphs of functions defined on all of R. We can convert each segment into a graph of an everywhere-defined function by appending very steep rays to both endpoints:

All the left rays are parallel, and all the right ones are parallel. Then every two of these curves have at most 3 intersections, and so if the considered segments are numbered 1 through n and we write the sequence of their numbers along the lower envelope, we get a Davenport-Schinzel sequence of order 3 (no ababa).

Let λ_s(n) denote the maximum possible length of a Davenport-Schinzel sequence of order s over n symbols. Some work is needed to see that λ_s(n) is finite for all s and n; the reader is invited to try this. The bound λ_1(n) = n is trivial, and λ_2(n) = 2n−1 is a simple exercise. Determining the asymptotics of λ_3(n) is a hard problem; it was posed in 1965 and solved in the mid-1980s. We will describe the solution later, but here we start more modestly: with a reasonable upper bound on λ_3(n).

7.1.1 Proposition. We have σ(n) ≤ λ_3(n) ≤ 2n ln n + 3n.

Proof. Let w be a Davenport-Schinzel sequence of order 3 over n symbols. If the length of w is ℓ, then there is a symbol a occurring at most ℓ/n times in w. Let us remove all occurrences of such an a from w. The resulting sequence can contain some pairs of adjacent equal symbols. But we claim that there can be at most 2 such pairs, coming from the first and last occurrences of a. Indeed, if some a that is neither the first nor the last a in w were surrounded by some b from both sides, we would have the situation … a … bab … a … with the forbidden pattern ababa.
So by deleting all the a's and at most 2 more symbols, we obtain a Davenport-Schinzel sequence of order 3 over n−1 symbols. We arrive at the recurrence

λ_3(n) ≤ λ_3(n)/n + 2 + λ_3(n−1),

which can be rewritten to

λ_3(n)/n ≤ λ_3(n−1)/(n−1) + 2/(n−1)

(we saw such a recurrence in the proof of the zone theorem). Together with λ_3(1) = 1 this yields

λ_3(n)/n ≤ 1 + 2·(1 + 1/2 + 1/3 + ⋯ + 1/(n−1)),

and so λ_3(n) ≤ 2n ln n + 3n as claimed. □

Bibliography and remarks. A detailed account of the history of Davenport-Schinzel sequences and of the analysis of lower envelopes, with references up until 1995, can be found in the book of Sharir and Agarwal [SA95]. Somewhat more recent results are included in their surveys [AS00b] and [AS00a]. We sketch this development and mention some newer results in the notes to Section 7.3.

Exercises

1. Let w be a Davenport-Schinzel sequence of order s over the symbols 1, 2, …, n. Construct a family of planar curves h_1, h_2, …, h_n, each of them intersecting every vertical line exactly once and each two intersecting in at most s points, such that the sequence of the numbers of the curves along the lower envelope is exactly w.
2. Prove that λ_2(n) = 2n−1 (the forbidden pattern is abab).
3. Prove that for every n and s, λ_s(n) ≤ 1 + (s+1)·(n choose 2).
4. Show that the lower envelope of n rays in the plane has O(n) complexity.
5. (Planar zone theorem via Davenport-Schinzel sequences) Prove the zone theorem (Theorem 6.4.1) for d = 2 using the fact that λ_2(n) = O(n). Consider only the part above the line g, and assign one symbol to each side of each line.
6. Let g_1, g_2, …, g_m ⊂ R² be graphs of piecewise linear functions R → R that together consist of n segments and rays. Prove that the lower envelope of g_1, g_2, …, g_m has complexity O((n/m)·λ_3(2m)); in particular, if m = O(1), then the complexity is linear.
7. Let P_1, P_2, …
, P_m be convex polygons (not necessarily disjoint!) in the plane with n vertices in total, such that no vertex is common to two or more P_i and the vertices form a point set in general position. Prove that the number of lines that intersect all the P_i and are tangent to at least two of them is at most O(λ_3(n)).
8. (Dynamic lower envelope of lines) Let ℓ_1, ℓ_2, …, ℓ_n be lines in the plane in general position (in particular, none of them is vertical). At each moment t of time, only a certain subset L_t of the lines is present: ℓ_i is inserted at time s_i and it is removed at time t_i > s_i. We are interested in the maximum possible total number f(n) of vertices of the arrangement of the ℓ_i that appear as vertices of the lower envelope of L_t for at least one t ∈ R.
(a) Show that f(n) = Ω(σ(n)), where σ(n) is the maximum complexity of the lower envelope of n segments.
(b) Prove that f(n) = O(n log n). (Familiarity with data structures like segment trees or interval trees may be helpful.)
These results are from Tamir [Tam88], and improving the lower bound or the upper bound is a nice open problem.

7.2 Segments: Superlinear Complexity of the Lower Envelope

In Proposition 7.1.1 we have shown that the lower envelope of n segments has complexity at most O(n log n), but it turns out that the true complexity is still lower. With this information, the next reasonable guess would be that perhaps the complexity is linear in n. The truth is much subtler, though: On the one hand, the complexity behaves like a linear function for all practical purposes, but on the other hand, it cannot be bounded by any linear function: It outgrows the function n ↦ Cn for every fixed C. We present an ingenious construction witnessing this.

7.2.1 Theorem. The function σ(n), the maximum combinatorial complexity of the lower envelope of n segments in the plane, is superlinear. That is, for every C there exists an n_0 such that σ(n_0) > Cn_0.
Consequently, λ_3(n), the maximum length of a Davenport-Schinzel sequence of order 3, is superlinear, too.

Proof. For all integers k, m ≥ 1 we construct a set S_k(m) of segments in the plane. Let n_k(m) = |S_k(m)| be the number of segments and let e_k(m) denote the number of arrangement vertices and segment endpoints on the lower envelope of S_k(m). We prove that e_k(m) ≥ k·n_k(m). In particular, for m = 1 and k → ∞, this shows that the complexity of the lower envelope is nonlinear in the number of segments.

If we really need only the case m = 1, then what is the parameter m good for? The answer is that we proceed by double induction, on both k and m, and in order to specify S_k(1), for example, we need S_{k−1}(2). Results of mathematical logic, which are beyond the scope of this book, show that double induction is in some sense unavoidable: The "usual" induction on a single variable is too crude to distinguish σ(n) from a linear function.

The segments in S_k(m) are usually not in general position, but they are aggregated in fans by m segments. A fan of m segments is illustrated below for m = 4:

All the segments of a fan have a common left endpoint and positive slopes, and the length of the segments increases with the slope. Other than forming the fans, the segments are in general position in an obvious sense. For example, no endpoint of a segment lies inside another segment, the endpoints do not coincide unless the segments are in a common fan, and so on. Let f_k(m) denote the number of fans forming S_k(m); we have n_k(m) = m·f_k(m).

First we describe the construction of S_k(m) roughly, and later we make precise some finer aspects. As was already mentioned, we proceed by induction on k and m. One of the invariants of the construction is that the left endpoints of all the fans of S_k(m) always show up on the lower envelope. First we specify the boundary cases with k = 1 or m = 1.
For k = 1, S_1(m) is simply a single fan with m segments. For m = 1, S_k(1) is obtained from S_{k−1}(2) by the following transformation of each fan (each fan has 2 segments): The lower segment in each fan is translated by the same tiny amount to the left.

Now we describe the construction of S_k(m) for general k, m ≥ 2. First we construct S_k(m−1) inductively. We shrink this S_k(m−1) both vertically and horizontally by a suitable affine transform; the vertical shrinking is much more intensive than the horizontal one, so that all segments become very short and almost horizontal. Let S' be the transformed S_k(m−1). We will use many translated copies of S' as "microscopic" ingredients in the construction of S_k(m).

The "master plan" of the construction is obtained from S_{k−1}(M), where M = f_k(m−1) is the number of fans in S'. Namely, we first shrink S_{k−1}(M) vertically so that all segments become nearly horizontal, and then we apply the affine transform (x, y) ↦ (x, x+y) so that the slopes of all the segments are just a little over 1. Let S denote the resulting set. For each fan F in the master construction S, we make a copy S'_F of the microscopic construction S' and place it so that its leftmost endpoint coincides with the left endpoint of F. Let the segments of F be s_1, …, s_M, numbered by increasing slopes, and let ℓ_1, …, ℓ_M be the left endpoints of the fans in S'_F, numbered from left to right. The fan F is gigantic compared to S'_F. Now we take F apart: We translate each s_i so that its left endpoint goes to ℓ_i. The following drawing shows this schematically, since we have no chance to make a realistic drawing of S_k(m−1). Only a very small part of F near its left endpoint is shown.

This construction yields S_k(m). It correctly produces fans of size m, by appending one top (and long) segment to each fan in every S'_F.
If S' was taken sufficiently tiny, then all the vertices of the lower envelope of S are preserved, as well as those in each S'_F. Crucially, we need to make sure that the above transformation of each fan F in S yields M−1 new vertices on the envelope, as is indicated below:

The new vertices lie on the right of S'_F but, in the scale of the master construction S, very close to the former left endpoint of F, and so they indeed appear on the lower envelope.

This is where we need to make the whole construction more precise, namely, to say more about the structure of the fans in S_k(m). Let us call a fan r-escalating if the ratio of the slopes of every two successive segments in the fan is at least r. It is not difficult to check that for any given r > 1, the construction of S_k(m) described above can be arranged so that all fans in the resulting set are r-escalating. Then, in order to guarantee that the M−1 new vertices per fan arise in the general inductive step described above, we make sure that the fans in the master construction S are affine transforms of r-escalating fans for a suitable very large r. More precisely, let Q be a given number and let r = r(k, Q) be sufficiently large and δ = δ(k, Q) > 0 sufficiently small: Let F arise from an r-escalating fan by the affine transformation described above (which makes all slopes a little bigger than 1), and assume that the shortest segment has length 1, say. Suppose that we translate the left endpoint of s_i, the segment with the ith smallest slope in F, by δ_1 + δ_2 + ⋯ + δ_i almost horizontally to the right, where δ ≤ δ_i ≤ Qδ. Then it is not difficult to see, or calculate, that the lower envelope of the translated segments of F looks combinatorially like that in the last picture and has M−1 new vertices. The reader who is not satisfied with this informal argument can find real and detailed calculations in the book [SA95].
We want to prove that the complexity of the lower envelope of S_k(m) is at least km times the number of fans; in our notation,

e_k(m) ≥ km·f_k(m).

This is simple to do by induction, although the numbers involved are frighteningly large. For k = 1, we have f_1(m) = 1 and e_1(m) = m+1, so we are fine. For m = 1, we obtain f_k(1) = 2f_{k−1}(2) and

e_k(1) = e_{k−1}(2) + 2f_{k−1}(2) ≥ (k−1)·2·f_{k−1}(2) + 2f_{k−1}(2) = 2k·f_{k−1}(2) = k·f_k(1).

In the construction of S_k(m) for k, m ≥ 2, each of the f_{k−1}(M) fans of the master construction S produces M = f_k(m−1) fans, and so f_k(m) = f_{k−1}(M)·M. For the envelope complexity we get a contribution of e_{k−1}(M) from S, e_k(m−1) from each copy of S', and M−1 new vertices for each copy of S'. Putting this together and using the inductive assumption to eliminate the function e, we have

e_k(m) ≥ e_{k−1}(M) + f_{k−1}(M)·[e_k(m−1) + M − 1]
≥ f_{k−1}(M)·[(k−1)M + k(m−1)M + M − 1]
≥ f_{k−1}(M)·[kM + k(m−1)M]
= km·M·f_{k−1}(M) = km·f_k(m).

Theorem 7.2.1 is proved. □

Note how the properties of the construction S_k(m) contradict the intuition gained from small pictures: Most of the segments appear many times on the lower envelope, and between two successive segment endpoints on the envelope there is typically a concave arc with quite a large number of vertices.

Bibliography and remarks. An example of n segments with superlinear complexity of the lower envelope was first obtained by Wiernik and Sharir [WS88], based on an abstract combinatorial construction of Davenport-Schinzel sequences of order 3 due to Hart and Sharir [HS86]. The simpler construction shown in this section was found by Shor (in an unpublished manuscript; a detailed presentation is given in [SA95]).

Exercises

1. Construct Davenport-Schinzel sequences of order 3 of superlinear length directly. That is, rephrase the construction explained in this section in terms of Davenport-Schinzel sequences instead of segments.
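The remark that small pictures mislead can also be seen computationally: for the few values of n one can exhaust, λ_3(n) looks perfectly linear. A tiny brute-force experiment (our own sketch, not part of the construction) determines λ_3(n) for very small n by depth-first search with pruning:

```python
def longest_ds3(n, limit=20):
    """Brute-force the maximum length of a Davenport-Schinzel sequence of
    order 3 (no ababa, no adjacent repetition) over the symbols 1..n."""

    def has_ababa(seq):
        # greedy longest a-b alternation for every ordered pair (a, b)
        for a in set(seq):
            for b in set(seq):
                if a == b:
                    continue
                length, want = 0, a
                for c in seq:
                    if c == want:
                        length += 1
                        want = b if want == a else a
                if length >= 5:  # ababa has 5 letters
                    return True
        return False

    best = 0

    def extend(seq):
        nonlocal best
        best = max(best, len(seq))
        if len(seq) >= limit:
            return
        for x in range(1, n + 1):
            if seq and seq[-1] == x:
                continue  # would violate the adjacency condition
            seq.append(x)
            if not has_ababa(seq):
                extend(seq)
            seq.pop()

    extend([])
    return best
```

For n = 1 and n = 2 this returns 1 and 4, and the growth for the next few n is indistinguishable from linear; the superlinearity of Theorem 7.2.1 only appears at astronomically large n.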
7.3 More on Davenport-Schinzel Sequences

Here we come back to the asymptotics of the Davenport-Schinzel sequences. We have already proved that λ_3(n)/n is unbounded. It even turns out that the construction in the proof of Theorem 7.2.1 yields an asymptotically tight lower bound for λ_3(n), which is of order nα(n). Of course, we should explain what α(n) is.

In order to define the extremely slowly growing function α, we first introduce a hierarchy of very fast growing functions A_1, A_2, …. We put

A_1(n) = 2n,
A_k(n) = A_{k−1} ∘ A_{k−1} ∘ ⋯ ∘ A_{k−1}(1)  (n-fold composition), k = 2, 3, ….

Only the first few of these functions can be described in usual terms: We have A_2(n) = 2^n and A_3(n) = 2^{2^{⋰^2}}, an exponential tower with n twos. The Ackermann function¹ A(n) is defined by diagonalizing this hierarchy: A(n) = A_n(n). And α is the inverse function to A: α(n) = min{k ≥ 1 : A(k) ≥ n}. Since A(4) is a tower of 2's of height 2^16, encountering a number n with α(n) > 4 in any physical sense is extremely unlikely.

¹ Several versions of the Ackermann function can be found in the literature, differing in minor details but with similar properties and orders of magnitude.

The Ackermann function was invented as an example of a function growing faster than any primitive recursive function. For people familiar with some of the usual programming languages, the following semiformal explanation can be given: No function as large as A(n) can be evaluated by a program containing only FOR loops, where the number of repetitions of each loop in the program has been computed before the loop begins. For a long time, it was thought that A(n) was a curiosity irrelevant to "natural" mathematical problems.
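The hierarchy A_k and the inverse function α can be written down directly from the definitions; the following sketch (ours) is usable only for very small arguments, precisely because A grows too fast to evaluate otherwise:

```python
def A(k, n):
    """A_1(n) = 2n; for k >= 2, A_k(n) is the n-fold composition of
    A_{k-1} applied to 1, exactly as in the text."""
    if k == 1:
        return 2 * n
    value = 1
    for _ in range(n):
        value = A(k - 1, value)
    return value

def alpha(n):
    """alpha(n) = min{k >= 1 : A(k, k) >= n}.  Safe only for n <= A(3, 3) = 16;
    for larger n the loop would try to evaluate the astronomical A(4, 4)."""
    k = 1
    while A(k, k) < n:
        k += 1
    return k
```

One can check that A(2, n) = 2^n, that A(3, 4) = 65536 is a tower of four 2's, and that α(n) = 3 already for every n from 5 up to 16.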
Then theoretical computer scientists discovered it in the analysis of an extremely simple algorithm that manipulates rooted trees, and subsequently it was found in the backyard of elementary geometry, namely in the asymptotics of the Davenport-Schinzel sequences.

As was already remarked above, a not too difficult analysis of the construction in Theorem 7.2.1 shows that λ_3(n) = Ω(nα(n)). This is the correct order of magnitude, and we will (almost) present the matching upper bound in the next section. Even the constants in the asymptotics of λ_3(n) are known with surprising precision. Namely, we have

(1/2)·nα(n) − 2n ≤ λ_3(n) ≤ 2nα(n) + O(n·√α(n)),

and so the gap in the main term is only a factor of 4, in spite of the complexity of the whole problem!

Higher-order Davenport-Schinzel sequences and their generalizations. The asymptotics of the functions λ_s(n) for fixed s > 3, which correspond to forbidden patterns ababa… with s+2 letters, is known quite well, although not entirely precisely. In particular, λ_4(n) is of the (strange) order n·2^{α(n)}, and for a general fixed s, we have

n·2^{p_s(α(n))} ≤ λ_s(n) ≤ n·2^{q_s(α(n))},

where p_s(x) is a polynomial of degree ⌊(s−2)/2⌋ (with a positive leading coefficient) and q_s(x) is a polynomial of the same degree, for s odd multiplied by log x. The proofs are similar in spirit to those shown for s = 3 but technically much more complicated. On the other hand, proving something like λ_s(n) = O(n log n) for every fixed s is not very difficult with the tricks from the proof of Proposition 7.4.2 below (see Exercise 7.4.1).

The Davenport-Schinzel sequences have the simple alternating forbidden pattern ababa…. More generally, one can consider sequences with an arbitrary fixed forbidden pattern v, such as abcdabcdabcd, where a, b, c, d must be distinct symbols.
Of course, here it is not sufficient to require that every two successive symbols in the sequence be distinct, since then the whole sequence could be 121212… of arbitrary length. To get a meaningful problem, one can assume that if the forbidden pattern v has k distinct letters (k = 4 in our example), then each k consecutive letters in the considered sequence avoiding v must be distinct. Let Ex(v, n) denote the maximum possible length of such a sequence over n symbols. It is known that for every fixed v, we have

Ex(v, n) ≤ O(n·2^{α(n)^c})

for a suitable exponent c = c(v). In particular, the length of such sequences is nearly linear in n. Moreover, many classes of patterns v are known with Ex(v, n) = O(n), although a complete characterization of such patterns is still elusive. For example, for patterns v consisting only of two letters a and b, Ex(v, n) is linear in n if and only if v contains no subsequence ababa (not necessarily contiguous). These results have already found nice applications in combinatorial geometry and in enumerative combinatorics.

Bibliography and remarks. Davenport and Schinzel [DS65] defined the sequences now associated with their names in 1965, motivated by a geometric problem from control theory leading to lower envelopes of a collection of planar curves. They established some simple upper bounds on λ_s(n). The next major progress was made by Szemerédi [Sze74], who proved that λ_s(n) ≤ C_s·n·log* n for a suitable C_s, where log* n is the inverse of the tower function A_3(n). Over ten more years passed until the breakthrough of Hart and Sharir [HS86], who showed that λ_3(n) is of order nα(n).
A recollection of Sharir about their discovery, after several months of trying to prove a linear upper bound and then learning about Szemerédi's paper, deserves to be reproduced (probably imprecisely but with Micha Sharir's kind consent): "We decided that if Szemerédi didn't manage to prove that λ_3(n) is linear then it is probably not linear. We were aware of only one result with a nonlinear lower bound not exceeding O(n log n), and this was Tarjan's bound of Θ(nα(n)) for path compressions. In desperation, we tried to relate it to our problem, and a miracle happened: The construction Tarjan used for his lower bound could be massaged a little so as to yield a similar lower bound for λ_3(n)."

The path compression alluded to is an operation on a rooted tree. Let T be a tree with root r and let p be a leaf-to-root path of length at least 2 in T. The compression of p makes all the vertices on p, except for r, sons of r, while all the other father-to-son relations in T remain unchanged. Tarjan [Tar75] proved, as a part of an analysis of a simple algorithm for the so-called UNION-FIND problem, that if T is a suitably balanced rooted tree with n nodes, then the total length of all paths in any sequence of successive path compressions performed on T is no more than O(nα(n)), and this is asymptotically tight in the worst case. Hart and Sharir put Davenport-Schinzel sequences of order 3 into correspondence with generalized path compressions (where only some nodes on the considered path become sons of the root, while the others retain the same father) and analyzed them in the spirit of Tarjan's proofs. Later the proofs were simplified and rephrased by Sharir to work directly with Davenport-Schinzel sequences.

The constant 1/2 in the lower bound on λ_3(n) is by Wiernik and Sharir [WS88], and the 2 in the upper bound is due to Klazar [Kla99] (he gives a self-contained proof somewhat different from that in [SA95]).
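The path compression operation described above is a one-line tree manipulation; in code (a minimal sketch of our own, representing the father relation as a dictionary mapping each non-root vertex to its father):

```python
def compress(father, leaf, root):
    """Compress the leaf-to-root path: every vertex on it except the root
    becomes a son of the root; all other father-son relations are
    unchanged.  Returns the length of the compressed path."""
    path = []
    v = leaf
    while v != root:
        path.append(v)
        v = father[v]
    for v in path:
        father[v] = root
    return len(path)
```

Repeated compressions flatten the tree very quickly; Tarjan's theorem bounds the total length of all compressed paths over any sequence of compressions by O(nα(n)).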
The most precise known bounds for λ_s(n) with s ≥ 4 were obtained by Agarwal, Sharir, and Shor [ASS89], as a slight improvement over earlier results of Sharir.

Davenport-Schinzel sequences are encountered in many geometric and nongeometric situations. Even the straightforward bound λ_2(n) = 2n−1 is often useful for simplifying proofs, and the asymptotics of the higher-order sequences allow one to prove bounds involving the function α(n) without too much work, although such bounds are difficult to derive from scratch. Numerous applications, mostly geometric, are listed in [SA95].

Single cell. Pollack, Sharir, and Sifrony [PSS88] proved that the complexity of a single cell in an arrangement of n segments in the plane is at most O(nα(n)), by a reduction to Davenport-Schinzel sequences of order 3 (see Exercise 1). A similar argument shows that a single cell in an arrangement of n curves, with every two curves intersecting at most s times, has complexity O(λ_{s+2}(n)) (see [SA95]).

Generalized Davenport-Schinzel sequences were first considered by Adamec, Klazar, and Valtr [AKV92]. The near-linear upper bound Ex(v, n) = O(n·2^{α(n)^c}) mentioned in the text is from Klazar [Kla92]. The most general results about patterns v with Ex(v, n) = O(n) were obtained by Klazar and Valtr [KV94]. A recent survey, including applications of the generalized Davenport-Schinzel sequences, was written by Valtr [Val99a].

We mention two applications. The first one concerns Ramsey-type questions for geometric graphs (already considered in the notes to Section 4.3). We consider an n-vertex graph G drawn in the plane whose edges are straight segments, and we ask, what is the maximum possible number of edges of G so that the drawing does not contain a certain geometric configuration?
Here we are interested in the following two types of configurations: k pairwise crossing edges, and 3 pairwise crossing edges together with k pairwise parallel edges, where two edges are called parallel if they do not cross and their four vertices are in convex position.

A graph with no two crossing edges is planar and thus has O(n) edges. It seems to be generally believed that forbidding k pairwise crossing edges forces O(n) edges for every fixed k. This has been proved for k = 3 by Agarwal, Aronov, Pach, Pollack, and Sharir [AAP+97], and for all k ≥ 4, the best known bound is O(n log n) due to Valtr (see [Val99a]). For k forbidden pairwise parallel edges, he derived an O(n) bound for every fixed k using generalized Davenport-Schinzel sequences, and the O(n log n) bound for k pairwise crossing edges follows by a neat simple reduction. In this connection, let us mention a nice open question: What is the smallest n = n(k) such that any straight-edge drawing of the complete graph K_n always contains k pairwise crossing edges? The best known bound is O(k²) [AEG+94], but perhaps the truth is O(k) or close to it.

The second application of generalized Davenport-Schinzel sequences concerns a conjecture of Stanley and Wilf. Let σ be a fixed permutation of {1, 2, …, k}. We say that a permutation π of {1, 2, …, n} contains σ if there are indices i_1 < i_2 < ⋯ < i_k such that σ(u) < σ(v) if and only if π(i_u) < π(i_v), 1 ≤ u < v ≤ k. Let N(σ, n) denote the number of permutations of {1, 2, …, n} that do not contain σ. The Stanley-Wilf conjecture states that for every k and σ there exists C such that N(σ, n) ≤ C^n for all n.
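The containment relation and the counting function N(σ, n) are easy to explore by brute force for very small n (an illustrative sketch of our own; permutations are represented as tuples of values):

```python
from itertools import combinations, permutations

def contains(pi, sigma):
    """Does pi contain sigma: are there indices i_1 < ... < i_k such that
    the values of pi at those indices are ordered exactly like sigma?"""
    k = len(sigma)
    for idx in combinations(range(len(pi)), k):
        if all((sigma[u] < sigma[v]) == (pi[idx[u]] < pi[idx[v]])
               for u in range(k) for v in range(u + 1, k)):
            return True
    return False

def N(sigma, n):
    """Number of permutations of {1, ..., n} not containing sigma,
    by exhaustive enumeration."""
    return sum(1 for pi in permutations(range(1, n + 1))
               if not contains(pi, sigma))
```

For σ = (1, 2, 3), for instance, the counts N(σ, n) = 1, 2, 5, 14, … are the Catalan numbers, which indeed grow only exponentially.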
Using generalized Davenport-Schinzel sequences, Alon and Friedgut [AF00] proved that log N(σ, n) ≤ n·β(n) for every fixed σ, where β(n) denotes a very slowly growing function, and established the Stanley-Wilf conjecture for a wide class of σ (previously, much fewer cases had been known). Klazar [Kla00] observed that the Stanley-Wilf conjecture is implied by a conjecture of Füredi and Hajnal [FH92] about the maximum number of 1's in an n×n matrix of 0's and 1's that does not contain a k×k submatrix having 1's in positions specified by a given fixed k×k permutation matrix. Füredi and Hajnal conjectured that at most O(n) 1's are possible. The analogous questions for other types of forbidden patterns of 1's in 0/1 matrices are also very interesting and very far from being understood; this is another direction of generalizing the Davenport-Schinzel sequences.

Exercises

1. Let C be a cell in an arrangement of n segments in the plane (assume general position if convenient).
(a) Number the segments 1 through n and write down the sequence of the segment numbers along the boundary of C, starting from an arbitrarily chosen vertex of the boundary (decide what to do if the boundary has several connected components!). Check that there is no ababab subsequence, and hence that the combinatorial complexity of C is no more than O(λ_4(n)).
(b) Find an example where an ababa subsequence does appear in the sequence constructed in (a).
(c) Improve the argument by splitting the segments suitably, and show that the boundary of C has complexity O(nα(n)).
2. We say that an n×n matrix A with entries 0 and 1 is good if it contains no submatrix with 1's placed as in

( 1 · 1 · )
( · 1 · 1 );

that is, if there are no indices i_1 < i_2 and j_1 < j_2 < j_3 < j_4 with a_{i_1 j_1} = a_{i_2 j_2} = a_{i_1 j_3} = a_{i_2 j_4} = 1.
(a) Prove that a good A has at most λ_s(n) + O(n) ones for a suitable constant s.
(b) Show that one can take s = 3 in (a).
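The forbidden configuration in Exercise 2 can be tested by brute force (an illustrative sketch of our own; the quadratic-by-quartic enumeration is fine for tiny matrices only):

```python
from itertools import combinations

def is_good(A):
    """Exercise 2: A (a 0/1 matrix given as a list of rows) is good if
    there are no i_1 < i_2 and j_1 < j_2 < j_3 < j_4 with
    A[i_1][j_1] = A[i_2][j_2] = A[i_1][j_3] = A[i_2][j_4] = 1."""
    rows, cols = len(A), len(A[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2, j3, j4 in combinations(range(cols), 4):
            if A[i1][j1] and A[i2][j2] and A[i1][j3] and A[i2][j4]:
                return False
    return True
```

For example, any permutation matrix is good (each row carries a single 1, so no row can supply the two 1's the pattern demands), while the 2×4 matrix consisting of the pattern itself is not.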
7.4 Towards the Tight Upper Bound for Segments

As we saw in Proposition 7.1.1, it is not very difficult to prove that the maximum length of a Davenport-Schinzel sequence of order 3 over n symbols satisfies λ_3(n) = O(n log n). Getting anywhere significantly below this bound seems much harder, and the tight bound requires double induction. But there is only one obvious parameter in the problem, namely the number n, and introducing the second variable for the induction is one of the keys to the proof.

Let w = a_1 a_2 … a_ℓ be a sequence. A nonrepetitive segment in w is a contiguous subsequence u = a_i a_{i+1} … a_{i+k} whose terms are all distinct. A sequence w is m-decomposable if it can be partitioned into at most m nonrepetitive segments (the partition need not be unique). Here is the main definition for the inductive proof: Let ψ(m, n) denote the maximum possible length of an m-decomposable Davenport-Schinzel sequence of order 3 over n symbols. First we relate ψ(m, n) to λ_3(n).

7.4.1 Lemma. Every Davenport-Schinzel sequence of order 3 over n symbols is 2n-decomposable, and consequently,

λ_3(n) ≤ ψ(2n, n).

Proof. Let w be the given Davenport-Schinzel sequence. We define a linear ordering ≺ on the symbols occurring in w: We set a ≺ b if the first occurrence of the symbol a in w precedes the first occurrence of the symbol b. We partition w into maximal strictly decreasing segments according to the ordering ≺. For instance, choosing the symbols so that the usual ordering of the digits coincides with ≺, the sequence 1 2 1 3 1 4 3 5 3 is partitioned as 1 | 2 1 | 3 1 | 4 3 | 5 3. Clearly, each strictly decreasing segment is a nonrepetitive segment as well, and so it suffices to show that the number of the maximal strictly decreasing segments is at most 2n (the tight bound is actually 2n−1).
Let u_j and u_{j+1} be two consecutive maximal strictly decreasing segments, let a be the last symbol of u_j, let i be its position in w, and let b be the first symbol of u_{j+1} (at the (i+1)st position). We claim that the ith position is the last occurrence of a or the (i+1)st position is the first occurrence of b. This will imply that we have at most 2n segments u_j, because each of the n symbols has (at most) one first and one last occurrence. Supposing that the claim is not valid, we find the forbidden subsequence ababa. We have a ≺ b, for otherwise the (i+1)st position could be appended to u_j, contradicting the maximality. The b at position i+1 is not the first b, and so there is some b before the ith position. There must be another a even before that b, for otherwise we would have b ≺ a. Finally, there is an a after the position i+1, and altogether we have the desired ababa. □

Next, we derive a powerful recurrence for ψ(m, n). It is perhaps best to understand the proof first, and the complicated-looking statement then becomes quite natural.

7.4.2 Proposition. Let m, n ≥ 1 and p ≤ m be integers, and let m = m_1 + m_2 + ⋯ + m_p be a partition of m into p nonnegative addends. Then there is a partition n = n_1 + n_2 + ⋯ + n_p + n* such that

ψ(m, n) ≤ 4m + 4n* + ψ(p, n*) + Σ_{k=1}^p ψ(m_k, n_k).

Proof. Let w be an m-decomposable Davenport-Schinzel sequence of order 3 over n symbols attaining ψ(m, n). Let w = u_1 u_2 … u_m be a partition of w into nonrepetitive segments. Let w^1 = u_1 u_2 … u_{m_1} consist of the first m_1 nonrepetitive segments, w^2 = u_{m_1+1} … u_{m_1+m_2} of the next m_2 segments, and so on until w^p. We call w^1, w^2, …, w^p the parts of w. We divide the symbols in w into two classes: A symbol a is local if it occurs in (at most) one of the parts w^k, and it is nonlocal if it appears in at least two distinct parts.
We let n* be the number of distinct nonlocal symbols and n_k the number of distinct local symbols occurring in w^k. If we delete all the nonlocal symbols from w^k, we obtain an m_k-decomposable sequence over n_k symbols with no ababa. However, this sequence can still contain consecutive repetitions of some symbols, which is forbidden for a Davenport-Schinzel sequence. So we delete all symbols in each repetition but the first one; for example, 122232244 becomes 12324. We note that consecutive repetitions can occur only at the boundaries of the nonrepetitive segments u_j, and so at most m_k−1 local symbols have been deleted from w^k. The remaining sequence is already a Davenport-Schinzel sequence, and so the total number of positions of w occupied by the local symbols is at most

Σ_{k=1}^p [m_k − 1 + ψ(m_k, n_k)] ≤ m + Σ_{k=1}^p ψ(m_k, n_k).

Next, we need to deal with the nonlocal symbols. Let us say that a nonlocal symbol a is a middle symbol in a part w^k if it occurs both before w^k and after w^k; otherwise, it is a nonmiddle symbol in w^k. We estimate the contributions of middle and nonmiddle symbols separately.

First we consider each part w^k in turn, and we delete all local symbols and all nonmiddle symbols from it. Then we look at the sequence that remains from w after these deletions, and we delete all symbols but one from each contiguous repetition. As in the case of the local symbols, we have deleted at most m middle symbols. Clearly, the resulting sequence is a Davenport-Schinzel sequence of order 3 over n* symbols, and we claim that it is p-decomposable (this is perhaps the most surprising part of the proof). Indeed, if we consider what remained from some w^k, we see that this sequence cannot contain a subsequence bab, because some a's precede and follow w^k and we would get the forbidden ababa. Therefore, the surviving symbols of w^k form a nonrepetitive segment.
Hence the total contribution of the middle symbols to the length of w is at most m + 'l/J (p, n ). The nonmiddle symbols in a given wk can conveniently be divided into starting and ending symbols (with the obvious meaning). We concentrate on the total contribution of the starting symbols; the case of the ending symbols is symmetric. Let n'k be the number of distinct starting symbols in wk; we have E)=l nk < n, since a symbol is starting in at most one part. Let us erase from wk all but the starting symbols, and then we also remove all contiguous repetitions in each w k, as in the two previous cases. The remaining starting symbols contain no subsequence abab, since we know that there is some a following wk. Thus, what is left of wk is a Davenport-Schinzel sequence of order 2 over nk syn1bols, and as such it has length at most 2nk-1. Therefore, the total number of starting symbols in all of w is no more than p L(mk - 1 + 2nk - 1) < m + 2n. k=l Summing up the contributions of local symbols, middle symbols, starting symbols, and ending symbols, we arrive at the bound claimed in the propo­ sition. Here is a graphic summary of the proof: symbols of w local: nonlocal m for repetitions + Ek 1/J(mk, nk) middle: m for repetitions + 1/J(p, n) (no aba in wk) non-middle m for repetitions starting: + Ek A2(n'k) (no abab in wk) ending: same as starting D 7.4 Towards the Tigl1t Upper Bound for Segments 181 How to prove good bounds from the recurrence. The recurrence just proved can be used to show that 1/J(m, n) = O((m+n)a(m)), and Lemma 7.4.1 then yields the desired conclusion .X3 ( n) = 0( na( n)). We do not give the full calculation; we only indicate how the recurrence can be used to prove better and better bounds starting from the obvious estimate 'lj;(m, n) < mn. First we prove that 1/J ( m, n) < 4m log2 m + 6n, for m a power of 2. 
From our recurrence with $p = 2$ and $m_1 = m_2 = \frac{m}{2}$, we obtain

$$\psi(m,n) \le 4m + 4n^* + \psi(2, n^*) + \psi\bigl(\tfrac{m}{2}, n_1\bigr) + \psi\bigl(\tfrac{m}{2}, n_2\bigr).$$

Proceeding by induction on $\log_2 m$ and using $\psi(2,n) = 2n$, we estimate the last expression by

$$4m + 4n^* + 2n^* + 2m(\log_2 m - 1) + 6n_1 + 2m(\log_2 m - 1) + 6n_2 = 4m\log_2 m + 6n$$

as required. Next, we assume that $m = A_3(r)$ (the tower function) for an integer $r$ and prove $\psi(m,n) \le 8rm + 10n$ by induction on $r$. This time we choose $p = \frac{m}{\log_2 m}$ and $m_k = \frac{m}{p} = \log_2 m = A_3(r{-}1)$. For estimating $\psi(p, n^*)$ we use the bound derived earlier. This gives

$$\psi(m,n) \le 4m + 4n^* + 4p\log_2 p + 6n^* + \sum_{k=1}^{p} \psi(m_k, n_k) \le 4m + 4n^* + 4m + 6n^* + 8(r-1)m + 10(n - n^*) = 8rm + 10n.$$

So, by now we already know that $\lambda_3(n) = O(n\log^* n)$, where $\log^* n$ is the inverse of the tower function $A_3(\cdot)$. This bound is as good as linear for practical purposes. In general, one proves that for $m = A_k(r)$,

$$\psi(m,n) \le (4k-4)rm + (4k-2)n,$$

by double induction on $k$ and $r$. The inductive assumption for $k{-}1$ is always used to bound the term $\psi(p, n^*)$. We omit the rest of the calculation.

Bibliography and remarks. In this section we draw mostly from [SA95], with some changes in terminology.

Exercises

1. For integers $s \ge t \ge 1$, let $\psi_s^t(m,n)$ denote the maximum length of a Davenport-Schinzel sequence of order $s$ (no subsequence $abab\ldots$ with $s{+}2$ letters) over $n$ symbols that can be partitioned into $m$ contiguous segments, each of them a Davenport-Schinzel sequence of order $t$. In particular, $\psi_s(m,n) = \psi_s^1(m,n)$ is the maximum length of a Davenport-Schinzel sequence of order $s$ over $n$ symbols that consists of $m$ nonrepetitive segments.
(a) Prove that $\lambda_s(n) \le \psi_s^{s-1}(n,n)$. [1]
(b) Prove that ... [2]
(c) Let $w$ be a sequence witnessing $\psi_s(m,n)$ and let $m = m_1 + m_2 + \cdots + m_p$ be some partition of $m$. Divide $w$ into $p$ parts as in the proof of Proposition 7.4.2, the $k$th part consisting of $m_k$ nonrepetitive segments.
With the terminology and notation of that proof, check that the local symbols contribute at most $m + \sum_{k=1}^{p} \psi_s(m_k, n_k)$ to the length of $w$, the middle symbols at most $m + \psi_s^{s-2}(p, n^*)$, and the starting symbols no more than $m + \psi_{s-1}(m, n^*)$. [2]
(d) Prove by induction that $\psi_s(m,n) \le C_s (m+n)\log^{s-2}(m+1)$ and $\lambda_s(n) \le C'_s\, n \log^{s-2}(n+1)$, for all $s \ge 2$ and suitable $C_s$ and $C'_s$ depending only on $s$ (set $p = 2$ in (c)). [2]

7.5 Up to Higher Dimension: Triangles in Space

As we have seen, lower envelopes in the plane can be handled by means of a simple combinatorial abstraction, the Davenport-Schinzel sequences. Unfortunately, so far, no reasonable combinatorial model has been found for higher-dimensional lower envelopes. The known upper bounds are usually much cruder than those in the plane, and their proofs are quite complex and technical.

We start with almost the simplest possible case: triangles in $\mathbf{R}^3$. Here is an example of the lower envelope of triangles viewed from below:

[Figure: the lower envelope of several triangles in space, seen from below.]

It is actually the vertical projection of the lower envelope on a horizontal plane lying below all the triangles. The projection consists of polygons, both convex and nonconvex, and the combinatorial complexity of the lower envelope is the total number of these polygons plus the number of their edges and vertices. Simple arguments, say using the Euler relation for planar graphs, show that if we do not care about constant factors, it suffices to consider the vertices of the polygons. It turns out that the worst-case complexity of the lower envelope is of order $n^2\alpha(n)$. Here we prove a simpler, suboptimal bound:

7.5.1 Proposition. The combinatorial complexity of the lower envelope of $n$ triangles in $\mathbf{R}^3$ is at most $O(n\sigma(n)\log n) = O(n^2\alpha(n)\log n)$, where $\sigma(n)$ stands for the maximum complexity of the lower envelope of $n$ segments in the plane.

It is convenient, although not really essential, to work with triangles in general position.
As usual, a perturbation argument shows that this is where the maximum complexity of the lower envelope is attained. The precise general position requirements can be found by inspecting the forthcoming proof, and we leave this to the reader.

Walls and boundary vertices. Let $H$ be a set of $n$ triangles in $\mathbf{R}^3$ in general position. We need to bound the total number of vertices in the projection of the lower envelope. The vertices are of two types: those that lie on the vertical projection of an edge of some of the triangles (boundary vertices), and those obtained from intersections of 3 triangles (inner vertices). In the above picture there are many boundary vertices but only two inner vertices. Yet the boundary vertices are rather easy to deal with, while the inner vertices present the real challenge.

We claim that the total number of boundary vertices is at most $O(n\sigma(n))$. To see this, let $a$ be an edge of a triangle $h \in H$ and let $\pi_a$ be the "vertical wall" through $a$, i.e., the union of all vertical lines that intersect $a$. Each triangle of $H$ intersects $\pi_a$ in a (possibly empty) segment. The following drawing shows the triangle $h$, the wall $\pi_a$, and the segments within it:

[Figure: the triangle $h$, the vertical wall $\pi_a$ over its edge $a$, and the segments in which the other triangles intersect the wall.]

Essentially, the boundary vertices lying on the vertical projection of $a$ correspond to breakpoints of the lower envelope of these segments within $\pi_a$. Only the segment $a$ needs special treatment, since on the one hand, its intersections with other segments can give rise to boundary vertices, but on the other hand, it does not obscure things lying above it. To take care of this, we can consider two lower envelopes, one for the arrangement including $a$ and another without $a$. So each edge $a$ contributes at most $2\sigma(n)$ boundary vertices, and the total number of boundary vertices is $O(n\sigma(n))$.

Levels.
Each inner vertex of the projected lower envelope corresponds to a vertex of the arrangement of $H$ lying on the lower envelope, i.e., of level 0 (recall that according to our definition of arrangement, the vertices are intersections of 3 triangles). The level of a vertex $v$ is defined in the usual way: It is the number of triangles of $H$ that intersect the open ray emanating from $v$ vertically downwards. Let $f_k(H)$ denote the number of vertices of level $k$, $k = 0, 1, \ldots$. Further, let $f_k(n)$ be the maximum of $f_k(H)$ over all sets $H$ of $n$ triangles (in general position). So our goal is to estimate $f_0(n)$.

The first part of the proof of Proposition 7.5.1 employs a probabilistic argument, very similar to the one in the proof of the zone theorem (Theorem 6.4.1), to relate $f_0(H)$ and $f_1(H)$ to $f_0(n{-}1)$.

7.5.2 Lemma. For every set $H$ of $n$ triangles in general position, we have

$$\frac{n-3}{n}\, f_0(H) \le f_0(n{-}1) - \frac{1}{n}\, f_1(H).$$

Proof. We pick one triangle $h \in H$ at random and estimate $\mathbf{E}\bigl[f_0(H \setminus \{h\})\bigr]$, the expected number of vertices of the lower envelope after removing $h$. Every vertex of the lower envelope of $H$ is determined by 3 triangles, and so its chances of surviving the removal of $h$ are $\frac{n-3}{n}$. For a vertex $v$ of level 1, the probability of its appearing on the lower envelope is $\frac{1}{n}$, since we must remove the single triangle lying below $v$. Therefore,

$$\mathbf{E}\bigl[f_0(H \setminus \{h\})\bigr] = \frac{n-3}{n}\, f_0(H) + \frac{1}{n}\, f_1(H).$$

The lemma follows by using $f_0(H \setminus \{h\}) \le f_0(n{-}1)$. $\Box$

Before proceeding, let us inspect the inequality in the lemma just proved. Let $H$ be a set of $n$ triangles with $f_0(H) = f_0(n)$. If we ignored the term $\frac{1}{n} f_1(H)$, we would obtain the recurrence $\frac{n-3}{n}\, f_0(n) \le f_0(n{-}1)$. This yields only the trivial estimate $f_0(n) = O(n^3)$, which is not surprising, since we have used practically no geometric information about the triangles.
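The counting behind Lemma 7.5.2 is the exact identity $\sum_{h \in H} f_0(H \setminus \{h\}) = (n-3)\,f_0(H) + f_1(H)$, and it can be sanity-checked combinatorially, with abstract stand-ins for the vertices rather than actual triangles: each "vertex" is just a defining triple of triangle indices together with the set of triangles below it. This sketch is ours, not from the text.

```python
import random
from itertools import combinations

def envelope_counts(vertices, removed=None):
    """vertices: list of (defining_triple, below_set).  Count vertices of
    level 0 and level 1 after removing triangle `removed` (or none)."""
    f0 = f1 = 0
    for triple, below in vertices:
        if removed is not None and removed in triple:
            continue                     # the vertex disappears entirely
        lvl = len(below - {removed}) if removed is not None else len(below)
        if lvl == 0:
            f0 += 1
        elif lvl == 1:
            f1 += 1
    return f0, f1

# random abstract instance over n "triangles"
random.seed(3)
n = 8
vertices = []
for triple in combinations(range(n), 3):
    if random.random() < 0.3:
        others = [t for t in range(n) if t not in triple]
        below = set(random.sample(others, random.randint(0, 2)))
        vertices.append((triple, below))

f0, f1 = envelope_counts(vertices)
total = sum(envelope_counts(vertices, removed=h)[0] for h in range(n))
# summing f0(H \ {h}) over all h gives (n-3) f0(H) + f1(H)
assert total == (n - 3) * f0 + f1
```

The identity holds for any such instance: a level-0 vertex survives exactly the $n-3$ removals avoiding its defining triple, and a level-1 vertex appears on the envelope exactly once, when its single below-triangle is removed.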
In order to do better, we now want to show that $f_1(H)$ is almost as big as $f_0(H)$, in which case the term $\frac{1}{n} f_1(H)$ decreases the right-hand side significantly. Namely, we prove that $f_1(H) \ge f_0(H) - O(n\sigma(n))$. Substituting this into the inequality in Lemma 7.5.2, we arrive at

$$\frac{n-2}{n}\, f_0(n) \le f_0(n{-}1) + O(\sigma(n)). \qquad (7.1)$$

We practiced this kind of recurrence in Section 6.4: The substitution $\varphi(n) = \frac{f_0(n)}{n(n-1)}$ quickly yields $f_0(n) = O(n\sigma(n)\log n)$. So in order to prove Proposition 7.5.1, it remains to derive (7.1), and this is the geometric heart of the proof.

Making someone pay for the level-0 vertices. We are going to relate the number of level-0 vertices to the number of level-1 vertices by a local charging scheme: From each vertex $v$ of level 0, we walk around a little and find suitable vertices of level 1 to pay for $v$, as follows. The level-0 vertex $v$ is incident to 6 edges, 3 of them having level 0 and 3 level 1:

[Figure: the six edges incident to the level-0 vertex $v$.]

The picture shows only a small square piece from each of the triangles incident to $v$. The lower envelope is on the bottom, and the edges of level 1 emanating from $v$, going in the upward direction, are marked by arrows. Let $e$ be one of the level-1 edges going from $v$ away from the lower envelope. We follow it until one of the following events occurs:

(i) We reach the intersection $v'$ of $e$ with a vertical wall $\pi_a$ erected from an edge $a$ of some triangle. This $v'$ pays 1 unit to $v$.

(ii) We reach the intersection $v'$ of $e$ with another triangle; i.e., $v'$ is a vertex of the arrangement of $H$. This $v'$ pays $\frac{1}{3}$ of a unit to $v$.

This is done for all 3 level-1 edges emanating from $v$ and for all vertices $v$ of level 0. Clearly, every $v$ receives at least 1 unit in total. It remains to discuss what kind of vertices the $v'$ are and to estimate the total charge paid by them.
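The solution of recurrence (7.1) can also be checked numerically. The following sketch (ours, with illustrative constants) iterates the recurrence as an equality, taking $\sigma(n) = n$, i.e., treating $\alpha(n)$ as a constant; the ratio $f_0(n)/(n^2 \log n)$ then stays bounded, matching the claimed $O(n\sigma(n)\log n)$.

```python
import math

def f0_upper(N, C=1.0):
    """Iterate ((n-2)/n) f(n) = f(n-1) + C*sigma(n) with sigma(n) = n,
    and track the worst ratio f(n) / (n^2 log n)."""
    f = 0.0
    worst = 0.0
    for n in range(3, N + 1):
        f = (f + C * n) * n / (n - 2)
        if n >= 10:
            worst = max(worst, f / (n * n * math.log(n)))
    return f, worst

f_N, worst = f0_upper(20000)
# the ratio stays bounded (here well below 2), so f0(n) = O(n^2 log n)
assert worst < 2.0
# while f itself grows faster than quadratically
assert f_N > 20000 ** 2
```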
Since there is no other vertex on $e$ between $v$ and $v'$, a particular $v'$ can be reached from at most 2 distinct $v$ in case (i) and from at most 3 distinct $v$ in case (ii). So a $v'$ is charged at most 2 according to case (i) or at most 1 according to case (ii) (because of the general position of $H$, these cases are never combined, since no intersection of 3 triangles lies in any of the vertical walls $\pi_a$).

Next, we observe that in case (i), $v'$ has level at most 2, and in case (ii), it has level exactly 1. This is best seen by considering the situation within the vertical plane containing the edge $e$. As we move along $e$, just after leaving $v$ we are at level 1, with exactly one triangle $h$ below, as is illustrated next:

[Figure: the situation within the vertical plane containing $e$, in case (i) and in case (ii).]

The level does not change unless we enter a vertical wall $\pi_a$ or another triangle $h' \in H$. If we first enter some $\pi_a$, then case (i) occurs with $v' = e \cap \pi_a$, and the level cannot change by more than 1 by entering $\pi_a$. If we first reach a triangle $h'$, we have case (ii) with $v' = e \cap h'$, and $v'$ has level 1.

Each $v'$ reached in case (i) is a vertex in the arrangement of segments within one of the walls $\pi_a$, and it has level at most 2 there. It is easy to show by the technique of the proof of Clarkson's theorem on levels (Theorem 6.3.1) that the number of vertices of level at most 2 in an arrangement of $n$ segments is $O(\sigma(n))$ (Exercise 2). Since we have $3n$ walls $\pi_a$, the total amount paid according to case (i) is $O(n\sigma(n))$. As for case (ii), all the $v'$ are at level 1, and each pays at most 1, so the total charge is at most $f_1(H)$.

Therefore, $f_0(H) \le f_1(H) + O(n\sigma(n))$, which establishes (7.1) and concludes the proof of Proposition 7.5.1. $\Box$

Bibliography and remarks. The sharp bound of $O(n^2\alpha(n))$ for the lower envelope of $n$ triangles in $\mathbf{R}^3$ was first proved by Pach and Sharir [PS89] using a divide-and-conquer argument.
A tight bound of $O(n^{d-1}\alpha(n))$ for $(d{-}1)$-dimensional simplices in $\mathbf{R}^d$ was established a little later by Edelsbrunner [Ede89]. Tagansky [Tag96] found a considerably simpler argument and also proved some new results. We used his method in the proof of Proposition 7.5.1, but since we omitted a subtler analysis of the charging scheme, we obtained a suboptimal bound. To improve the bound to $O(n^2\alpha(n))$, the charging scheme is modified a little: The $v'$ reached in case (i) pays $\frac{3}{4}$ instead of 1, and the $v'$ reached in case (ii) pays $\frac{1}{k}$ if it was reached from $k \le 3$ distinct $v$. Then it can be shown, with some work, that every vertex of the lower envelope receives a charge of at least $\frac{3}{2}$ (and not only 1); see [Tag96]. Hence $f_1(H) \ge \frac{3}{2} f_0(H) - O(n\sigma(n))$, and the resulting recurrence becomes $\frac{n - 3/2}{n}\, f_0(n) \le f_0(n{-}1) + O(\sigma(n))$. It implies $f_0(n) = O(n\sigma(n))$; proving this is somewhat complicated, since the simple substitution trick does not work here.

Exercises

1. Given a construction of a set of $n$ segments in the plane with lower envelope of complexity $\sigma(n)$, show that the lower envelope of $n$ triangles in $\mathbf{R}^3$ can have complexity $\Omega(n\sigma(n))$. [2]

2. Show that the number of vertices of level at most $k$ in the arrangement of $n$ segments (in general position) in the plane is at most $O\bigl(k^2 \sigma(\lfloor n/k \rfloor)\bigr)$. The proof of the general case of Clarkson's theorem on levels (Theorem 6.3.1) applies almost verbatim. [1]

7.6 Curves in the Plane

In the proof for triangles shown in the previous section, if we leave a vertex on the lower envelope along an edge of level 1, we cannot come back to the lower envelope before one of the events (i) or (ii) occurs. Once we start considering lower envelopes of curved surfaces, such as graphs of polynomials of degree $s$ for some fixed $s$, this is no longer true: The edge can immediately go back to another vertex on the lower envelope. Then we would be trying to charge one vertex of the lower envelope to another.
This can be done, but one must define an "order" for each vertex, and charge envelope vertices of order $i$ only to vertices of order smaller than $i$ or to vertices of significantly higher levels.

We show this for the case of curves in the plane. This example is artificial, since using Davenport-Schinzel sequences leads to much sharper bounds. But we can thus demonstrate the ideas of the higher-dimensional proof, while avoiding many technicalities. We remark that this proof is not really an upgrade of the one for triangles: Here we aim at a much cruder bound, and so some of the subtleties in the proof for triangles can be neglected.

We consider $n$ planar curves as discussed in Section 7.1: They are graphs of continuous functions $\mathbf{R} \to \mathbf{R}$, and every two intersect at most $s$ times. Moreover, we assume for convenience that the curves cross at each intersection and no 3 curves have a common point.

7.6.1 Proposition. The maximum possible number of vertices on the lower envelope of a set $H$ of $n$ curves as above is at most $O(n^{1+\varepsilon})$ for every fixed $\varepsilon > 0$. That is, for every $s$ and every $\varepsilon > 0$ there exists $C$ such that the bound is at most $Cn^{1+\varepsilon}$ for all $n$.

Proof. Let $v$ be a vertex of the arrangement of $H$. We say that $v$ has order $i$ if it is the $i$th leftmost intersection of the two curves defining it. So the order is an integer between 1 and $s$. Let $f^{(i)}_{\le k}(H)$ denote the number of vertices of order $i$ and level at most $k$ in the arrangement of $H$, and let $f^{(i)}_{\le k}(n)$ be the maximum of this quantity over all $n$-element sets $H$ of curves as in the proposition. Further, we write $f_{\le k}(H) = \sum_{i=1}^{s} f^{(i)}_{\le k}(H)$ for the total number of vertices of level at most $k$. For $k = 0$ we write just $f$ instead of $f_{\le 0}$, and similarly $f^{(i)}$.

Let $v$ be a vertex of order $i$ on the lower envelope. We define a charging scheme; that is, we describe who is going to pay for $v$. We start walking from $v$ to the left along the curve $h$ passing through $v$ and not being on the lower envelope on the left of $v$.
If $k_i$ vertices are encountered, without returning to the lower envelope or escaping to $-\infty$, then we charge each of these $k_i$ vertices $\frac{1}{k_i}$ units. Here $k_1, k_2, \ldots, k_s$ are integer parameters whose values will be fixed later, but one can think of them as very large constants. If we end up at $-\infty$ before encountering $k_i$ vertices, we charge 1 to the curve $h$ itself. Finally, if we are back at the lower envelope without having passed at least $k_i$ vertices, then, crucially, we must have crossed the second curve $h'$ defining the vertex $v$ again, at a vertex $v'$ of order $i{-}1$, and this $v'$ pays 1 for $v$. A picture illustrates these three cases of charging:

[Figure: the three cases of charging, with the curves $h$ and $h'$ and the vertex $v$.]

We see that $v$ can charge a curve or a vertex of a smaller order significantly, or it can charge many vertices of arbitrary orders, but each of them just a little. We do this charging for all vertices $v$ of order $i$ on the lower envelope.

A given vertex $v'$ of the arrangement can be charged only if it has level at most $k_i$, and it can be charged at most twice: The vertices of the lower envelope that might possibly charge $v'$ can be found by following the two curves passing through $v'$ to the right. So if $v'$ has order different from $i{-}1$, then it pays at most $\frac{2}{k_i}$, and if it has order $i{-}1$, then it can be charged 1 extra. Finally, each curve pays at most 1. Since at least 1 unit was paid for each vertex of order $i$ on the lower envelope, we obtain

$$f^{(i)}(n) \le n + f^{(i-1)}_{\le k_i}(n) + \frac{2}{k_i}\, f_{\le k_i}(n). \qquad (7.2)$$

Next, we want to convert this into a recurrence involving only $f$ and the $f^{(i)}$. To this end, we estimate $f^{(i)}_{\le k}$ by following the proof of Clarkson's theorem on levels almost literally (as for the case of segments in Exercise 7.5.2). We obtain

$$f^{(i)}_{\le k}(n) = O\Bigl(k^2\, f^{(i)}\bigl(\bigl\lfloor \tfrac{n}{k} \bigr\rfloor\bigr)\Bigr).$$

By substituting this bound (and its analogue for $f_{\le k}$) into the right-hand side of (7.2), we arrive at the system of inequalities

$$f^{(i)}(n) \le C\Bigl(n + k_i^2\, f^{(i-1)}\bigl(\bigl\lfloor \tfrac{n}{k_i} \bigr\rfloor\bigr) + k_i\, f\bigl(\bigl\lfloor \tfrac{n}{k_i} \bigr\rfloor\bigr)\Bigr), \qquad i = 1, 2, \ldots, s, \qquad (7.3)$$

where $C$ is a suitable constant and where we put $f^{(0)} = 0$. We also have $f \le f^{(1)} + \cdots + f^{(s)}$.
It remains to derive the bound $f(n) = O(n^{1+\varepsilon})$ from this recurrence, which is not really difficult but still somewhat interesting. It is essential that $f(\lfloor n/k_i \rfloor)$ appears only with the coefficient $k_i$ on the right-hand side, in contrast to $f^{(i-1)}(\lfloor n/k_i \rfloor)$, which has coefficient $k_i^2$.

Let $\varepsilon > 0$ be small but fixed. Let us see what happens if we try to prove the bounds $f^{(i)}(n) \le A_i n^{1+\varepsilon}$ and $f(n) \le A n^{1+\varepsilon}$ by induction on $n$ using (7.3), where the $A_i$ are suitable (large) constants and $A = \sum_{i=1}^{s} A_i$. The term $n$ on the right-hand side of (7.3) is small compared to $n^{1+\varepsilon}$, and so we ignore it for the moment. We also neglect the floor functions. By substituting the inductive hypothesis $f^{(i)}\bigl(\lfloor \frac{n}{k_i} \rfloor\bigr) \le A_i \bigl(\frac{n}{k_i}\bigr)^{1+\varepsilon}$ into the right-hand side of (7.3), we obtain roughly

$$n^{1+\varepsilon}\bigl(C A k_i^{-\varepsilon} + C A_{i-1} k_i^{1-\varepsilon}\bigr) \le n^{1+\varepsilon}\bigl(C A k_i^{-\varepsilon} + C A_{i-1} k_i\bigr).$$

For the induction to work, $A_i$ must be larger than the expression in parentheses. To make $A_i$ bigger than the second term in parentheses, we can set $A_i = 3C k_i A_{i-1}$, say (the constant 3 is chosen to leave enough room for the other terms). Then $A_i = A_1 C_1^{i-1} k_2 k_3 \cdots k_i$, with $C_1 = 3C$. These $A_i$ grow fast, and so $A$ is of the order of $A_s$. Then the requirement that $A_i$ be larger than the first term in parentheses yields, after a little simplification,

$$k_i^{\varepsilon} \ge C_1^{\,s-i+1} k_{i+1} k_{i+2} \cdots k_s.$$

Therefore, the $k_i$ should decrease very fast with $i$. We can set $k_s = C_1^{1/\varepsilon}$ and $k_i = \bigl(C_1^{\,s-i+1} k_{i+1} k_{i+2} \cdots k_s\bigr)^{1/\varepsilon}$. Now setting $A_1$, which is still a free parameter, sufficiently (enormously) large, we can make sure that the desired bounds $f^{(i)}(n) \le A_i n^{1+\varepsilon}$ hold at least up to $n = k_1$, so that we can really use the recurrence (7.3) in the induction with the $k_i$ defined above. These considerations indicate that the induction works; to be completely sure, one should perform it once more in detail. But we leave this to the reader's diligence and declare Proposition 7.6.1 proved. $\Box$

Bibliography and remarks.
The method shown in this section first appeared in Halperin and Sharir [HS94], who considered lower envelopes of curved objects in $\mathbf{R}^3$.

7.7 Algebraic Surface Patches

Here we state, without proofs, general bounds on the complexity of higher-dimensional lower envelopes. We also discuss a far-reaching generalization: an analogous bound for the complexity of a cell in a $d$-dimensional arrangement.

Roughly speaking, the lower envelope of any $n$ "well-behaved" pieces of $(d{-}1)$-dimensional surfaces in $\mathbf{R}^d$ has complexity close to $n^{d-1}$. While for planar curves it is simple to say what "well-behaved" means, the situation is more problematic in higher dimensions. The known proofs are geometric, and listing as axioms all the geometric properties of "well-behaved pieces of surfaces" actually used in them seems too cumbersome to be useful. Thus, the most general known results, and even conjectures, are formulated for families of algebraic surface patches, although it is clear that the proofs apply in more general settings.

First we recall the definition of a semialgebraic set. This is a set in $\mathbf{R}^d$ definable by a Boolean combination of polynomial inequalities. More formally, a set $A \subseteq \mathbf{R}^d$ is called semialgebraic if there are polynomials $p_1, p_2, \ldots, p_r \in \mathbf{R}[x_1, \ldots, x_d]$ (i.e., polynomials in $d$ variables with real coefficients) and a Boolean formula $\Phi(X_1, X_2, \ldots, X_r)$ (such as $X_1 \,\&\, (X_2 \vee X_3)$), where $X_1, \ldots, X_r$ are variables attaining values "true" or "false", such that

$$A = \bigl\{x \in \mathbf{R}^d : \Phi\bigl(p_1(x) \ge 0,\, p_2(x) \ge 0,\, \ldots,\, p_r(x) \ge 0\bigr)\bigr\}.$$

Note that the formula may involve negations, and so the sets $\{x \in \mathbf{R}^d : p_1(x) > 0\}$ and $\{x \in \mathbf{R}^d : p_1(x) = 0\}$ are semialgebraic, for example. One might want to allow for quantifiers, that is, to admit sets like $\{(x_1, x_2) \in \mathbf{R}^2 : \exists y_1 \forall y_2\; p(x_1, x_2, y_1, y_2) > 0\}$ for a 4-variate polynomial $p$.
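A quantifier-free semialgebraic description can be evaluated directly at any point. As a toy illustration (the particular polynomials and the formula here are our example choices, not from the text), take $\Phi(X_1, X_2, X_3) = X_1 \,\&\, (X_2 \vee \neg X_3)$ with the sign conditions $p_j(x) \ge 0$:

```python
def in_A(x1, x2):
    """Membership in A = {x : p1(x) >= 0 and (p2(x) >= 0 or not p3(x) >= 0)},
    with p1 = 1 - x1^2 - x2^2, p2 = x1, p3 = x2 - x1 (example choices)."""
    p1 = 1 - x1**2 - x2**2
    p2 = x1
    p3 = x2 - x1
    return p1 >= 0 and (p2 >= 0 or not (p3 >= 0))

print(in_A(0.5, 0.0))    # True: inside the unit disk with x1 >= 0
print(in_A(2.0, 0.0))    # False: p1 < 0, outside the disk
print(in_A(-0.5, 0.5))   # False: x1 < 0 and the point lies above x2 = x1
```

The third query exercises the negation branch: with $X_2$ false, membership depends on $\neg X_3$, i.e., on the strict inequality $p_3(x) < 0$.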
As is useful to know, but not very easy to prove (and we do not attempt it here), each such set is semialgebraic, too: According to a famous theorem of Tarski, it can be defined by a quantifier-free formula.

Let $D$ be the maximum of the degrees of the polynomials $p_1, \ldots, p_r$ appearing in the definition of a semialgebraic set $A$. Let us call the number $\max(d, r, D)$ the description complexity² of $A$. The results about lower envelopes concern semialgebraic sets whose description complexity is bounded by a constant.

An algebraic surface patch is a special case of a semialgebraic set: It can be defined as the intersection of the zero set of some polynomial $q \in \mathbf{R}[x_1, \ldots, x_d]$ with a closed semialgebraic set $B$. Intuitively, $q(x) = 0$ defines a "surface" in $\mathbf{R}^d$, and $B$ cuts off a closed patch from that surface. Note that $B$ can be all of $\mathbf{R}^d$, and so the forthcoming results apply, among others, to graphs of polynomials or, more generally, to surfaces defined by a single polynomial equation.

Let us remark that in the papers dealing with algebraic surface patches, the definition is often more restrictive, and certainly the proofs make several extra assumptions. Most significantly, they usually suppose that the patches are smooth and they intersect transversally; that is, near each point common to the relative interior of $k$ patches, these $k$ patches look locally like $k$ hyperplanes in general position, $1 \le k \le d$. These conditions follow from a suitable general position assumption, namely, that the coefficients of all the polynomials appearing in the descriptions of all the patches are algebraically independent numbers.³ This can be achieved by a perturbation, but a rigorous argument, showing that a sufficiently small perturbation cannot decrease the complexity of the lower envelope too much, is not entirely easy.

The algebraic surface patches are also typically required to be $x_d$-monotone (every vertical line intersects them only once). This can be guaranteed by partitioning each of the original patches into smaller pieces, slicing them along the locus of points with vertical tangent hyperplanes (and eliminating the vertical pieces).

After these preliminaries, we can state the main theorem.

7.7.1 Theorem. For all integers $b$ and $d \ge 2$ and every $\varepsilon > 0$, there exists $C = C(d, b, \varepsilon)$ such that the following holds. Whenever $\gamma_1, \gamma_2, \ldots, \gamma_n$ are algebraic surface patches in $\mathbf{R}^d$, each of description complexity at most $b$, the lower envelope of the arrangement of $\gamma_1, \gamma_2, \ldots, \gamma_n$ has combinatorial complexity at most $Cn^{d-1+\varepsilon}$.

² This terminology is not standard.
³ Real numbers $a_1, a_2, \ldots, a_m$ are algebraically independent if there is no nonzero polynomial $p$ with integer coefficients such that $p(a_1, a_2, \ldots, a_m) = 0$.

How is the combinatorial complexity of the lower envelope defined in this general case, by the way? For each $\gamma_i$, we define $M_i \subseteq \mathbf{R}^{d-1}$ as the region where $\gamma_i$ is on the bottom of the arrangement; formally, $M_i$ consists of all $(x_1, x_2, \ldots, x_{d-1}) \in \mathbf{R}^{d-1}$ such that the lowest intersection of the vertical line $\{(x_1, x_2, \ldots, x_{d-1}, t) : t \in \mathbf{R}\}$ with $\bigcup_{j=1}^{n} \gamma_j$ lies in $\gamma_i$. The arrangement of the $M_i$ is often called the minimization diagram of the $\gamma_i$, and the number of its faces is the complexity of the lower envelope.

The proof of Theorem 7.7.1 is quite similar to the one shown in the preceding section. Each lower-envelope vertex is charged either to a vertex of lower order (the intersection of the same $d$ patches but lying more to the left), or to some $k_i$ vertices, or to a vertex within the vertical wall erected from the boundary of some patch (all the charged vertices lying at level at most $k_i$). The number of vertices of the last type is estimated by using the $(d{-}1)$-dimensional case of Theorem 7.7.1 (so the whole proof goes by induction on the dimension).
To this end, one needs to show that the situation within the $(d{-}1)$-dimensional vertical wall, which in general is curved, can be mapped to a situation with algebraic surface patches in $\mathbf{R}^{d-1}$. Here the fact that we are dealing with semialgebraic sets is used most heavily.

Theorem 7.7.1 is a powerful result, and it provides nontrivial upper bounds on the complexity of various geometric configurations. Sometimes the bound can be improved by a problem-specific proof, but the general lower-envelope result often quickly yields roughly the correct order of magnitude. For examples see Exercise 1 and [SA95] or [AS00a].

Single cell. Bounding the maximum complexity of a single cell in an arrangement is usually considerably more demanding than the lower envelope question, mainly because a cell can have a complicated topology: It can have holes, tunnels, and so on (cells in hyperplane arrangements, no more complicated than the lower envelope, are an honorable exception). The following theorem provides a bound analogous to that of Theorem 7.7.1. It was proved by similar methods but with several new ideas, especially for the topological complexity of the cell.

7.7.2 Theorem. For all integers $b$ and $d \ge 2$ and every $\varepsilon > 0$, there exist $C_0 = C_0(d, b)$ and $C = C(d, b, \varepsilon)$ such that the following holds. Let $K$ be a cell in the arrangement of $n$ algebraic surface patches in $\mathbf{R}^d$ in general position, each of description complexity at most $b$. Then the combinatorial complexity of $K$ (the number of faces in its closure) is at most $Cn^{d-1+\varepsilon}$, and its topological complexity (the sum of the Betti numbers) is no more than $C_0 n^{d-1}$.

The general position assumption can probably be removed, but I am aware of no explicit reference, except for the special case $d = 3$.

Bibliography and remarks. For a thorough discussion of semialgebraic sets and quantifier elimination we refer to books on real algebraic geometry, such as Bochnak, Coste, and Roy [BCR98].
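The minimization diagram of Theorem 7.7.1 is easy to explore numerically in the lowest-dimensional case ($d = 2$, univariate functions, so the diagram is a partition of the line). Here is a crude sampling-based sketch of ours; taking distance functions of point sites makes the output the one-dimensional Voronoi diagram of the sites, anticipating the connection discussed below.

```python
def minimization_diagram_1d(functions, lo, hi, samples=2000):
    """Left-to-right list of indices of the function attaining the lower
    envelope over [lo, hi] (crude: determined by sampling, so breakpoints
    are only located up to the sampling resolution)."""
    pieces = []
    for i in range(samples):
        x = lo + (hi - lo) * (i + 0.5) / samples
        j = min(range(len(functions)), key=lambda j: functions[j](x))
        if not pieces or pieces[-1] != j:
            pieces.append(j)
    return pieces

# distance functions of three sites on the line: the minimization diagram
# of their graphs is the Voronoi diagram of the sites
sites = [0.0, 1.0, 3.0]
fs = [lambda x, s=s: abs(x - s) for s in sites]
print(minimization_diagram_1d(fs, -1.0, 4.0))  # → [0, 1, 2]
```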
192 Chapter 7: Lower Envelopes An old conjecture of Sharir asserts that the combinatorial com­ plexity of the lower envelope in the situation of Theorem 7. 7.1 is at most O(nd-2 A8(n)) for a suitable s depending on the description com­ plexity of the patches. The best known lower bound is O(nd-1a(n)), which applies even for simplices. The decisive advance towards proving Theorem 7.7.1 was made by Halperin and Sharir [HS94], who established the 3-dimensional case. The general case was proved, as a culmination of a long development, by Sharir [Sha94]. A discussion of the general position assumption and the perturbation argument can also be found there. Interestingly, it is not proved that the maximum complexity is attained in general position; rather, it is argued that the expected complexity after an appropriate random perturbation is always at least a fixed fraction of the original complexity minus O(nd-l+c-). Some applications lead to the following variation of the lower en­ velope problem: We have two collections F and Q of algebraic surface patches in Rd, we project the lower envelopes of both F and g into Rd-l, and we are interested in the complexity of the superimposed projections (where, for d == 3, a vertex of the superimposed projec­ tions can arise, for example, as the intersection of an edge coming from F with an edge obtained from Q). In R3, it is known that this complexity is O(n2+E), where n = IFI + 191 (Agarwal, Sharir, and Schwarzkopf [ASS96]); this is similar to the bound for the lower en­ velopes themselves. The problem remains open in dimensions 4 and higher. The combinatorial complexity of a Voronoi diagram can also be viewed as a lower-envelope problem. Namely, let s1, s2, . . . , sn be ob­ jects in R d (points, lines, segments, polytopes), and let p be a metric on Rd. 
Each $s_i$ defines the function $f_i : \mathbf{R}^d \to \mathbf{R}$ by $f_i(x) = \rho(x, s_i)$, and the Voronoi diagram of the $s_i$ is exactly the minimization diagram of the graphs of the $f_i$ (i.e., the projection of their lower envelope). If the $f_i$ are algebraic of bounded degree (or can be converted to such functions by a monotone transform of the range), the general lower envelope bound implies that the complexity of the Voronoi diagram in $\mathbf{R}^d$ is no more than $O(n^{d+\varepsilon})$. This result is nontrivial, but it is widely believed that it should be possible to improve it by a factor of $n$ (and even more in some special cases). Several nice partial results are known, mostly obtained by methods similar to those for lower envelopes. Most notably, Chew, Kedem, Sharir, Tagansky, and Welzl [CKS+98] proved that if the $s_i$ are lines in $\mathbf{R}^3$ and the metric $\rho$ is given by a norm whose unit ball is a convex polytope with a constant-bounded number of vertices (this includes the $\ell_1$ and $\ell_\infty$ metrics, but not the Euclidean metric), then the Voronoi diagram has complexity $O(n^2\alpha(n)\log n)$. On the other hand, Aronov [Aro00] constructed, for every $p \in [1, \infty]$, a set of $n$ $(d{-}2)$-flats in $\mathbf{R}^d$ whose Voronoi diagram under the $\ell_p$ metric has complexity $\Omega(n^{d-1})$ (Exercise 5.7.3).

Single cell. For a single cell in the arrangement of $n$ simplices in $\mathbf{R}^d$, Aronov and Sharir [AS94] obtained the complexity bound $O(n^{d-1}\log n)$. Halperin and Sharir [HS95] managed to prove Theorem 7.7.2 in dimension 3. The effort was crowned by Basu [Bas98], who showed by an argument inspired by Morse theory that the topological complexity of a single cell in $\mathbf{R}^d$, assuming general position, is $O(n^{d-1})$; the Halperin-Sharir technique then implies the $O(n^{d-1+\varepsilon})$ bound on the combinatorial complexity.

The research of Sharir and his colleagues in this problem (and many other problems discussed in this chapter) has been motivated by questions about automatic motion planning for a robot.
For example, let us consider a square-shaped robot in the plane moving among n pairwise disjoint segment obstacles. The placement of the robot can be specified by three coordinates: the position (x, y) of the center and the angle α of rotation. Each obstacle excludes some placements of the robot. With a suitable choice of coordinates, say (x, y, tan α), the region of excluded placements is bounded by a few algebraic surface patches. Hence all possible placements of the robot reachable from a given position by a continuous obstacle-avoiding movement correspond to a single cell in the arrangement of O(n) algebraic surface patches in R^3. Consequently, the set of reachable placements has combinatorial complexity at most O(n^{2+ε}). A similar reduction works for more general shapes of the robot and of the obstacles (the robot may even have movable parts), as long as the robot and each of the obstacles can be described by a bounded number of algebraic surface patches. Unfortunately, even in quite simple settings, the combinatorial complexity of the reachable region can be very large. For example, a cube robot in R^3 has 6 degrees of freedom, and so its placements correspond to points in R^6. Exact motion planning algorithms thus become rather impractical, and faster approximate algorithms are typically used.

The complexity of unions. This is another type of problem that often occurs in the analysis of geometric algorithms. Let A_1, A_2, ..., A_n be sets in the plane, each of them bounded by a closed Jordan curve, and suppose that the boundaries of every A_i and A_j intersect in at most s points. For s = 2, the A_i are called pseudodisks, and the primary example is circular disks.
[Figure: examples of pseudodisks, and of curves that are not pseudodisks.]

For this case Kedem, Livne, Pach, and Sharir [KLPS86] proved that the complexity of ∪_{i=1}^n A_i is O(n), where the complexity is measured as the sum of the complexities of the "exterior" cells of the arrangement, i.e., the cells that are not contained in any of the A_i. For s ≥ 4, long and skinny sets can form a grid pattern and have union complexity about n^2, but linear or near-linear bounds were proved under additional assumptions. One type of such additional assumption is metric, namely, that the objects are "fat." A rather complicated proof of Efrat and Sharir [ES00] shows that if each A_i is convex, the ratio of the circumradius and inradius is bounded by some constant K, and every two boundaries intersect at most s times, then the union complexity is at most O(n^{1+ε}) for any ε > 0, with the constant of proportionality depending on s, K, ε. Earlier, Matoušek, Pach, Sharir, Sifrony, and Welzl [MPS+94] gave a simpler and more precise bound of O(n log log n) for fat triangles. Pach, Safruti, and Sharir [PSS01] showed that the union of n fat wedges in R^3 (intersections of two half-spaces with angle at least some α_0 > 0), as well as the union of n cubes in R^3, has complexity O(n^{2+ε}). Various extensions of these results to nonconvex objects or to higher dimensions seem easy to conjecture but quite hard to prove.

Several results are known where one assumes that the A_i have special shapes or bounded complexity. Aronov, Sharir, and Tagansky [AST97] proved that the complexity of the union of k convex polygons in the plane with n vertices in total is O(k^2 + nα(k)) and that the union of k convex polytopes in R^3 with n vertices in total has complexity O(k^3 + kn log k). Boissonnat, Sharir, Tagansky, and Yvinec [BSTY98] showed that the union of n axis-parallel cubes in R^d has O(n^{⌈d/2⌉}) complexity, and O(n^{⌊d/2⌋}) complexity if the cubes all have the same size; both these bounds are tight.
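The quadratic grid pattern mentioned above for s ≥ 4 is easy to exhibit concretely. A minimal sketch (not from the text; the strips, coordinates, and the quadrant-parity test are ad hoc choices): take k thin horizontal and k thin vertical rectangular strips, so that two boundaries may cross 4 times, and count the vertices of their union. A grid point is a union-boundary vertex exactly when an odd number of the four quadrants around it is covered.

```python
# Hypothetical sketch of the grid lower bound for union complexity (s >= 4):
# k thin horizontal + k thin vertical strips have about k^2 union vertices.

def covered(p, rects):
    """Is point p inside (or on the boundary of) some axis-parallel rectangle?"""
    return any(x1 <= p[0] <= x2 and y1 <= p[1] <= y2 for x1, y1, x2, y2 in rects)

def union_vertices(rects, eps=0.25):
    """Count vertices of the union by a quadrant-parity test at edge-grid points."""
    xs = sorted({c for x1, _, x2, _ in rects for c in (x1, x2)})
    ys = sorted({c for _, y1, _, y2 in rects for c in (y1, y2)})
    count = 0
    for x in xs:
        for y in ys:
            quads = [covered((x + dx, y + dy), rects)
                     for dx in (-eps, eps) for dy in (-eps, eps)]
            if sum(quads) % 2 == 1:   # 1 or 3 covered quadrants: a corner
                count += 1
    return count

k = 3
horiz = [(0, 2 * i, 2 * k - 1, 2 * i + 1) for i in range(k)]
vert = [(2 * j, 0, 2 * j + 1, 2 * k - 1) for j in range(k)]
print(union_vertices(horiz + vert))   # 4(k-1)^2 + 4 = 20 vertices
```

With n = 2k strips the union has 4(k-1)^2 + 4 = Θ(n^2) vertices, matching the "grid pattern" remark; the boundary consists of the outer square plus (k-1)^2 square holes.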
Agarwal and Sharir [AS00c] proved that the union of n infinite cylinders of equal radius in R^3 has complexity O(n^{2+ε}) (here Ω(n^2) is a lower bound), and more generally, if A_1, ..., A_n are pairwise disjoint triangles in R^3 and B is a ball, then ∪_i (A_i + B) has complexity O(n^{2+ε}), where A_i + B = {a + b : a ∈ A_i, b ∈ B} is the Minkowski sum. The proof relies on the result mentioned above about two superimposed lower envelopes.

Exercises

1. Let p_1, ..., p_n be points in the plane. At time t = 0, each p_i starts moving along a straight line with a fixed velocity v_i. Use Theorem 7.7.1 to prove that the convex hull of the n moving points changes its combinatorial structure at most O(n^{2+ε}) times during the time interval [0, ∞). The tight bound is O(n^2); it was proved, together with many other related results, by Agarwal, Guibas, Hershberger, and Veach [AGHV01].

8 Intersection Patterns of Convex Sets

In Chapter 1 we covered three simple but basic theorems in the theory of convexity: Helly's, Radon's, and Carathéodory's. For each of them we present one closely related but more difficult theorem in the current chapter. These more advanced relatives are selected, among the vast number of variations on the Helly–Radon–Carathéodory theme, because of their wide applicability and also because of nice techniques and tricks appearing in their proofs. The development started in this chapter continues in Chapters 9 and 10. One of the culminations of this route is the (p, q)-theorem of Alon and Kleitman, which we will prove in Section 10.5. The proof ingeniously combines many of the tools covered in these three chapters and illustrates their power.

Readers who do not like higher dimensions may want to consider dimensions 2 and 3 only. Even with this restriction, the results are still interesting and nontrivial.
8.1 The Fractional Helly Theorem

Helly's theorem says that if every at most d+1 sets of a finite family of convex sets in R^d intersect, then all the sets of the family intersect. What if not necessarily all, but a large fraction of the (d+1)-tuples of sets, intersect? The following theorem states that then a large fraction of the sets must have a point in common.

8.1.1 Theorem (Fractional Helly theorem). For every dimension d ≥ 1 and every α > 0 there exists a β = β(d, α) > 0 with the following property. Let F_1, ..., F_n be convex sets in R^d, n ≥ d+1, and suppose that for at least α·C(n, d+1) of the (d+1)-point index sets I ⊂ {1, 2, ..., n}, we have ∩_{i∈I} F_i ≠ ∅. Then there exists a point contained in at least βn sets among the F_i.

Although simple, this is a key result, and many of the subsequent developments rely on it. The best possible value of β is β = 1 − (1−α)^{1/(d+1)}. We prove the weaker estimate β ≥ α/(d+1).

Proof. For a subset I ⊂ {1, 2, ..., n}, let us write F_I for the intersection ∩_{i∈I} F_i. First we observe that it is enough to prove the theorem for the F_i closed and bounded (and even convex polytopes). Indeed, given some arbitrary F_1, ..., F_n, we choose a point p_I ∈ F_I for every (d+1)-tuple I with F_I ≠ ∅, and we define F'_i = conv{p_I : F_I ≠ ∅, i ∈ I}, which is a polytope contained in F_i. If the theorem holds for these F'_i, then it also holds for the original F_i. In the rest of the proof we thus assume that the F_i, and hence also all the nonempty F_I, are compact.

Let ≤_lex denote the lexicographic ordering of the points of R^d by their coordinate vectors. It is easy to show that any compact subset of R^d has a unique lexicographically minimum point (Exercise 1). We need the following consequence of Helly's theorem.

8.1.2 Lemma. Let I ⊂ {1, 2, ..., n} be an index set with F_I ≠ ∅, and let v be the (unique) lexicographically minimum point of F_I.
Then there exists an at most d-element subset J ⊂ I such that v is the lexicographically minimum point of F_J as well.

In other words, the minimum of the intersection F_I is always enforced by some at most d "constraints" F_j, as is illustrated in the following drawing (note that the two constraints determining the minimum are not determined uniquely in the picture):

[Figure: several convex sets whose intersection has lexicographic minimum v; two of the sets already determine v.]

Proof. Let C = {x ∈ R^d : x <_lex v}; this set is easily checked to be convex. The sets F_i, i ∈ I, together with C have empty intersection, since v is the lexicographic minimum of F_I. By Helly's theorem, some d+1 of these sets have empty intersection, and C must be among them, because the F_i, i ∈ I, all contain v. So there is a J ⊂ I with |J| ≤ d and F_J ∩ C = ∅. Thus no point of F_J is lexicographically smaller than v, and since v ∈ F_J, the point v is the lexicographic minimum of F_J. □

Now we can finish the proof of the fractional Helly theorem by a counting argument. Each good (d+1)-tuple I (one with F_I ≠ ∅) contains a d-element subset J such that the lexicographic minimum v of F_J is also the lexicographic minimum of F_I (the lemma yields |J| ≤ d, and J can be padded to exactly d elements of I without changing the minimum). The number of d-element index sets J is C(n, d), and so some fixed J serves in this way for at least m ≥ α·C(n, d+1)/C(n, d) = α(n−d)/(d+1) good (d+1)-tuples I. Each such I has the form J ∪ {i} for a distinct index i, and v = lexmin(F_I) lies in F_i. Hence v is contained in at least m + d ≥ αn/(d+1) sets among the F_i. Hence we may set β = α/(d+1). □

Bibliography and remarks. The fractional Helly theorem is due to Katchalski and Liu [KL79]. The quantitatively sharp version with β = 1 − (1−α)^{1/(d+1)} was proved by Kalai [Kal84] (and the main result needed for it was proved independently by Eckhoff [Eck85], too). Actually, there is an exact result: If the maximum size of an intersecting subfamily in a family of n convex sets in R^d is m, then the smallest possible number of intersecting (d+1)-tuples is attained for the family consisting of n−m+d hyperplanes in general position and m−d copies of R^d. But there are many other essentially different examples attaining the same bound.

These assertions are consequences of considerably more general results about the possible intersection patterns of convex sets in R^d. For explaining some of them it is convenient to use the language of simplicial complexes. Let F = {F_1, F_2, ..., F_n} be a family of convex sets in R^d. The nerve N(F) of F is the simplicial complex with vertex set {1, 2, ..., n} whose simplices are all I ⊂ {1, 2, ..., n} such that ∩_{i∈I} F_i ≠ ∅. A simplicial complex obtainable as N(F) for some family of convex sets in R^d is called d-representable. A characterization of d-representable simplicial complexes for a given d is most likely out of reach. There are several useful necessary conditions for d-representability.
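The quantitative content of the fractional Helly theorem can be probed by brute force in dimension d = 1, where the convex sets are intervals and the good tuples are the intersecting pairs. A small sketch (the interval data is an ad hoc example, not from the text):

```python
# Hypothetical sketch: check the fractional Helly bound beta = alpha/(d+1)
# for d = 1, i.e. for a family of intervals on the line.

from itertools import combinations

def fractional_helly_check(intervals):
    """Return (alpha, beta): fraction of intersecting pairs, and the largest
    fraction of intervals pierced by a single point."""
    n = len(intervals)
    pairs = list(combinations(range(n), 2))
    good = [(i, j) for i, j in pairs
            if max(intervals[i][0], intervals[j][0])
               <= min(intervals[i][1], intervals[j][1])]
    alpha = len(good) / len(pairs)
    # a deepest point can be found among the interval endpoints
    depth = max(sum(a <= x <= b for a, b in intervals)
                for x in [e for iv in intervals for e in iv])
    return alpha, depth / n

intervals = [(0, 2), (1, 3), (2, 4), (10, 11), (12, 13)]
alpha, beta = fractional_helly_check(intervals)
assert beta >= alpha / 2          # the weaker estimate proved above, d = 1
print(alpha, beta)                # 0.3 0.6
```

Here 3 of the 10 pairs intersect (alpha = 0.3), and the point x = 2 pierces 3 of the 5 intervals (beta = 0.6), comfortably above alpha/2.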
One certainly worth mentioning is d-collapsibility, which means that a given simplicial complex K can be reduced to the void complex by a sequence of elementary d-collapsings, where an elementary d-collapsing consists in deleting a face S ∈ K of dimension at most d−1 that lies in a unique maximal face of K, together with all the faces of K containing S. The proof of the d-collapsibility of every d-representable complex (Wegner [Weg75]) uses an idea quite similar to the proof of the fractional Helly theorem.

While no characterization of d-representable complexes is known, the possible f-vectors of such complexes (where f_i is the number of i-dimensional simplices, which correspond to (i+1)-wise intersections here) are fully characterized by a conjecture of Eckhoff, which was proved by Kalai [Kal84], [Kal86] by an impressive combination of several methods. The same characterization applies to d-collapsible complexes as well (and even to the more general d-Leray complexes; these are the complexes where the homology of dimension d and larger vanishes for all induced subcomplexes). We do not formulate it, but we mention one of its consequences, the upper bound theorem for families of convex sets: If f_r(N(F)) = 0 for a family F of n convex sets in R^d and some r, d ≤ r < n, then f_k(N(F)) is bounded from above by an explicit sum of products of binomial coefficients (which we do not reproduce here); equality holds, e.g., in the case mentioned above (several copies of R^d and hyperplanes in general position).

Exercises

1. Show that any compact set in R^d has a unique point with the lexicographically smallest coordinate vector.

2. Prove the following colored Helly theorem: Let C_1, ..., C_{d+1} be finite families of convex sets in R^d such that for any choice of sets C_1 ∈ C_1, ..., C_{d+1} ∈ C_{d+1}, the intersection C_1 ∩ ··· ∩ C_{d+1} is nonempty. Then for some i, all the sets of C_i have a nonempty intersection.
Apply a method similar to the proof of the fractional Helly theorem; i.e., consider the lexicographic minima of the intersections of suitable collections of the sets. The result is due to Lovász ([Lov74]; also see [Bar82]).

3. Let F_1, F_2, ..., F_n be convex sets in R^d. Prove that there exist convex polytopes P_1, P_2, ..., P_n such that dim(∩_{i∈I} F_i) = dim(∩_{i∈I} P_i) for every I ⊂ {1, 2, ..., n} (where dim(∅) = −1).

8.2 The Colorful Carathéodory Theorem

Carathéodory's theorem asserts that if a point x is in the convex hull of a set X ⊂ R^d, then it is in the convex hull of some at most d+1 points of X. Here we present a "colored version" of this statement. In the plane, it shows the following: Given a red triangle, a blue triangle, and a white triangle, each of them containing the origin, there is a vertex r of the red triangle, a vertex b of the blue triangle, and a vertex w of the white triangle such that the tricolored triangle rbw also contains the origin. (In the pictures, the colors of points are distinguished by different shapes of the point markers.)

[Figure: a red, a blue, and a white triangle, each containing the origin, and a tricolored triangle rbw containing the origin.]

The d-dimensional statement follows.

8.2.1 Theorem (Colorful Carathéodory theorem). Consider d+1 finite point sets M_1, ..., M_{d+1} in R^d such that the convex hull of each M_i contains the point 0 (the origin). Then there exists a (d+1)-point set S ⊂ M_1 ∪ ··· ∪ M_{d+1} with |M_i ∩ S| = 1 for each i and such that 0 ∈ conv(S).

(If we imagine that the points of M_i have "color" i, then we look for a "rainbow" (d+1)-point set S with 0 ∈ conv(S), where "rainbow" = "containing all colors.")

Proof. Call the convex hull of a (d+1)-point rainbow set a rainbow simplex. We proceed by contradiction: We suppose that no rainbow simplex contains 0, and we choose a (d+1)-point rainbow set S such that the distance of conv(S) to 0 is the smallest possible.
Let x be the point of conv(S) closest to 0. Consider the hyperplane h containing x and perpendicular to the segment 0x, as in the picture:

[Figure: the simplex conv(S), the nearest point x, the hyperplane h through x perpendicular to 0x, and a point y on the side of h containing 0.]

Then all of S lies in the closed half-space h⁻ bounded by h and not containing 0. We have conv(S) ∩ h = conv(S ∩ h), and by Carathéodory's theorem, there exists an at most d-point subset T ⊂ S ∩ h such that x ∈ conv(T). Let i be a color not occurring in T (i.e., M_i ∩ T = ∅). If all the points of M_i lay in the half-space h⁻, then 0 would not be in conv(M_i), which we assume. Thus, there exists a point y ∈ M_i lying in the complement of h⁻ (strictly, i.e., y ∉ h). Let us form a new rainbow set S' from S by replacing the (unique) point of M_i ∩ S by y. We have T ⊂ S', and so x ∈ conv(S'). Hence the segment xy is contained in conv(S'), and we see that conv(S') lies closer to 0 than conv(S), a contradiction. The colorful Carathéodory theorem is proved. □

This proof suggests an algorithm for finding the rainbow simplex as in the theorem. Namely, start with an arbitrary rainbow simplex, and if it does not contain 0, switch one vertex as in the proof. It is not known whether the number of steps of this algorithm can be bounded by a polynomial function of the dimension and of the total number of points in the M_i. It would be very interesting to construct configurations where the number of steps is very large or to prove that it cannot be too large.

Bibliography and remarks. The colorful Carathéodory theorem is due to Bárány [Bar82]. Its algorithmic aspects were investigated by Bárány and Onn [BO97].

Exercises

1. Let S and T be (d+1)-point sets in R^d, each containing 0 in the convex hull. Prove that there exists a finite sequence S_0 = S, S_1, S_2, ..., S_m = T of (d+1)-point sets with S_i ⊂ S ∪ T and 0 ∈ conv(S_i) for all i, such that S_{i+1} is obtained from S_i by deleting one point and adding another. Assume general position of S ∪ T if convenient.
Warning: better do not try to find a (d+1)-term sequence.

8.3 Tverberg's Theorem

Radon's lemma states that any set of d+2 points in R^d has two disjoint subsets whose convex hulls intersect. Tverberg's theorem is a generalization of this statement, where we want not only two disjoint subsets with intersecting convex hulls but r of them. It is not too difficult to show that if we have very many points, then such r subsets can be found. For easier formulations, let T(d, r) denote the smallest integer T such that for any set A of T points in R^d there exist pairwise disjoint subsets A_1, A_2, ..., A_r ⊂ A with ∩_{i=1}^r conv(A_i) ≠ ∅. Radon's lemma asserts that T(d, 2) = d+2.

It is not hard to see that T(d, r_1 r_2) ≤ T(d, r_1)·T(d, r_2) (Exercise 1). Together with Radon's lemma this observation shows that T(d, r) is finite for all r, but it does not give a very good bound. Here is another, more sophisticated, argument, leading to the (still suboptimal) bound T(d, r) ≤ n = (r−1)(d+1)² + 1. Let A be an n-point set in R^d and let us set s = n − (r−1)(d+1). A simple counting shows that every d+1 subsets of A of size s have a point of A in common (each such subset misses only (r−1)(d+1) points of A, and d+1 of them together miss at most (d+1)(r−1)(d+1) = n−1 < n points). Therefore, by Helly's theorem, the convex hulls of all s-tuples have a common point x (typically not in A anymore). By Carathéodory's theorem, x is contained in the convex hull of some (d+1)-point set A_1 ⊂ A. Since A \ A_1 has at least s points, x is still contained in conv(A \ A_1), and thus also in the convex hull of some (d+1)-point A_2 ⊂ A \ A_1, etc. We can continue in this manner and select the desired r disjoint sets A_1, ..., A_r, all of them containing x in their convex hulls.

It is not difficult to see that T(d, r) cannot be smaller than (r−1)(d+1) + 1 (Exercise 2). Tverberg's theorem asserts that this smallest conceivable value is always sufficient.

8.3.1 Theorem (Tverberg's theorem). Let d and r be given natural numbers.
For any set A ⊂ R^d of at least (d+1)(r−1) + 1 points there exist r pairwise disjoint subsets A_1, A_2, ..., A_r ⊂ A such that ∩_{i=1}^r conv(A_i) ≠ ∅.

The sets A_1, A_2, ..., A_r as in the theorem are called a Tverberg partition of A (we may assume that they form a partition of A), and a point in the intersection of their convex hulls is called a Tverberg point. The following illustration shows what such partitions can look like for d = 2 and r = 3; both the drawings use the same 7-point set A:

[Figure: two different Tverberg 3-partitions of the same 7-point planar set.]

(Are these all Tverberg partitions for this set, or are there more?)

As in the colorful Carathéodory theorem, a very interesting open problem is the existence of an efficient algorithm for finding a Tverberg partition of a given set. There is a polynomial-time algorithm if the dimension is fixed, but some NP-hardness results for closely related problems indicate that if the dimension is a part of the input, then the problem might be algorithmically difficult.

Several proofs of Tverberg's theorem are known. The one demonstrated below is maybe not the simplest, but it shows an interesting "lifting" technique. We deduce the theorem by applying the colorful Carathéodory theorem to a suitable point configuration in a higher-dimensional space.

Proof of Tverberg's theorem. We begin with a reformulation of Tverberg's theorem that is technically easier to handle. For a set X ⊂ R^d, the convex cone generated by X is defined as the set of all linear combinations of points of X with nonnegative coefficients; that is, we set

cone(X) = { Σ_{i=1}^n α_i x_i : x_1, ..., x_n ∈ X, α_1, ..., α_n ∈ R, α_i ≥ 0 }.

Geometrically, cone(X) is the union of all rays starting at the origin and passing through a point of conv(X). The following statement is equivalent to Tverberg's theorem:

8.3.2 Proposition (Tverberg's theorem: cone version). Let A be a set of (d+1)(r−1) + 1 points in R^{d+1} such that 0 ∉ conv(A). Then there exist r pairwise disjoint subsets A_1, A_2, . . .
, A_r ⊂ A such that ∩_{i=1}^r cone(A_i) ≠ {0}.

Let us verify that this proposition implies Tverberg's theorem. Embed R^d into R^{d+1} as the hyperplane x_{d+1} = 1 (as in Section 1.1). A set A ⊂ R^d thus becomes a subset of R^{d+1}; moreover, its convex hull lies in the x_{d+1} = 1 hyperplane, and thus it does not contain 0. By Proposition 8.3.2, the set A can be partitioned into groups A_1, ..., A_r with ∩_{i=1}^r cone(A_i) ≠ {0}. The intersection of these cones thus contains a ray originating at 0. It is easily checked that such a ray intersects the hyperplane x_{d+1} = 1 and that the intersection point is a Tverberg point for A. Hence it suffices to prove Proposition 8.3.2.

Proof of Proposition 8.3.2. Let us put N = (d+1)(r−1); thus, A has N+1 points. First we define linear maps φ_j: R^{d+1} → R^N, j = 1, 2, ..., r. We group the coordinates in the image space R^N into r−1 blocks by d+1 coordinates each. For j = 1, 2, ..., r−1, φ_j(x) is the vector having the coordinates of x in the jth block and zeros in the other blocks; symbolically,

φ_j(x) = ( 0 | 0 | ··· | 0 | x | 0 | ··· | 0 ),

with x standing in the jth block. The last mapping, φ_r, has −x in each block:

φ_r(x) = ( −x | −x | ··· | −x ).

These maps have the following property: For any r vectors u_1, ..., u_r ∈ R^{d+1},

Σ_{j=1}^r φ_j(u_j) = 0 holds if and only if u_1 = u_2 = ··· = u_r.    (8.1)

Indeed, this can be easily seen by expressing

Σ_{j=1}^r φ_j(u_j) = ( u_1 − u_r | u_2 − u_r | ··· | u_{r−1} − u_r ).

Next, let A = {a_1, ..., a_{N+1}} ⊂ R^{d+1} be a set with 0 ∉ conv(A). We consider the set M = φ_1(A) ∪ φ_2(A) ∪ ··· ∪ φ_r(A) in R^N consisting of r copies of A. The first r−1 copies are placed into mutually orthogonal coordinate subspaces of R^N. The last copy of each a_i sums up to 0 with the other r−1 copies of a_i. Then we color the points of M by N+1 colors; all copies of the same a_i get the color i. In other words, we set M_i = {φ_1(a_i), φ_2(a_i), ..., φ_r(a_i)}.
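The key property (8.1) of the block maps φ_j is easy to verify mechanically. A small sketch (not from the text; the dimensions d = 1, r = 3 are ad hoc choices):

```python
# Hypothetical sketch verifying property (8.1): for u_1, ..., u_r in R^{d+1},
# sum_j phi_j(u_j) = 0 iff u_1 = ... = u_r.

def phi(j, x, r):
    """phi_j places x in the j-th of r-1 blocks (for j < r); phi_r is -x in every block."""
    d1 = len(x)  # d+1
    if j < r:
        out = [0.0] * ((r - 1) * d1)
        out[(j - 1) * d1:(j - 1) * d1 + d1] = x
        return out
    return [-c for c in x] * (r - 1)

def phi_sum(us):
    """Compute sum_{j=1}^r phi_j(u_j) in R^N, N = (r-1)(d+1)."""
    r, d1 = len(us), len(us[0])
    total = [0.0] * ((r - 1) * d1)
    for j, u in enumerate(us, start=1):
        total = [a + b for a, b in zip(total, phi(j, u, r))]
    return total

equal = [[1.0, 2.0]] * 3                            # u_1 = u_2 = u_3 in R^2
unequal = [[1.0, 2.0], [1.0, 2.0], [0.0, 0.0]]
print(all(c == 0 for c in phi_sum(equal)))          # True
print(all(c == 0 for c in phi_sum(unequal)))        # False
```

The sum indeed comes out as ( u_1 − u_r | ··· | u_{r−1} − u_r ), which vanishes exactly when all the u_j coincide.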
As we have noted, the points in each M_i sum up to 0, which means that 0 ∈ conv(M_i) (it is the average of the r points of M_i), and thus the assumptions of the colorful Carathéodory theorem hold for M_1, ..., M_{N+1}. Let S ⊂ M be a rainbow set (containing one point of each M_i) with 0 ∈ conv(S). For each i, let f(i) be the index of the copy of a_i contained in S; that is, we have S = {φ_{f(1)}(a_1), φ_{f(2)}(a_2), ..., φ_{f(N+1)}(a_{N+1})}. Then 0 ∈ conv(S) means that Σ_{i=1}^{N+1} α_i φ_{f(i)}(a_i) = 0 for some α_i ≥ 0 with Σ_{i=1}^{N+1} α_i = 1. Put A_j = {a_i : f(i) = j} and u_j = Σ_{i: f(i)=j} α_i a_i. By the linearity of the φ_j, the above equation reads Σ_{j=1}^r φ_j(u_j) = 0, and so u_1 = u_2 = ··· = u_r by the property (8.1). The common value z of the u_j is nonzero, for otherwise, Σ_{i=1}^{N+1} α_i a_i = 0 would exhibit 0 as a convex combination of points of A, contradicting 0 ∉ conv(A). In particular, each A_j is nonempty (an empty A_j would give u_j = 0). Thus z is a nonzero point of ∩_{j=1}^r cone(A_j), and the pairwise disjoint sets A_1, ..., A_r are as required in Proposition 8.3.2. □

8.3.3 Theorem (Colored Tverberg theorem). For every d and every r ≥ 2 there exists an integer t such that given any t(d+1)-point set Y ⊂ R^d partitioned into d+1 color classes Y_1, ..., Y_{d+1} with t points each, there exist r pairwise disjoint sets A_1, ..., A_r such that each A_i contains exactly one point of each Y_j, j = 1, 2, ..., d+1 (that is, the A_i are rainbow), and ∩_{i=1}^r conv(A_i) ≠ ∅.

Let T_col(d, r) denote the smallest t for which the conclusion of the theorem holds. It is known that T_col(2, r) = r for all r. It is possible that T_col(d, r) = r for all d and r, but only weaker bounds have been proved. The strongest known result guarantees that T_col(d, r) ≤ 2r−1 whenever r is a prime power.

Recall that in Tverberg's theorem, if we need only the existence of T(d, r), rather than the precise value, several simple arguments are available. In contrast, for the colored version, even if we want only the existence of T_col(d, r), there is essentially only one type of proof, which is not easy and which uses topological methods. Since such methods are not considered in this book, we have to omit a proof of the colored Tverberg theorem.

Bibliography and remarks. Tverberg's theorem was conjectured by Birch and proved by Tverberg (really!) [Tve66]. His original proof is technically complicated, but the idea is simple: Start with some point configuration for which the theorem is valid and convert it to a given configuration by moving one point at a time.
During the movement, the current partition may stop working at some point, and it must be shown that it can be replaced by another suitable partition by a local change. Later on, Tverberg found a simpler proof [Tve81]. For the proof presented in the text above, the main idea is due to Sarkaria [Sar92], and our presentation is based on a simplification by Onn (see [BO97]).

Another proof, also due to Tverberg and inspired by the proof of the colorful Carathéodory theorem, was published in a paper by Tverberg and Vrećica [TV93]. Here is an outline. Let π = (A_1, A_2, ..., A_r) be a partition of (d+1)(r−1)+1 given points into r disjoint nonempty subsets. Consider a ball intersecting all the sets conv(A_j), j = 1, 2, ..., r, whose radius ρ = ρ(π) is the smallest possible. By a suitable general position assumption, it can be assured that the smallest ball is always unique for any partition. (Alternatively, among all balls of the smallest possible radius, one can take the one with the lexicographically smallest center, which again guarantees uniqueness.) If ρ(π) = 0, then π is a Tverberg partition. Supposing that ρ(π) > 0, it can be shown that π can be locally changed (by reassigning one point from one class to another) to another partition π' with ρ(π') < ρ(π). Another proof, based on a similar idea, was found by Roudneff [Rou01a]. Instead of ρ(π), he considers w(π) = min_{x∈R^d} w(π, x), where w(π, x) = Σ_{i=1}^r dist(x, conv(A_i))². He actually proves a "cone version" of Tverberg's theorem (but different from our cone version and stronger).

Several extensions of Tverberg's theorem are known or conjectured. Here we mention only two conjectures related to the dimension of the set of Tverberg points. For X ⊂ R^d, let T_r(X) denote the set of all Tverberg points for r-partitions of X (the points of T_r(X) are usually called r-divisible).
Reay [Rea68] conjectured that if X is in general position and has k more points than is generally necessary for the existence of a Tverberg r-partition, i.e., |X| = (d+1)(r−1) + 1 + k, then dim T_r(X) ≥ k. This holds under various strong general position assumptions, and special cases for small k have also been established (see Roudneff [Rou01a], [Rou01b]). Kalai asked the following sophisticated question in 1974: Does Σ_{r=1}^{|X|} dim T_r(X) ≥ 0 hold for every finite X ⊂ R^d? Here dim ∅ = −1, and so the nonexistence of Tverberg r-partitions for large r must be compensated by sufficiently large dimensions of T_r(X) for small r. Together with other interesting aspects of Tverberg's theorem, this is briefly discussed in Kalai's lively survey [Kal01]. There he also notes that edge 3-colorability of a 3-regular graph can be reformulated as the existence of a Tverberg 3-partition of a suitable high-dimensional point set. This implies that deciding whether 0 ∈ T_3(X) for a (2d+3)-point X ⊂ R^d is NP-complete.

It is interesting to note that Tverberg's theorem implies the centerpoint theorem (Theorem 1.4.2). More generally, if x is an r-divisible point of a finite X ⊂ R^d, then each closed half-space containing x contains at least r points of X (at least one from each of the r parts); in particular, if |X| = n and r = ⌈n/(d+1)⌉, we get that every r-divisible point is a centerpoint. On the other hand, as an example of Avis [Avi93] in R^3 shows, a point x such that each closed half-space h containing x satisfies |h ∩ X| ≥ r need not be r-divisible in general; these two properties are equivalent only in the plane.

A conjecture of Sierksma asserts that the number of Tverberg partitions for a set of (r−1)(d+1) + 1 points in R^d in general position is at least ((r−1)!)^d. A lower bound of (1/(r−1)!)·(r/2)^{(r−1)(d+1)/2}, provided that r ≥ 3 is a prime number, was proved by Vučić and Živaljević [VZ93] by an ingenious topological argument.
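For small planar instances the Tverberg partitions can be enumerated exactly by brute force. A sketch (the 7-point configuration is an ad hoc example, not the one in the text's illustration): for d = 2 and r = 3, a partition of 7 points in general position has parts of sizes (3,3,1), where the singleton must lie in both triangles, or (3,2,2), where the two segments must cross inside the triangle, so exact rational arithmetic decides everything.

```python
# Hypothetical brute-force enumeration of Tverberg 3-partitions of a 7-point
# planar set in general position (d = 2, r = 3, so 7 = (d+1)(r-1)+1 points).

from fractions import Fraction
from itertools import combinations

def cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, t):
    s = [cross(t[i], t[(i + 1) % 3], p) for i in range(3)]
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)

def seg_cross_point(a, b, c, d):
    """Crossing point of segments ab and cd, or None if they do not cross."""
    d1, d2 = cross(c, d, a), cross(c, d, b)
    d3, d4 = cross(a, b, c), cross(a, b, d)
    if (d1 > 0) == (d2 > 0) or (d3 > 0) == (d4 > 0):
        return None
    t = d1 / (d1 - d2)
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def tverberg_partitions(pts):
    """All Tverberg 3-partitions of 7 planar points in general position."""
    idx, found = set(range(7)), []
    for t1 in combinations(range(7), 3):                  # type (3,3,1)
        for t2 in combinations(sorted(idx - set(t1)), 3):
            if t2 < t1:
                continue                                  # count each pair once
            (p,) = idx - set(t1) - set(t2)
            if in_triangle(pts[p], [pts[i] for i in t1]) and \
               in_triangle(pts[p], [pts[i] for i in t2]):
                found.append((t1, t2, (p,)))
    for tri in combinations(range(7), 3):                 # type (3,2,2)
        rest = sorted(idx - set(tri))
        for s1 in combinations(rest, 2):
            s2 = tuple(x for x in rest if x not in s1)
            if s1 > s2:
                continue                                  # count each split once
            q = seg_cross_point(pts[s1[0]], pts[s1[1]], pts[s2[0]], pts[s2[1]])
            if q and in_triangle(q, [pts[i] for i in tri]):
                found.append((tri, s1, s2))
    return found

F = Fraction
pts = [(F(0), F(0)), (F(10), F(0)), (F(5), F(9)), (F(4), F(3)),
       (F(6), F(3)), (F(9, 2), F(5)), (F(5), F(2))]
parts = tverberg_partitions(pts)
assert ((0, 1, 2), (3, 4), (5, 6)) in parts  # outer triangle + two crossing segments
print(len(parts))  # Tverberg's theorem guarantees at least one
```

Such an enumeration makes questions like the one under the illustration above ("are these all the partitions?") and Sierksma's count directly checkable on concrete small examples.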
The colored Tverberg theorem was conjectured by Bárány, Füredi, and Lovász [BFL90], who also proved the planar case. The general case was established by Živaljević and Vrećica [ZV92]; simplified proofs were given later by Björner, Lovász, Živaljević, and Vrećica [BLZV94] and by Matoušek [Mat96a] (using a method of Sarkaria). As was mentioned in the text, all these proofs are topological. They show that T_col(d, r) ≤ 2r−1 for r a prime. Recently, this was extended to all prime powers r by Živaljević [Ziv98] (a similar approach in a different problem was used earlier by Özaydin, by Sarkaria, and by Volovikov). Bárány and Larman [BL92] proved that T_col(2, r) = r for all r.

We outline a beautiful topological proof, due to Lovász (reproduced in [BL92]), showing that T_col(d, 2) = 2 for all d. Let X be the surface of the (d+1)-dimensional crosspolytope. We recall that the crosspolytope is the convex hull of V = {e_1, −e_1, e_2, −e_2, ..., e_{d+1}, −e_{d+1}}, where e_1, e_2, ..., e_{d+1} is the standard orthonormal basis in R^{d+1}. Note that X consists of 2^{d+1} simplices of dimension d, each of them the convex hull of d+1 points of V. Let Y_i = {u_i, v_i} ⊂ R^d, i = 1, 2, ..., d+1, be the given two-point color classes. Define the mapping f: V → R^d by setting f(e_i) = u_i, f(−e_i) = v_i. This mapping has a unique extension f̄: X → R^d such that f̄ is affine on each of the d-dimensional simplices mentioned above. This f̄ is a continuous mapping of X → R^d. Since X is homeomorphic to the d-dimensional sphere S^d, the Borsuk–Ulam theorem guarantees that there is an x ∈ X such that f̄(x) = f̄(−x). If V_1 ⊂ V is the vertex set of a d-dimensional simplex containing x, then V_1 ∩ (−V_1) = ∅, −x ∈ conv(−V_1), and as is easy to check, S_1 = f(V_1) and S_2 = f(−V_1) are vertex sets of intersecting rainbow simplices (f̄(x) = f̄(−x) is a common point).

Exercises

1.
Prove (directly, without using Tverberg's theorem) that for any integers d, r_1, r_2 ≥ 2, we have T(d, r_1 r_2) ≤ T(d, r_1)·T(d, r_2).

2. For each r ≥ 2 and d ≥ 2, find (d+1)(r−1) points in R^d with no Tverberg r-partition.

3. Prove that Tverberg's theorem implies Proposition 8.3.2. Why is the assumption 0 ∉ conv(A) necessary in Proposition 8.3.2?

4. (a) Derive the following Radon-type theorem (use Radon's lemma): For every d ≥ 1 there exists ℓ = ℓ(d) such that every ℓ points in R^d in general position can be partitioned into two disjoint subsets A, B such that not only conv(A) ∩ conv(B) ≠ ∅, but this property is preserved by deleting any single point; that is, conv(A \ {a}) ∩ conv(B) ≠ ∅ for each a ∈ A and conv(A) ∩ conv(B \ {b}) ≠ ∅ for each b ∈ B.
(b) Show that ℓ(2) ≥ 7.
Remark. The best known value of ℓ(d) is 2d+3; this was established by Larman [Lar72], and his proof is difficult. The original question is, What is the largest n = n(k) such that every n points in R^k in general position can be brought to a convex position by some projective transform? Both formulations are related via the Gale transform.

5. Show that for any d, r ≥ 1 there is an (N+1)-point set in R^d in general position, N = (d+1)(r−1), having no more than ((r−1)!)^d Tverberg partitions.

6. Why does Tverberg's theorem imply the centerpoint theorem (Theorem 1.4.2)?

9 Geometric Selection Theorems

As in Chapter 3, the common theme of this chapter is geometric Ramsey theory. Given n points, or other geometric objects, where n is large, we want to select a not too small subset forming a configuration that is "regular" in some sense. As was the case for the Erdős–Szekeres theorem, it is not difficult to prove the existence of a "regular" configuration via Ramsey's theorem in some of the subsequent results, but the size of that configuration is very small. The proofs we are going to present give much better bounds.
In many cases we obtain "positive-fraction theorems": The regular configuration has size at least cn, where n is the number of the given objects and c is a positive constant independent of n. In the proofs we encounter important purely combinatorial results: a weak version of the Szemerédi regularity lemma and a theorem of Erdős and Simonovits on the number of complete k-partite subhypergraphs in dense k-uniform hypergraphs. We also apply tools from Chapter 8, such as Tverberg's theorem.

9.1 A Point in Many Simplices: The First Selection Lemma

Consider n points in the plane in general position, and draw all the C(n, 3) triangles with vertices at the given points. Then there exists a point of the plane common to at least (2/9)·C(n, 3) of these triangles. Here 2/9 is the optimal constant; the proof below, which establishes a similar statement in arbitrary dimension, gives a considerably smaller constant.

For easier formulations we introduce the following terminology: If X ⊂ R^d is a finite set, an X-simplex is the convex hull of some (d+1)-tuple of points of X. We make the convention that X-simplices are in bijective correspondence with their vertex sets. This means that two X-simplices determined by two distinct (d+1)-point subsets of X are considered different even if they coincide as subsets of R^d. Thus, the X-simplices form a multiset in general. This concerns only sets X in degenerate positions; if X is in general position, then distinct (d+1)-point sets have distinct convex hulls.

9.1.1 Theorem (First selection lemma). Let X be an n-point set in R^d. Then there exists a point a ∈ R^d (not necessarily belonging to X) contained in at least c_d·C(n, d+1) X-simplices, where c_d > 0 is a constant depending only on the dimension d.

The best possible value of c_d is not known, except for the planar case. The first proof below shows that for n very large, we may take c_d ≈ (d+1)^{-(d+1)}.
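The planar statement is easy to test by brute force on a symmetric example (the regular 7-gon and the center as the candidate point are ad hoc choices, not from the text): count the triangles spanned by the points that contain the center, and compare with the 2/9 fraction.

```python
# Hypothetical sketch: for the vertices of a regular 7-gon, count the triangles
# containing the center and compare with the 2/9 bound of the planar statement.

from itertools import combinations
from math import cos, sin, pi, comb

def triangle_contains(a, b, c, p):
    """Closed point-in-triangle test via the three orientation signs."""
    def crs(u, v, w):
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    s1, s2, s3 = crs(a, b, p), crs(b, c, p), crs(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

n = 7
X = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
hits = sum(triangle_contains(a, b, c, (0.0, 0.0))
           for a, b, c in combinations(X, 3))
print(hits, comb(n, 3))          # 14 35: the center lies in 14 of 35 triangles
assert hits >= (2 / 9) * comb(n, 3)
```

For an odd regular n-gon a triple avoids the center exactly when it fits in an open half-circle, which gives C(n, 3) − n·C((n−1)/2, 2) center-containing triangles; for n = 7 that is 35 − 21 = 14, a 2/5 fraction, above 2/9.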
The first proof: from Tverberg and colorful Carathéodory. We may suppose that $n$ is sufficiently large ($n \ge n_0$ for a given constant $n_0$), for otherwise we can set $c_d$ to be sufficiently small and choose a point contained in a single X-simplex.

Put $r = \lceil n/(d+1)\rceil$. By Tverberg's theorem (Theorem 8.3.1), there exist $r$ pairwise disjoint sets $M_1,\dots,M_r \subset X$ whose convex hulls all have a point in common; call this point $a$. (A typical $M_i$ has $d{+}1$ points, but some of them may be smaller.) We want to show that the point $a$ is contained in many X-simplices (so far we have $\mathrm{const}\cdot n$ and we need $\mathrm{const}\cdot n^{d+1}$).

Let $J = \{j_0,\dots,j_d\} \subset \{1, 2,\dots,r\}$ be a set of $d{+}1$ indices. We apply the colorful Carathéodory theorem (Theorem 8.2.1) to the $d{+}1$ "color" sets $M_{j_0},\dots,M_{j_d}$, which all contain $a$ in their convex hull. This yields a rainbow X-simplex $S_J$ containing $a$ and having one vertex from each of the $M_{j_i}$.

If $J' \ne J$ are two $(d{+}1)$-tuples of indices, then $S_J \ne S_{J'}$. Hence the number of X-simplices containing the point $a$ is at least
$$\binom{r}{d+1} = \binom{\lceil n/(d+1)\rceil}{d+1} \ \ge\ \frac{1}{(d+1)^{d+1}}\cdot\frac{n(n-(d{+}1))\cdots(n-d(d{+}1))}{(d+1)!}.$$
For $n$ sufficiently large, say $n \ge 2d(d{+}1)$, this is at least $(d+1)^{-(d+1)}\,2^{-d}\binom{n}{d+1}$. $\square$

The second proof: from fractional Helly. Let $\mathcal F$ denote the family of all X-simplices. Put $N = |\mathcal F| = \binom{n}{d+1}$. We want to apply the fractional Helly theorem (Theorem 8.1.1) to $\mathcal F$. Call a $(d{+}1)$-tuple of sets of $\mathcal F$ good if its $d{+}1$ sets have a common point. To prove the first selection lemma, it suffices to show that there are at least $\alpha\binom{N}{d+1}$ good $(d{+}1)$-tuples for some $\alpha > 0$ independent of $n$, since then the fractional Helly theorem provides a point common to at least $\beta N$ members of $\mathcal F$.

Set $t = (d+1)^2$ and consider a $t$-point set $Y \subset X$.
Using Tverberg's theorem, we find that $Y$ can be partitioned into $d{+}1$ pairwise disjoint sets, of size $d{+}1$ each, whose convex hulls have a common point. (Tverberg's theorem does not guarantee that the parts have size $d{+}1$, but if they don't, we can move points from the larger parts to the smaller ones, using Carathéodory's theorem.) Therefore, each $t$-point $Y \subset X$ provides at least one good $(d{+}1)$-tuple of members of $\mathcal F$. Moreover, the members of this good $(d{+}1)$-tuple are pairwise vertex-disjoint, and therefore the $(d{+}1)$-tuple uniquely determines $Y$. It follows that the number of good $(d{+}1)$-tuples is at least $\binom{n}{t} = \Omega(n^{(d+1)^2}) \ge \alpha\binom{N}{d+1}$ for a suitable $\alpha > 0$ independent of $n$. $\square$

In the first proof we have used Tverberg's theorem for a large point set, while in the second proof we applied it only to configurations of bounded size. For the latter application, if we do not care about the constant of proportionality in the first selection lemma, a weaker version of Tverberg's theorem suffices, namely the finiteness of $T(d, d{+}1)$, which can be proved by quite simple arguments, as we have seen.

The relation of Tverberg's theorem to the first selection lemma in the second proof somewhat resembles the derivation of macroscopic properties in physics (pressure, temperature, etc.) from microscopic properties (laws of motion of molecules, say). From the information about small (microscopic) configurations we obtained a global (macroscopic) result, saying that a significant portion of the X-simplices have a common point.

A point in the interior of many X-simplices. In applications of the first selection lemma (or its relatives) we often need to know that there is a point contained in the interior of many of the X-simplices. To assert anything like that, we have to assume some kind of nondegenerate position of $X$. The following lemma helps in most cases.

9.1.2 Lemma.
Let $X \subset \mathbf{R}^d$ be a set of $n > d{+}1$ points in general position, meaning that no $d{+}1$ points of $X$ lie on a common hyperplane, and let $\mathcal H$ be the set of the $\binom{n}{d}$ hyperplanes determined by the points of $X$. Then no point $a \in \mathbf{R}^d$ is contained in more than $dn^{d-1}$ hyperplanes of $\mathcal H$. Consequently, at most $O(n^d)$ X-simplices have $a$ on their boundary.

Proof. For each $d$-tuple $S$ whose hyperplane contains $a$, we choose an inclusion-minimal set $K(S) \subseteq S$ whose affine hull contains $a$. We claim that if $|K(S_1)| = |K(S_2)| = k$, then either $K(S_1) = K(S_2)$ or $K(S_1)$ and $K(S_2)$ share at most $k{-}2$ points.

Indeed, if $K(S_1) = \{x_1,\dots,x_{k-1},x_k\}$ and $K(S_2) = \{x_1,\dots,x_{k-1},y_k\}$, $x_k \ne y_k$, then the affine hulls of $K(S_1)$ and $K(S_2)$ are distinct, for otherwise we would have $k{+}1$ points in a common $(k{-}1)$-flat, contradicting the general position of $X$. But then the affine hulls intersect in the $(k{-}2)$-flat generated by $x_1,\dots,x_{k-1}$ and containing $a$, and $K(S_1)$ and $K(S_2)$ are not inclusion-minimal.

Therefore, the first $k{-}1$ points of $K(S)$ determine the last one uniquely, and the number of distinct sets of the form $K(S)$ of cardinality $k$ is at most $n^{k-1}$. The number of hyperplanes determined by $X$ and containing a given $k$-point set $K \subset X$ is at most $n^{d-k}$, and the lemma follows by summing over $k$. $\square$

Bibliography and remarks. The planar version of the first selection lemma, with the best possible constant $\frac{2}{9}$, was proved by Boros and Füredi [BF84]. A generalization to an arbitrary dimension, with the first of the two proofs given above, was found by Bárány [Bar82]. The idea of the proof of Lemma 9.1.2 was communicated to me by János Pach.

Boros and Füredi [BF84] actually showed that any centerpoint of $X$ works; that is, it is contained in at least $\frac{2}{9}\binom{n}{3}$ X-triangles. Wagner and Welzl (private communication) observed that a centerpoint works in every fixed dimension, being common to at least $c_d\binom{n}{d+1}$ X-simplices.
This follows from known results on the face numbers of convex polytopes using the Gale transform, and it provides yet another proof of the first selection lemma, yielding a slightly better value of the constant $c_d$ than that provided by Bárány's proof. Moreover, for a centrally symmetric point set $X$ this method implies that the origin is contained in the largest possible number of X-simplices.

As for upper bounds, it is known that no $n$-point $X \subset \mathbf{R}^d$ in general position has a point common to more than $2^{-d}\binom{n}{d+1}$ X-simplices [Bar82]. It seems that suitable sets might provide stronger bounds, but no results in this direction are known.

9.2 The Second Selection Lemma

In this section we continue using the term X-simplex in the sense of Section 9.1; that is, an X-simplex is the convex hull of a $(d{+}1)$-point subset of $X$. In that section we saw that if $X$ is a set in $\mathbf{R}^d$ and we consider all the X-simplices, then at least a fixed fraction of them have a point in common. What if we do not have all, but many X-simplices, some $\alpha$-fraction of all? It turns out that still many of them must have a point in common, as stated in the second selection lemma below.

9.2.1 Theorem (Second selection lemma). Let $X$ be an $n$-point set in $\mathbf{R}^d$ and let $\mathcal F$ be a family of $\alpha\binom{n}{d+1}$ X-simplices, where $\alpha \in (0, 1]$ is a parameter. Then there exists a point contained in at least
$$c\,\alpha^{s_d}\binom{n}{d+1}$$
X-simplices of $\mathcal F$, where $c = c(d) > 0$ and $s_d$ are constants.

This result is already interesting for $\alpha$ fixed. But for the application that motivated the discovery of the second selection lemma, namely, trying to bound the number of $k$-sets (see Chapter 11), the dependence of the bound on $\alpha$ is important, and it would be nice to determine the best possible value of the exponent $s_d$. For $d = 1$ it is not too difficult to obtain an asymptotically sharp bound (see Exercise 1).
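In the one-dimensional case, stabbing counts can be computed exactly by brute force. The following sketch is illustrative only (the specific point set and the "short intervals" subfamily are ad hoc assumptions, not from the book):

```python
from itertools import combinations

# X-intervals: convex hulls of pairs of points of the n-point set X on R.
X = list(range(8))
all_intervals = list(combinations(X, 2))          # C(8,2) = 28 intervals

# A subfamily F: all "short" intervals, a constant fraction alpha of all.
F = [(a, b) for (a, b) in all_intervals if b - a <= 3]

def stab_count(x, family):
    # Number of intervals of the family containing the point x.
    return sum(a <= x <= b for a, b in family)

alpha = len(F) / len(all_intervals)               # 18/28
best = max(X, key=lambda x: stab_count(x, F))
# The points 3 and 4 each lie in 9 of the 18 intervals of F; the
# one-dimensional selection lemma guarantees some point common to
# Omega(alpha^2 * C(n,2)) intervals.
```

The best stabbing point is always attained at one of the interval endpoints, so searching over $X$ itself suffices.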
For $d = 2$ the best known bound (probably still not sharp) is as follows: If $|\mathcal F| = n^{3-\nu}$, then there is a point contained in at least $\Omega(n^{3-3\nu}/\log^5 n)$ X-triangles of $\mathcal F$. In the parameterization as in Theorem 9.2.1, this means that $s_2$ can be taken arbitrarily close to 3, provided that $\alpha$ is sufficiently small, say $\alpha \le n^{-\delta}$ for some $\delta > 0$. For higher dimensions, the best known proof gives $s_d \approx (4d+1)^{d+1}$.

Hypergraphs. It is convenient to formulate some of the subsequent considerations in the language of hypergraphs. Hypergraphs are a generalization of graphs where edges can have more than 2 points (from another point of view, a hypergraph is synonymous with a set system). A hypergraph is a pair $H = (V, E)$, where $V$ is the vertex set and $E \subseteq 2^V$ is a system of subsets of $V$, the edge set. A $k$-uniform hypergraph has all edges of size $k$ (so a graph is a 2-uniform hypergraph). A $k$-partite hypergraph is one where the vertex set can be partitioned into $k$ subsets $V_1, V_2,\dots,V_k$, the classes, so that each edge contains at most one point from each $V_i$. The notions of subhypergraph and isomorphism are defined analogously to those for graphs. A subhypergraph is obtained by deleting some vertices and some edges (all edges containing the deleted vertices, but possibly more). An isomorphism is a bijection of the vertex sets that maps edges to edges in both directions (a renaming of the vertices).

Proof of the second selection lemma. The proof is somewhat similar to the second proof of the first selection lemma (Theorem 9.1.1). We again use the fractional Helly theorem. We need to show that many $(d{+}1)$-tuples of X-simplices of $\mathcal F$ are good (have nonempty intersections).

We can view $\mathcal F$ as a $(d{+}1)$-uniform hypergraph. That is, we regard $X$ as the vertex set, and each X-simplex corresponds to an edge, i.e., a subset of $X$ of size $d{+}1$.
This hypergraph captures the "combinatorial type" of the family $\mathcal F$, and a specific placement of the points of $X$ in $\mathbf{R}^d$ then gives a concrete "geometric realization" of $\mathcal F$.

First, let us concentrate on the simpler task of exhibiting at least one good $(d{+}1)$-tuple; even this seems quite nontrivial. Why cannot we proceed as in the second proof of the first selection lemma? Let us give a concrete example with $d = 2$. Following that proof, we would consider 9 points in $\mathbf{R}^2$, and Tverberg's theorem would provide a partition into triples with intersecting convex hulls. But it can easily happen that one of these triples, say $\{a, b, c\}$, is not an edge of our hypergraph. Tverberg's theorem gives us no additional information on which triples appear in the partition, and so this argument would guarantee a good triple only if all the triples on the considered 9 points were contained in $\mathcal F$. Unfortunately, a 3-uniform hypergraph on $n$ vertices can contain more than half of all possible $\binom{n}{3}$ triples without containing all triples on some 9 points (even on 4 points). This is a "higher-dimensional" version of the fact that the complete bipartite graph on $\frac n2 + \frac n2$ vertices has about $\frac14 n^2$ edges without containing a triangle.

Hypergraphs with many edges need not contain complete hypergraphs, but they have to contain complete multipartite hypergraphs. For example, a graph on $n$ vertices with significantly more than $n^{3/2}$ edges contains $K_{2,2}$, the complete bipartite graph on $2+2$ vertices (see Section 4.5). Concerning hypergraphs, let $K_{d+1}(t)$ denote the complete $(d{+}1)$-partite $(d{+}1)$-uniform hypergraph with $t$ vertices in each of its $d{+}1$ vertex classes. The illustration shows a $K_3(4)$; only three edges are drawn as a sample, although of course, all triples connecting vertices at different levels are present. If $t$ is a constant and we have a $(d{+}1)$-uniform hypergraph on $n$ vertices with sufficiently many edges, then it has to contain a copy of $K_{d+1}(t)$ as a subhypergraph.
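The bipartite case is easy to quantify by double counting common neighborhoods: each pair of left vertices with $c$ common neighbors spans $\binom{c}{2}$ copies of $K_{2,2}$. A minimal counting sketch (the graph and function names are ad hoc, not from the book):

```python
from itertools import combinations

def count_k22(left, edges):
    # Each pair of left vertices with c common right neighbours spans
    # C(c, 2) copies of K_{2,2}; summing over pairs counts every copy once.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    total = 0
    for u1, u2 in combinations(left, 2):
        c = len(adj.get(u1, set()) & adj.get(u2, set()))
        total += c * (c - 1) // 2
    return total

L, R = range(3), range(3)
edges = [(u, v) for u in L for v in R]    # the complete bipartite graph K_{3,3}
# K_{3,3} contains C(3,2) * C(3,2) = 9 copies of K_{2,2}.
```

Convexity of $\binom{c}{2}$ in $c$ is exactly what forces many copies once the edge count is large, which is the mechanism generalized by the Erdős-Simonovits theorem below.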
We do not formulate this result precisely, since we will need a stronger one later.

In geometric language, given a family $\mathcal F$ of sufficiently many X-simplices, we can color some $t$ points of $X$ red, some other $t$ points blue, ..., and $t$ points by color $d{+}1$, in such a way that all the rainbow X-simplices on the $(d{+}1)t$ colored points are present in $\mathcal F$. And in such a situation, if $t$ is a sufficiently large constant, the colored Tverberg theorem (Theorem 8.3.3) with $r = d{+}1$ claims that we can find a $(d{+}1)$-tuple of vertex-disjoint rainbow X-simplices whose convex hulls intersect, and so there is a good $(d{+}1)$-tuple! In fact, these are the considerations that led to the formulation of the colored Tverberg theorem.

For the fractional Helly theorem, we need not only one but many good $(d{+}1)$-tuples. We use an appropriate stronger hypergraph result, saying that if a hypergraph has enough edges, then it contains many copies of $K_{d+1}(t)$:

9.2.2 Theorem (The Erdős-Simonovits theorem). Let $d$ and $t$ be positive integers. Let $\mathcal H$ be a $(d{+}1)$-uniform hypergraph on $n$ vertices and with $\alpha\binom{n}{d+1}$ edges, where $\alpha \ge Cn^{-1/t^d}$ for a certain sufficiently large constant $C$. Then $\mathcal H$ contains at least
$$c\,\alpha^{t^{d+1}} n^{(d+1)t}$$
copies of $K_{d+1}(t)$, where $c = c(d, t) > 0$ is a constant.

For completeness, a proof is given at the end of this section. Note that in particular, the theorem implies that a $(d{+}1)$-uniform hypergraph having at least a constant fraction of all possible edges contains at least a constant fraction of all possible copies of $K_{d+1}(t)$.

We can now finish the proof of the second selection lemma by double counting. The given family $\mathcal F$, viewed as a $(d{+}1)$-uniform hypergraph, has $\alpha\binom{n}{d+1}$ edges, and thus it contains at least $c\,\alpha^{t^{d+1}} n^{(d+1)t}$ copies of $K_{d+1}(t)$ by Theorem 9.2.2. As was explained above, each such copy contributes at least one good $(d{+}1)$-tuple of vertex-disjoint X-simplices of $\mathcal F$.
On the other hand, $d{+}1$ vertex-disjoint X-simplices together have $(d{+}1)^2$ vertices, and hence their vertex set can be extended to a vertex set of some $K_{d+1}(t)$ (which has $t(d{+}1)$ vertices) in at most $n^{t(d+1)-(d+1)^2}$ ways. Consequently, the number of good $(d{+}1)$-tuples is at least $c\,\alpha^{t^{d+1}} n^{(d+1)t}/n^{t(d+1)-(d+1)^2} = c\,\alpha^{t^{d+1}} n^{(d+1)^2}$, and applying the fractional Helly theorem yields the second selection lemma, with the exponent $s_d \le (4d+1)^{d+1}$ (for the value $t = 4d{+}1$ sufficient in the colored Tverberg theorem). $\square$

Proof of the Erdős-Simonovits theorem (Theorem 9.2.2). By induction on $k$, we are going to show that a $k$-uniform hypergraph on $n$ vertices and with $m$ edges contains at least $f_k(n, m)$ copies of $K_k(t)$, where
$$f_k(n, m) = c_k\, n^{kt}\Bigl(\frac{m}{n^k}\Bigr)^{t^k} \quad\text{for } m \ge C_k\, n^{k - 1/t^{k-1}}$$
(and $f_k(n, m) = 0$ for smaller $m$), with $c_k > 0$ and $C_k$ suitable constants depending on $k$ and also on $t$ ($t$ is not shown in the notation, since it remains fixed). This claim with $k = d{+}1$ implies the Erdős-Simonovits theorem.

For $k = 1$ the claim holds. So let $k > 1$ and let $\mathcal H$ be $k$-uniform with vertex set $V$, $|V| = n$, and edge set $E$, $|E| = m$. For a vertex $v \in V$, define a $(k{-}1)$-uniform hypergraph $\mathcal H_v$ on $V$, whose edges are all edges of $\mathcal H$ that contain $v$, but with $v$ deleted; that is, $\mathcal H_v = (V, \{e \setminus \{v\} : e \in E,\ v \in e\})$. Further, let $\mathcal H'$ be the $(k{-}1)$-uniform hypergraph whose edge set is the union of the edge sets of all the $\mathcal H_v$.

Let $\mathcal K$ denote the set of all copies of the complete $(k{-}1)$-partite hypergraph $K_{k-1}(t)$ in $\mathcal H'$. The key notion in the proof is that of an extending vertex for a copy $K \in \mathcal K$: A vertex $v \in V$ is extending for a $K \in \mathcal K$ if $K$ is contained in $\mathcal H_v$, or in other words, if for each edge $e$ of $K$, $e \cup \{v\}$ is an edge of $\mathcal H$. The picture below shows a $K_2(2)$ and an extending vertex for it (in a 3-uniform hypergraph).

The idea is to count the number of all pairs $(K, v)$, where $K \in \mathcal K$ and $v$ is an extending vertex of $K$, in two ways. On the one hand, if a fixed copy $K \in \mathcal K$ has $q_K$ extending vertices, then it contributes $\binom{q_K}{t}$ distinct copies of $K_k(t)$ in $\mathcal H$. We note that one copy of $K_k(t)$ comes from at most $O(1)$ distinct $K \in \mathcal K$ in this way, and therefore it suffices to bound $\sum_{K\in\mathcal K}\binom{q_K}{t}$ from below.
On the other hand, for a fixed vertex $v$, the hypergraph $\mathcal H_v$ contains at least $f_{k-1}(n, m_v)$ copies $K \in \mathcal K$ by the inductive assumption, where $m_v$ is the number of edges of $\mathcal H_v$. Hence
$$\sum_{K\in\mathcal K} q_K \ \ge\ \sum_{v\in V} f_{k-1}(n, m_v).$$
Using $\sum_{v\in V} m_v = km$, the convexity of $f_{k-1}$ in the second variable, and Jensen's inequality (see page xvi), we obtain
$$\sum_{K\in\mathcal K} q_K \ \ge\ n\, f_{k-1}(n, km/n). \tag{9.1}$$
To conclude the proof, we define a convex function extending the binomial coefficient $\binom{x}{t}$ to the domain $\mathbf{R}$:
$$g(x) = \begin{cases} 0 & \text{for } x \le t-1,\\[2pt] \dfrac{x(x-1)\cdots(x-t+1)}{t!} & \text{for } x > t-1. \end{cases}$$
We want to bound $\sum_{K\in\mathcal K} g(q_K)$ from below, and we have the bound (9.1) for $\sum_{K\in\mathcal K} q_K$. Using the bound $|\mathcal K| \le n^{t(k-1)}$ (clear, since $K_{k-1}(t)$ has $t(k{-}1)$ vertices) and Jensen's inequality, we derive that the number of copies of $K_k(t)$ in $\mathcal H$ is at least
$$c\, n^{t(k-1)}\, g\!\left(\frac{n\, f_{k-1}(n, km/n)}{n^{t(k-1)}}\right).$$
A calculation finishes the induction step; we omit the details. $\square$

Bibliography and remarks. The second selection lemma was conjectured, and proved in the planar case, by Bárány, Füredi, and Lovász [BFL90]. The missing part for higher dimensions was the colored Tverberg theorem (discussed in Section 8.3). A proof for the planar case by a different technique, with considerably better quantitative bounds than can be obtained by the method shown above, was given by Aronov, Chazelle, Edelsbrunner, Guibas, Sharir, and Wenger [ACE+91] (the bounds were mentioned in the text). The full proof of the second selection lemma for arbitrary dimension appears in Alon, Bárány, Füredi, and Kleitman [ABFK92]. Several other "selection lemmas," sometimes involving geometric objects other than simplices, were proved by Chazelle, Edelsbrunner, Guibas, Hershberger, Seidel, and Sharir [CEG+94]. Theorem 9.2.2 is from Erdős and Simonovits [ES83].

Exercises

1. (a) Prove a one-dimensional selection lemma: Given an $n$-point set $X \subset \mathbf{R}$ and a family $\mathcal F$ of $\alpha\binom{n}{2}$ X-intervals, there exists a point common to $\Omega(\alpha^2\binom{n}{2})$ intervals of $\mathcal F$.
What is the best value of the constant of proportionality you can get?
(b) Show that this result is sharp (up to the value of the multiplicative constant) in the full range of $\alpha$.
2. (a) Show that the exponent $s_2$ in the second selection lemma in the plane cannot be smaller than 2.
(b) Show that $s_3 > 2$. Can you also show that $s_d > 2$?
(c) Show that the proof method via the fractional Helly theorem cannot give a better value of $s_2$ than 3 in Theorem 9.2.1. That is, construct an $n$-point set and $\alpha\binom{n}{3}$ triangles on it in such a way that no more than $O(\alpha^5 n^9)$ triples of these triangles have a point in common.

9.3 Order Types and the Same-Type Lemma

The order type of a set. There are infinitely many 4-point sets in the plane in general position, but there are only two "combinatorially distinct" types of such sets: four points in convex position, and three points with the fourth one inside their triangle.

What is an appropriate equivalence relation that would capture the intuitive notion of two finite point sets in $\mathbf{R}^d$ being "combinatorially the same"? We have already encountered one suitable notion of combinatorial isomorphism in Section 5.6. Here we describe an equivalent but perhaps more intuitive approach based on the order type of a configuration.

First we explain this notion for planar configurations in general position, where it is quite simple. Let $p = (p_1, p_2,\dots,p_n)$ and $q = (q_1, q_2,\dots,q_n)$ be two sequences of points in $\mathbf{R}^2$, both in general position (no 2 points coincide and no 3 are collinear). Then $p$ and $q$ have the same order type if for any indices $i < j < k$, we turn in the same direction (right or left) when going from $p_i$ to $p_k$ via $p_j$ as when going from $q_i$ to $q_k$ via $q_j$. We say that both the triples $(p_i, p_j, p_k)$ and $(q_i, q_j, q_k)$ have the same orientation. If the point sequences $p$ and $q$ are in $\mathbf{R}^d$, we require that every $(d{+}1)$-element subsequence of $p$ have the same orientation as the corresponding subsequence of $q$.
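In the plane, each orientation is just the sign of a $2\times 2$ determinant, so order types can be compared directly. A small sketch (the sample configurations and names are illustrative assumptions, not from the book):

```python
from itertools import combinations

def orientation(a, b, c):
    # Sign of the determinant with columns b - a and c - a:
    # +1 left turn, -1 right turn, 0 collinear.
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def order_type(p):
    # Orientation of every index triple i < j < k, in lexicographic order.
    return tuple(orientation(p[i], p[j], p[k])
                 for i, j, k in combinations(range(len(p)), 3))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
skewed = [(0, 0), (2, 1), (3, 3), (1, 2)]   # a distorted convex quadrilateral
tri_in = [(0, 0), (3, 0), (0, 3), (1, 1)]   # fourth point inside a triangle

# square and skewed have the same order type; tri_in has a different one,
# matching the two combinatorial types of 4-point planar sets.
```

The same function with $(d{+}1)\times(d{+}1)$ determinants handles the general-dimensional definition given next.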
The notion of orientation is best explained for $d$-tuples of vectors in $\mathbf{R}^d$. If $v_1,\dots,v_d$ are vectors in $\mathbf{R}^d$, there is a unique linear mapping sending the vector $e_i$ of the standard basis of $\mathbf{R}^d$ to $v_i$, $i = 1, 2,\dots,d$. The matrix $A$ of this mapping has the vectors $v_1,\dots,v_d$ as its columns. The orientation of $(v_1,\dots,v_d)$ is defined as the sign of $\det(A)$; so it can be $+1$ (positive orientation), $-1$ (negative orientation), or $0$ (the vectors are linearly dependent and lie in a $(d{-}1)$-dimensional linear subspace). For a $(d{+}1)$-tuple of points $(p_1, p_2,\dots,p_{d+1})$, we define the orientation to be the orientation of the $d$ vectors $p_2 - p_1, p_3 - p_1,\dots,p_{d+1} - p_1$. Geometrically, the orientation of a 4-tuple $(p_1, p_2, p_3, p_4)$ in $\mathbf{R}^3$ tells us on which side of the plane $p_1p_2p_3$ the point $p_4$ lies (if $p_1, p_2, p_3, p_4$ are affinely independent).

Returning to the order type, let $p = (p_1, p_2,\dots,p_n)$ be a point sequence in $\mathbf{R}^d$. The order type of $p$ (also called the chirotope of $p$) is defined as the mapping assigning to each $(d{+}1)$-tuple $(i_1, i_2,\dots,i_{d+1})$ of indices, $1 \le i_1 < i_2 < \cdots < i_{d+1} \le n$, the orientation of the $(d{+}1)$-tuple $(p_{i_1}, p_{i_2},\dots,p_{i_{d+1}})$. Thus, the order type of $p$ can be described by a sequence of $+1$'s, $-1$'s, and $0$'s with $\binom{n}{d+1}$ terms.

The order type makes good sense only for point sequences in $\mathbf{R}^d$ containing some $d{+}1$ affinely independent points. Then one can read off various properties of the sequence from its order type, such as general position, convex position, and so on; see Exercise 1.

In this section we prove a powerful Ramsey-type result concerning order types, called the same-type lemma.

Same-type transversals. Let $(Y_1, Y_2,\dots,Y_m)$ be an $m$-tuple of finite sets in $\mathbf{R}^d$. By a transversal of this $m$-tuple we mean any $m$-tuple $(y_1, y_2,\dots,y_m)$ such that $y_i \in Y_i$ for all $i$. We say that $(Y_1, Y_2,\dots$
$\dots, Y_m)$ has same-type transversals if all of its transversals have the same order type. Here is an example of four planar sets $Y_1, Y_2, Y_3, Y_4$ with same-type transversals (figure omitted).

If $(X_1, X_2,\dots,X_m)$ are very large finite sets such that $X_1 \cup \cdots \cup X_m$ is in general position,¹ we can find not too small subsets $Y_1 \subseteq X_1,\dots,Y_m \subseteq X_m$ such that $(Y_1,\dots,Y_m)$ has same-type transversals. To see this, color each transversal of $(X_1, X_2,\dots,X_m)$ by its order type. Since the number of possible order types of an $m$-point set in general position cannot exceed $r = 2^{\binom{m}{d+1}}$, we have a coloring of the edges of the complete $m$-partite hypergraph on $(X_1,\dots,X_m)$ by $r$ colors. By the Erdős-Simonovits theorem (Theorem 9.2.2), there are sets $Y_i \subseteq X_i$, not too small, such that all edges induced by $Y_1 \cup \cdots \cup Y_m$ have the same color, i.e., $(Y_1,\dots,Y_m)$ has same-type transversals.

As is the case for many other geometric applications of Ramsey-type theorems, this result can be quantitatively improved tremendously by a geometric argument: For $m$ and $d$ fixed, the size of the sets $Y_i$ can be made a constant fraction of $|X_i|$.

9.3.1 Theorem (Same-type lemma). For any integers $d, m \ge 1$, there exists $c = c(d, m) > 0$ such that the following holds. Let $X_1, X_2,\dots,X_m$ be finite sets in $\mathbf{R}^d$ such that $X_1 \cup \cdots \cup X_m$ is in general position. Then there are $Y_1 \subseteq X_1,\dots,Y_m \subseteq X_m$ such that the $m$-tuple $(Y_1, Y_2,\dots,Y_m)$ has same-type transversals and $|Y_i| \ge c|X_i|$ for all $i = 1, 2,\dots,m$.

Proof. First we observe that it is sufficient to prove the same-type lemma for $m = d{+}1$. For larger $m$, we begin with $(X_1, X_2,\dots,X_m)$ as the current $m$-tuple of sets. Then we go through all $(d{+}1)$-tuples $(i_1, i_2,\dots,i_{d+1})$ of indices, and if $(Z_1,\dots,Z_m)$ is the current $m$-tuple of sets, we apply the same-type lemma to the $(d{+}1)$-tuple $(Z_{i_1},\dots,Z_{i_{d+1}})$. These sets are replaced by smaller
, Zm) is the current m-tuple of sets, we apply the same-type lemma to the (d+l)-tuple (Zi1 , • • • , Zid+t ). These sets are replaced by smaller 1 This is a shorthand for saying that Xi n X1 = 0 for all i =/= j and X1 U · · · u Xm is in general position. 218 Chapter 9: Geometric Selection Theoren1s sets ( z:l ' . . . ' z:d+l) such that this ( d+ 1 )-tuple has same-type transversals. After this step is executed for all (d+l)-tuples of indices, the resulting current m-tuple of sets has same-type transversals. This tnethod gives the rather small lower bound To handle the crucial case m = d+ 1, we will use the following criterion for a ( d+ 1 )-tuple of sets having same-type transversals. 9.3.2 Lemma. Let 01, 02, . . . , Cd+l c Rd be convex sets. The f ollowing two conditions are equivalent: ( i) There is no hyperplane simultaneously intersecting all of C 1, C2, . . . , C d+ 1. (ii) For each nonempty index set I C {1, 2, . . . , d+l}, the sets UiEI Ci and UiԎI Ci can be strictly separated by a hyperplane. Moreover, if X1, X2, . . . , Xd+l c Rd are finite sets such that the sets Ci = conv(Xi) have property (i) (and (ii)), then (X1, . . • , Xd+l) has same-type transversals. In particular, planar convex sets C 1, C2, C3 have no line transversal if and only if each of them can be separated by a line from the other two. The proof of this neat result is left to Exercise 3. We will not need the assertion that (i) implies (ii). Same-type lemma for d+l sets. To prove the same-type lemma for the case m = d+l, it now suffices to choose the sets Yi C Xi in such a way that their convex hulls are separated in the sense of (ii) in Lernma 9.3.2. This can be done by an iterative application of the ham-sandwich theorem (Theorem 1.4.3). Suppose that for some nonempty index set I c { 1, 2, . . . , d + 1}, the sets conv(UiEI Xi) and conv(UjԎJ XJ) cannot be separated by a hyperplane. For notational convenience, we assume that d+ 1 E I. 
Let $h$ be a hyperplane simultaneously bisecting $X_1, X_2,\dots,X_d$, whose existence is guaranteed by the ham-sandwich theorem. Let $\gamma$ be a closed half-space bounded by $h$ and containing at least half of the points of $X_{d+1}$. For all $i \in I$, including $i = d{+}1$, we discard the points of $X_i$ not lying in $\gamma$, and for $j \notin I$ we throw away the points of $X_j$ that lie in the interior of $\gamma$ (note that points on $h$ are never discarded); see Figure 9.1.

We claim that the union of the resulting sets with indices in $I$ is now strictly separated from the union of the remaining sets. If $h$ contains no points of the sets, then it is a separating hyperplane. Otherwise, let the points contained in $h$ be $a_1,\dots,a_t$; we have $t \le d$ by the general position assumption. For each $a_j$, choose a point $a'_j$ very near to $a_j$. If $a_j$ lies in some $X_i$ with $i \in I$, then $a'_j$ is chosen in the complement of $\gamma$, and otherwise, it is chosen in the interior of $\gamma$. We let $h'$ be a hyperplane passing through $a'_1,\dots,a'_t$ and lying very close to $h$.

[Figure 9.1: Proof of the same-type lemma for $d = 2$, $m = 3$: the initial sets, the bisecting line $h$ with half-plane $\gamma$, and the results for $I = \{2, 3\}$, $I = \{3\}$, and $I = \{1, 3\}$.]

Then $h'$ is the desired separating hyperplane, provided that the $a'_j$ are sufficiently close to the corresponding $a_j$.

Thus we have "killed" the index set $I$, at the price of halving the sizes of the current sets; more precisely, the size of a set $X_i$ is reduced from $|X_i|$ to $\lceil |X_i|/2\rceil$ (or larger). We can continue with the other index sets in the same manner. After no more than $2^d - 1$ halvings, we obtain sets satisfying the separation condition and thus having same-type transversals. The same-type lemma is proved. The lower bound for $c(d, d{+}1)$ is doubly exponential, roughly $2^{-2^d}$. $\square$

A simple application.
We recall that by the Erdős-Szekeres theorem, for any natural number $k$ there is a natural number $n = n(k)$ such that any $n$-point set in the plane in general position contains a subset of $k$ points in convex position (forming the vertices of a convex $k$-gon). The same-type lemma immediately gives the following result:

9.3.3 Theorem (Positive-fraction Erdős-Szekeres theorem). For every integer $k \ge 4$ there is a constant $c_k > 0$ such that every sufficiently large finite set $X \subset \mathbf{R}^2$ in general position contains $k$ disjoint subsets $Y_1,\dots,Y_k$, of size at least $c_k|X|$ each, such that each transversal of $(Y_1,\dots,Y_k)$ is in convex position.

Proof. Let $n = n(k)$ be the number as in the Erdős-Szekeres theorem. We partition $X$ into $n$ sets $X_1,\dots,X_n$ of almost equal sizes, and we apply the same-type lemma to them, obtaining sets $Y_1,\dots,Y_n$, $Y_i \subseteq X_i$, with same-type transversals. Let $(y_1,\dots,y_n)$ be a transversal of $(Y_1,\dots,Y_n)$. By the Erdős-Szekeres theorem, there are $i_1 < i_2 < \cdots < i_k$ such that $y_{i_1},\dots,y_{i_k}$ are in convex position. Then $Y_{i_1},\dots,Y_{i_k}$ are as required in the theorem. $\square$

Bibliography and remarks. For more information on order types, the reader can consult the survey by Goodman and Pollack [GP93]. The same-type lemma is from Bárány and Valtr [BV98], and a very similar idea was used by Pach [Pac98]. Bárány and Valtr proved the positive-fraction Erdős-Szekeres theorem (the case $k = 4$ was established earlier by Nielsen), and they gave several more applications of the same-type lemma, such as a positive-fraction Radon lemma and a positive-fraction Tverberg theorem. Another, simple proof of the positive-fraction Erdős-Szekeres theorem was found by Pach and Solymosi [PS98b]; see Exercise 4 for an outline. The equivalence of (i) and (ii) in Lemma 9.3.2 is from Goodman, Pollack, and Wenger [GPW96].
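Convex position of a transversal, as in Theorem 9.3.3, can be tested from orientations alone. A brute-force sketch assuming general position (faster convex-hull methods exist; the helpers and sample points here are ad hoc, not from the book):

```python
from itertools import combinations

def orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def in_closed_triangle(q, a, b, c):
    s = (orient(a, b, q), orient(b, c, q), orient(c, a, q))
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)

def convex_position(pts):
    # Points in general position are convex independent iff no point lies
    # in a triangle spanned by three of the others (Caratheodory in R^2).
    for q in pts:
        rest = [p for p in pts if p != q]
        if any(in_closed_triangle(q, a, b, c)
               for a, b, c in combinations(rest, 3)):
            return False
    return True

quad = [(0, 0), (4, 0), (5, 3), (2, 5)]   # convex position
# Adding an interior point such as (2, 2) destroys convex position.
```

This is exactly the check one would run on a transversal $(y_1,\dots,y_k)$ drawn from the clusters $Y_1,\dots,Y_k$.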
A nice strengthening of the same-type lemma was proved by Pór [Pór02]: Instead of just selecting a $Y_i$ from each $X_i$, the $X_i$ can be completely partitioned into such $Y_i$. That is, for every $d$ and $m$ there exists $n = n(d, m)$ such that whenever $X_1, X_2,\dots,X_m \subset \mathbf{R}^d$ are finite sets with $|X_1| = |X_2| = \cdots = |X_m|$ and with $\bigcup X_i$ in general position, there are partitions $X_i = Y_{i1} \cup Y_{i2} \cup \cdots \cup Y_{in}$, $i = 1, 2,\dots,m$, such that for each $j = 1, 2,\dots,n$, the sets $Y_{1j}, Y_{2j},\dots,Y_{mj}$ have the same size and same-type transversals. Schematically, arranging the parts $Y_{ij}$ into an $m \times n$ array with rows indexed by $i$ and columns by $j$, the sets in each column have same-type transversals.

For the proof, one first observes that it suffices to prove the existence of $n(d, d{+}1)$; the larger $m$ follow as in the proof of the same-type lemma, by refining the partitions for every $(d{+}1)$-tuple of the indices $i$. The key step is showing $n(d, d{+}1) \le 2n(d{-}1, d{+}1)$. The $X_i$ are projected on a generic hyperplane $h$, and the appropriate partitions are found for the projections by induction. Let $X'_i \subset h$ be the projection of $X_i$, let $Y'_1,\dots,Y'_{d+1}$ be one of the "columns" in the partitions of the $X'_i$ (we omit the index $j$ for simpler notation), let $k = |Y'_i|$, and let $Y_i \subseteq X_i$ be the preimage of $Y'_i$. As far as separation by hyperplanes is concerned, the $Y'_i$ behave like $d{+}1$ points in general position in $\mathbf{R}^{d-1}$, and so there is only one inseparable (Radon) partition (see Exercise 1.3.9); i.e., there is an $I \subseteq \{1, 2,\dots,d{+}1\}$ (unique up to complementation) such that $\bigcup_{i\in I} Y'_i$ cannot be separated from $\bigcup_{i\notin I} Y'_i$. By an argument resembling proofs of the ham-sandwich theorem, it can be shown that there is a half-space $\gamma$ in $\mathbf{R}^d$ and a number $k_1$ such that $|\gamma \cap Y_i| = k_1$ for $i \in I$ and $|\gamma \cap Y_i| = k - k_1$ for $i \notin I$. Letting $Z_i = Y_i \cap \gamma$ for $i \in I$ and $Z_i = Y_i \setminus \gamma$ for $i \notin I$, and $T_i = Y_i \setminus Z_i$, one obtains that $(Z_1,\dots$
$\dots, Z_{d+1})$ satisfy condition (ii) in Lemma 9.3.2, and so they have same-type transversals, and similarly for the $T_i$. (A 2-dimensional picture in the original illustrates the construction, showing the projections $Y'_1, Y'_2, Y'_3$ on the line $h$ and the case $I = \{1, 3\}$.)

The problem of estimating $n(d, m)$ (the proof produces a doubly exponential bound) is interesting even for $d = 1$, and there Pór showed, by ingenious arguments, that $n(1, m) = \Theta(m^2)$.

Exercises

1. Let $p = (p_1, p_2,\dots,p_n)$ be a sequence of points in $\mathbf{R}^d$ containing $d{+}1$ affinely independent points. Explain how we can decide the following questions, knowing the order type of $p$ and nothing else about it:
(a) Is it true that for every $k$ points among the $p_i$, $k = 2, 3,\dots,d{+}1$, the affine hull has the maximum dimension $k{-}1$?
(b) Does $p_{d+2}$ lie in $\operatorname{conv}(\{p_1,\dots,p_{d+1}\})$?
(c) Are the points $p_1,\dots,p_n$ convex independent (i.e., is each of them a vertex of their convex hull)?
2. Let $p = (p_1, p_2,\dots,p_n)$ be a sequence of points in $\mathbf{R}^d$ whose affine hull is the whole of $\mathbf{R}^d$. Explain how we can determine the order type of $p$, up to a global change of all signs, from the knowledge of $\operatorname{sgn}(\mathrm{AffVal}(p))$ (the signs of affine functions on the $p_i$; see Section 5.6).
, p_{2k}} be a convex independent subset of X, where the points are numbered along the circumference of the convex hull in a clockwise order, say. The holder of S is the set H(S) = {p_1, p_3, ..., p_{2k−1}}. Show that there is a set H that is the holder of at least Ω(n^k) sets S.
(c) Derive that each of the indicated triangular regions of such an H contains Ω(n) points of X:

[Figure omitted: the convex polygon spanned by H, with a triangular region attached along each edge.]

Infer the positive-fraction Erdős–Szekeres theorem in the plane.
(d) Show that the positive-fraction Erdős–Szekeres theorem in higher dimensions is implied by the planar version.

5. (A Ramsey-type theorem for segments)
(a) Let L be a set of n lines and P a set of n points in the plane, both in general position and with no point of P lying on any line of L. Prove that we can select subsets L' ⊂ L, |L'| ≥ αn, and P' ⊂ P, |P'| ≥ αn, such that P' lies in a single cell of the arrangement of L' (where α > 0 is a suitable absolute constant). You can use the same-type lemma for m = 3 (or an elementary argument).
(b) Given a set S of n segments and a set L of n lines in the plane, both in general position and with no endpoint of a segment lying on any of the lines, show that there exist S' ⊂ S and L' ⊂ L, |S'|, |L'| ≥ βn, with a suitable constant β > 0, such that either each segment of S' intersects each line of L' or all segments of S' are disjoint from all lines of L'.
(c) Given a set R of n red segments and a set B of n blue segments in the plane, with R ∪ B in general position, prove that there are subsets R' ⊂ R, |R'| ≥ γn, and B' ⊂ B, |B'| ≥ γn, such that either each segment of R' intersects each segment of B' or each segment of R' is disjoint from each segment of B' (γ > 0 is another absolute constant).

The result in (c) is due to Pach and Solymosi [PS01].
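In the plane, the order-type and same-type machinery of these exercises rests on a single primitive, the orientation of a point triple, which is the sign of a determinant. The following is our own illustrative sketch, not code from the text; the helper names are ours:

```python
# Orientation predicate and a naive order-type comparison in the plane.
from itertools import combinations

def orientation(p, q, r):
    """+1 if p, q, r make a counterclockwise turn, -1 for clockwise,
    0 if collinear (the sign of a 2x2 determinant of differences)."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

def same_order_type(seq1, seq2):
    """Two point sequences have the same order type iff every index
    triple has the same orientation in both sequences."""
    assert len(seq1) == len(seq2)
    return all(orientation(*(seq1[i] for i in t)) ==
               orientation(*(seq2[i] for i in t))
               for t in combinations(range(len(seq1)), 3))
```

For instance, any two triangles listed counterclockwise have the same order type, while reversing the listing flips every orientation sign.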
9.4 A Hypergraph Regularity Lemma

Here we consider a fine tool from the theory of hypergraphs, which we will need for yet another version of the selection lemma in the subsequent section. It is a result inspired by the famous Szemerédi regularity lemma for graphs. Very roughly speaking, the Szemerédi regularity lemma says that for given ε > 0, the vertex set of any sufficiently large graph G can be partitioned into some number, not too small and not too large, of parts in such a way that the bipartite graphs between "most" pairs of the parts look like random bipartite graphs, up to an "error" bounded by ε. An exact formulation is rather complicated and is given in the notes below. The result discussed here is a hypergraph analogue of a weak version of the Szemerédi regularity lemma. It is easier to prove than the Szemerédi regularity lemma.

Let H = (X, E) be a k-partite hypergraph whose vertex set is the union of k pairwise disjoint n-element sets X_1, X_2, ..., X_k, and whose edges are k-tuples containing precisely one element from each X_i. For subsets Y_i ⊂ X_i, i = 1, 2, ..., k, let e(Y_1, ..., Y_k) denote the number of edges of H contained in Y_1 ∪ ... ∪ Y_k. In this notation, the total number of edges of H is equal to e(X_1, ..., X_k). Further, let

ρ(Y_1, ..., Y_k) = e(Y_1, ..., Y_k) / (|Y_1| · |Y_2| ··· |Y_k|)

denote the density of the subhypergraph induced by the Y_i.

9.4.1 Theorem (Weak regularity lemma for hypergraphs). Let H be a k-partite hypergraph as above, and suppose that ρ(H) = ρ(X_1, ..., X_k) ≥ β for some β > 0. Let 0 < ε < 1. Suppose that n is sufficiently large in terms of k, β, and ε. Then there exist subsets Y_i ⊂ X_i of equal size |Y_i| = s ≥ β^{1/ε^k} n, i = 1, 2, ..., k, such that

(i) (High density) ρ(Y_1, ..., Y_k) ≥ β, and
(ii) (Edges on all large subsets) e(Z_1, ..., Z_k) > 0 for any Z_i ⊂ Y_i with |Z_i| ≥ εs, i = 1, 2, ..., k.

The following scheme illustrates the situation (but of course, the vertices of the Y_i and Z_i need not be contiguous).
[Figure omitted: each X_i drawn as a row with nested subsets X_i ⊃ Y_i ⊃ Z_i; for all Z_1, ..., Z_k there exists an edge.]

Proof. Intuitively, the sets Y_i should be selected in such a way that the subhypergraph induced by them is as dense as possible. We then want to show that if there were Z_1, ..., Z_k of size at least εs with no edges on them, we could replace the Y_i by sets with a still larger density. But if we looked at the usual density ρ(Y_1, ..., Y_k), we would typically get too small sets Y_i. The trick is to look at a modified density parameter that slightly favors larger sets. Thus, for sets Y_1, ..., Y_k of a common size t, we define the magical density μ(Y_1, ..., Y_k) by

μ(Y_1, ..., Y_k) = e(Y_1, ..., Y_k) / t^{k−ε^k}.

We choose Y_1, ..., Y_k, Y_i ⊂ X_i, as sets of equal size that have the maximum possible magical density μ(Y_1, ..., Y_k). We denote the common size |Y_1| = ... = |Y_k| by s.

First we derive the condition (i) in the theorem for this choice of the Y_i. We have

e(Y_1, ..., Y_k) / s^{k−ε^k} = μ(Y_1, ..., Y_k) ≥ μ(X_1, ..., X_k) ≥ βn^{ε^k} ≥ βs^{ε^k},

and so e(Y_1, ..., Y_k) ≥ βs^k, which verifies (i). Since obviously e(Y_1, ..., Y_k) ≤ s^k, we have μ(Y_1, ..., Y_k) ≤ s^{ε^k}. Combining with μ(Y_1, ..., Y_k) ≥ βn^{ε^k} derived above, we also obtain that s ≥ β^{1/ε^k} n.

It remains to prove (ii). Since εs is a large number by the assumptions, rounding it up to an integer does not matter in the subsequent calculations (as can be checked by a simple but somewhat tedious analysis). In order to simplify matters, we will thus assume that εs is an integer, and we let Z_1 ⊂ Y_1, ..., Z_k ⊂ Y_k be εs-element sets. We want to prove e(Z_1, ..., Z_k) > 0. We have

e(Z_1, ..., Z_k) = e(Y_1, ..., Y_k)                          (9.2)
  − e(Y_1 \ Z_1, Y_2, Y_3, ..., Y_k)
  − e(Z_1, Y_2 \ Z_2, Y_3, ..., Y_k)
  − e(Z_1, Z_2, Y_3 \ Z_3, ..., Y_k)
  − ···
  − e(Z_1, Z_2, ..., Z_{k−1}, Y_k \ Z_k).

We want to show that the negative terms are not too large, using the assumption that the magical density of Y_1, ..., Y_k is maximum. The problem is that Y_1, ...
, Y_k maximize the magical density only among the sets of equal size, while we have sets of different sizes in the terms. To get back to sets of equal size, we use the following observation. If, say, R_1 is a randomly chosen subset of Y_1 of some given size r, we have

E[ρ(R_1, Y_2, ..., Y_k)] = ρ(Y_1, ..., Y_k),

where E[·] denotes the expectation with respect to the random choice of an r-element R_1 ⊂ Y_1. This preservation of density by choosing a random subset is quite intuitive, and it is not difficult to verify it by counting (Exercise 1).

For estimating the term e(Y_1 \ Z_1, Y_2, ..., Y_k), we use random subsets R_2, ..., R_k of size (1−ε)s of Y_2, ..., Y_k, respectively. Thus,

ρ(Y_1 \ Z_1, Y_2, ..., Y_k) = E[ρ(Y_1 \ Z_1, R_2, ..., R_k)].

Now for any choice of R_2, ..., R_k, we have

ρ(Y_1 \ Z_1, R_2, ..., R_k) = ((1−ε)s)^{−ε^k} μ(Y_1 \ Z_1, R_2, ..., R_k)
                            ≤ ((1−ε)s)^{−ε^k} μ(Y_1, Y_2, ..., Y_k)
                            = (1−ε)^{−ε^k} ρ(Y_1, ..., Y_k).

Therefore,

e(Y_1 \ Z_1, Y_2, ..., Y_k) ≤ (1−ε)^{1−ε^k} e(Y_1, ..., Y_k).

To estimate the term e(Z_1, Z_2, ..., Z_{i−1}, Y_i \ Z_i, Y_{i+1}, ..., Y_k), we use random subsets R_i ⊂ Y_i \ Z_i and R_{i+1} ⊂ Y_{i+1}, ..., R_k ⊂ Y_k, this time all of size εs. A similar calculation as before yields

e(Z_1, ..., Z_{i−1}, Y_i \ Z_i, Y_{i+1}, ..., Y_k) ≤ (1−ε) ε^{i−1−ε^k} e(Y_1, ..., Y_k).

(This estimate is also valid for i = 1, but it is worse than the one derived above and it would not suffice in the subsequent calculation.) From (9.2) we obtain that e(Z_1, ..., Z_k) is at least e(Y_1, ..., Y_k) multiplied by the factor

1 − (1−ε)^{1−ε^k} − (1−ε) ε^{−ε^k} Σ_{i=2}^{k} ε^{i−1}
  = 1 − (1−ε)^{1−ε^k} − ε^{1−ε^k} (1 − ε^{k−1})
  ≥ ε^{k+1} (1/ε − ln(1/ε) − 1)
  > 0,

where the middle inequality follows by an elementary (if slightly tedious) calculation, and the final one from 1/ε > 1 + ln(1/ε), valid for all 0 < ε < 1. Theorem 9.4.1 is proved.

Bibliography and remarks. The Szemerédi regularity lemma is from [Sze78], and in its full glory it goes as follows: For every ε > 0 and for every k_0, there exist K and n_0 such that every graph G on n ≥ n_0 vertices has a partition (V_0, V_1, ...
, V_k) of the vertex set into k+1 parts, k_0 ≤ k ≤ K, where |V_0| ≤ εn, |V_1| = |V_2| = ... = |V_k| = m, and all but at most εk² of the (k choose 2) pairs {V_i, V_j} are ε-regular, which means that for every A ⊂ V_i and B ⊂ V_j with |A|, |B| ≥ εm we have |ρ(A, B) − ρ(V_i, V_j)| ≤ ε. Understanding the idea of the proof is easier than understanding the statement. The regularity lemma is an extremely powerful tool in modern combinatorics. A survey of applications and variations can be found in Komlós and Simonovits [KS96].

Our presentation of Theorem 9.4.1 essentially follows Pach [Pac98], whose treatment is an adaptation of an approach of Komlós and Sós. One can formulate various hypergraph analogues of the Szemerédi regularity lemma in its full strength. For instance, for a 3-uniform hypergraph, one can define a triple V_1, V_2, V_3 of disjoint subsets of vertices to be ε-regular if |ρ(A_1, A_2, A_3) − ρ(V_1, V_2, V_3)| ≤ ε for every A_i ⊂ V_i with |A_i| ≥ ε|V_i|, and formulate a statement about a partition of the vertex set of every 3-uniform hypergraph in which almost all triples of classes are ε-regular. Such a result indeed holds, but this formulation has significant shortcomings. For example, the Szemerédi regularity lemma allows approximate counting of small subgraphs in the given graph (see Exercise 3 for a simple example), which is the key to many applications, but the notion of ε-regularity for triple systems just given does not work in this way (Exercise 4). A technically quite complicated but powerful regularity lemma for 3-uniform hypergraphs that does admit counting of small subhypergraphs was proved by Frankl and Rödl [FR01]. The first insight is that for triple systems, one should not partition only vertices but also pairs of vertices.

Let us mention a related innocent-looking problem of geometric flavor. For a point c ∈ S = {1, 2, ...
, n}^d, we define a jack with center c as the set of all points of S that differ from c in at most 1 coordinate. The problem, formulated by Székely, asks for the maximum possible cardinality of a system of jacks in S such that no two jacks share a line (i.e., every two centers differ in at least 2 coordinates) and no point is covered by d jacks. It is easily seen that no more than n^{d−1} jacks can be taken, and the problem is to prove an o(n^{d−1}) bound for every fixed d. The results of Frankl and Rödl [FR01] imply this bound for d = 4, and recently Rödl and Skokan announced a positive solution for d = 5 as well; these results are based on sophisticated hypergraph regularity lemmas. A positive answer would imply the famous theorem of Szemerédi on arithmetic progressions (see, e.g., Gowers [Gow98] for recent work and references) and would probably provide a "purely combinatorial" proof.

Exercises

1. Verify the equality E[ρ(R_1, Y_2, ..., Y_k)] = ρ(Y_1, ..., Y_k), where the expectation is with respect to a random choice of an r-element R_1 ⊂ Y_1. Also derive the other similar equalities used in the proof in the text.

2. (Density Ramsey-type result for segments)
(a) Let c > 0 be a given positive constant. Using Exercise 9.3.5(c) and the weak regularity lemma, prove that there exists β = β(c) > 0 such that whenever R and B are sets of segments in the plane with R ∪ B in general position and such that the number of pairs (r, b) with r ∈ R, b ∈ B, and r ∩ b ≠ ∅ is at least cn², then there are subsets R' ⊂ R and B' ⊂ B such that |R'| ≥ βn, |B'| ≥ βn, and each r ∈ R' intersects each b ∈ B'.
(b) Prove the analogue of (a) for noncrossing pairs. Assuming at least cn² pairs (r, b) with r ∩ b = ∅, select R' and B' of size βn such that r ∩ b = ∅ for each r ∈ R' and b ∈ B'.
These results are from Pach and Solymosi [PS01].

3.
(a) Let G = (V, E) be a graph, and let V be partitioned into classes V_1, V_2, V_3 of size m each. Suppose that there are no edges with both vertices in the same V_i, that |ρ(V_i, V_j) − 1/2| ≤ ε for all i < j, and that each pair (V_i, V_j) is ε-regular (this means that |ρ(A, B) − ρ(V_i, V_j)| ≤ ε for any A ⊂ V_i and B ⊂ V_j with |A|, |B| ≥ εm). Prove that the number of triangles in G is (1/8 + o(1))m³, where the o(1) notation refers to ε → 0 (while m is considered arbitrary but sufficiently large in terms of ε).
(b) Generalize (a) to counting the number of copies of K_4, where G has 4 classes V_1, ..., V_4 of equal size (if all the densities are about 1/2, then the number should be (2^{−6} + o(1))m⁴).

4. For every ε > 0 and for arbitrarily large m, construct a 3-uniform 4-partite hypergraph with vertex classes V_1, ..., V_4, each of size m, that contains no K_4^{(3)} (the system of all triples on 4 vertices), but where |ρ(V_i, V_j, V_k) − 1/2| ≤ ε for all i < j < k and each triple (V_i, V_j, V_k) is ε-regular. The latter condition means |ρ(A_i, A_j, A_k) − ρ(V_i, V_j, V_k)| ≤ ε for every A_i ⊂ V_i, A_j ⊂ V_j, A_k ⊂ V_k of size at least εm.

9.5 A Positive-Fraction Selection Lemma

Here we discuss a stronger version of the first selection lemma (Theorem 9.1.1). Recall that for any n-point set X ⊂ R^d, the first selection lemma provides a "heavily covered" point, that is, a point contained in at least a fixed fraction of the (n choose d+1) simplices with vertices in points of X. The theorem below shows that we can even get a large collection of simplices with a quite special structure. For example, in the plane, given n red points, n white points, and n blue points, we can select n/2 red, n/2 white, and n/2 blue points in such a way that all the red-white-blue triangles for the resulting sets have a point in common. Here is the d-dimensional generalization.

9.5.1 Theorem (Positive-fraction selection lemma).
For all natural numbers d, there exists c = c(d) > 0 with the following property. Let X_1, X_2, ..., X_{d+1} ⊂ R^d be finite sets of equal size, with X_1 ∪ X_2 ∪ ... ∪ X_{d+1} in general position. Then there is a point a ∈ R^d and subsets Z_1 ⊂ X_1, ..., Z_{d+1} ⊂ X_{d+1}, with |Z_i| ≥ c|X_i|, such that the convex hull of every transversal of (Z_1, ..., Z_{d+1}) contains a.

As was remarked above, for d = 2, one can take c = 1/2. There is an elementary and not too difficult proof (which the reader is invited to discover). In higher dimensions, the only known proof uses the weak regularity lemma for hypergraphs.

Proof. Let X = X_1 ∪ ... ∪ X_{d+1}. We may suppose that all the X_i are large (for otherwise, one-point Z_i will do). Let F_0 be the set of all "rainbow" X-simplices, i.e., of all transversals of (X_1, ..., X_{d+1}), where the transversals are formally considered as sets for the moment. The size of F_0 is, for d fixed, at least a constant fraction of the number of all (d+1)-point subsets of X (here we use the assumption that the X_i are of equal size). Therefore, by the second selection lemma (Theorem 9.2.1), there is a subset F_1 ⊂ F_0 of at least βn^{d+1} X-simplices containing a common point a, where β = β(d) > 0. (Note that we do not need the full power of the second selection lemma here, since we deal with the complete (d+1)-partite hypergraph.)

For the subsequent argument we need the common point a to lie in the interior of many of the X-simplices. One way of ensuring this would be to assume a suitable strongly general position of X and use a perturbation argument for arbitrary X. Another, perhaps simpler, way is to apply Lemma 9.1.2, which guarantees that a lies on the boundary of at most O(n^d) of the X-simplices of F_1. So we let F_2 ⊂ F_1 be the X-simplices containing a in the interior, and for a sufficiently large n we still have |F_2| ≥ β'n^{d+1}.
Next, we consider the (d+1)-partite hypergraph H with vertex set X and edge set F_2. We let ε = c(d, d+2), where c(d, m) is as in the same-type lemma, and we apply the weak regularity lemma (Theorem 9.4.1) to H. This yields sets Y_1 ⊂ X_1, ..., Y_{d+1} ⊂ X_{d+1}, whose size is at least a fixed fraction of the size of the X_i, and such that any subsets Z_1 ⊂ Y_1, ..., Z_{d+1} ⊂ Y_{d+1} of size at least ε|Y_i| induce an edge; this means that there is a rainbow X-simplex with vertices in the Z_i and containing the point a.

The argument is finished by applying the same-type lemma with the d+2 sets Y_1, Y_2, ..., Y_{d+1} and Y_{d+2} = {a}. We obtain sets Z_1 ⊂ Y_1, ..., Z_{d+1} ⊂ Y_{d+1} and Z_{d+2} = {a} with same-type transversals, and with |Z_i| ≥ ε|Y_i| for i = 1, 2, ..., d+1. (Indeed, the same-type lemma guarantees that at least one point is selected even from a 1-point set.) Now either all transversals of (Z_1, ..., Z_{d+1}) contain the point a in their convex hull or none does (use Exercise 9.3.1(d)). But the latter possibility is excluded by the choice of the Y_i (by the weak regularity lemma). The positive-fraction selection lemma is proved. □

It is amazing how many quite heavy tools are used in this proof. It would be nice to find a more direct argument.

Bibliography and remarks. The planar case of Theorem 9.5.1 was proved by Bárány, Füredi, and Lovász [BFL90] (with c(2) ≥ 1/2), and the result for arbitrary dimension is due to Pach [Pac98].

10 Transversals and Epsilon Nets

Here we are going to consider problems of the following type: We have a family F of geometric shapes satisfying certain conditions, and we would like to conclude that F can be "pierced" by not too many points, meaning that we can choose a bounded number of points such that each set of F contains at least one of them.
Such questions are sometimes called Gallai-type problems, because of the following nice problem raised by Gallai: Let F be a finite family of closed disks in the plane such that every two disks in F intersect. What is the smallest number of points needed to pierce F? For this problem, the exact answer is known: 4 points always suffice and are sometimes necessary. We will not cover this particular (quite difficult) result; rather, we consider general methods for proving that the number of piercing points can be bounded. These methods yield numerous results where no other proofs are available. On the other hand, the resulting estimates are usually quite large, and in some simpler cases (such as Gallai's problem mentioned above), specialized geometric arguments provide much better bounds.

Some of the tools introduced in this chapter are widely applicable and sometimes more significant than the particular geometric results. Such important tools include the transversal and matching numbers of set systems, their fractional versions (connected via the duality of linear programming), the Vapnik–Chervonenkis dimension and ways of estimating it, and epsilon nets.

10.1 General Preliminaries: Transversals and Matchings

Let F be a system of sets on a ground set X; both F and X may generally be infinite. A subset T ⊂ X is called a transversal of F if it intersects all the sets of F. The transversal number of F, denoted by τ(F), is the smallest possible cardinality of a transversal of F. Many combinatorial and geometric problems, some of them considered in this chapter, can be rephrased as questions about the transversal number of suitable set systems. Another important parameter of a set system F is the packing number (or matching number) of F, usually denoted by ν(F).
This is the maximum cardinality of a system of pairwise disjoint sets in F: A subsystem M ⊂ F of pairwise disjoint sets is called a packing (or a matching; this refers to graph-theoretic matching, which is a system of pairwise disjoint edges). Any transversal is at least as large as any packing, and so always ν(F) ≤ τ(F). In the reverse direction, very little can be said in general, since τ(F) can be arbitrarily large even if ν(F) = 1. As a simple geometric example, we can take the plane as the ground set X and let the sets of F be n lines in general position. Then ν(F) = 1, since every two lines intersect, but τ(F) ≥ n/2, because no point is contained in more than two of the lines.

Fractional packing and transversal numbers. Now we introduce another parameter of a set system, which always lies between ν and τ and which has proved extremely useful in arguments estimating τ or ν. First we restrict ourselves to set systems on finite ground sets. Let F be a system of subsets of a finite set X. A fractional transversal for F is a function φ: X → [0, 1] such that for each S ∈ F, we have Σ_{x∈S} φ(x) ≥ 1. The size of a fractional transversal φ is Σ_{x∈X} φ(x), and the fractional transversal number τ*(F) is the infimum of the sizes of fractional transversals. So in a fractional transversal, we can take one-third of one point, one-fifth of another, etc., but we must put total weight of at least one full point into every set.

Similarly, a fractional packing for F is a function ψ: F → [0, 1] such that for each x ∈ X, we have Σ_{S∈F: x∈S} ψ(S) ≤ 1. So sets receive weights and the total weight of sets containing any given point must not exceed 1. The size of a fractional packing ψ is Σ_{S∈F} ψ(S), and the fractional packing number ν*(F) is the supremum of the sizes of all fractional packings for F.
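These parameters can be checked by brute force on a tiny example. The following sketch (our own illustration, not from the text) uses the system of the three edges of a triangle, for which ν = 1, τ = 2, and the constant weights 1/2 certify ν* = τ* = 3/2:

```python
# nu, tau, and certificates for the fractional parameters on the
# "triangle" system of three 2-point sets on three points.
from itertools import combinations
from fractions import Fraction

X = [0, 1, 2]
F = [{0, 1}, {1, 2}, {0, 2}]

def tau(X, F):
    """Smallest transversal, by exhaustive search."""
    for k in range(len(X) + 1):
        if any(all(set(T) & S for S in F) for T in combinations(X, k)):
            return k

def nu(F):
    """Largest packing (pairwise disjoint subsystem), by exhaustive search."""
    return max(k for k in range(len(F) + 1)
               for M in combinations(F, k)
               if all(not (A & B) for A, B in combinations(M, 2)))

assert tau(X, F) == 2 and nu(F) == 1

# phi(x) = 1/2 on every point is a fractional transversal of size 3/2,
# and psi(S) = 1/2 on every set is a fractional packing of size 3/2;
# each certifies the optimality of the other.
half = Fraction(1, 2)
assert all(sum(half for x in S) >= 1 for S in F)            # transversal condition
assert all(sum(half for S in F if x in S) <= 1 for x in X)  # packing condition
assert 3 * half == Fraction(3, 2)
```

The exhaustive search is exponential, of course; it is meant only to make the definitions concrete.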
It is instructive to consider the "triangle" system of 3 sets on 3 points (each set consisting of 2 of the points), and check that ν = 1, τ = 2, and ν* = τ* = 3/2. Any packing M yields a fractional packing (by assigning weight 1 to the sets in M and 0 to others), and so ν ≤ ν*. Similarly, τ* ≤ τ. We promised one parameter but introduced two: τ* and ν*. But they happen to be the same.

10.1.1 Theorem. For every set system F on a finite ground set, we have ν*(F) = τ*(F). Moreover, the common value is a rational number, and there exist an optimal fractional transversal and an optimal fractional packing attaining only rational values.

This is not a trivial result; the proof is a nice application of the duality of linear programming. Here is the version of the linear programming duality we need.

10.1.2 Proposition (Duality of linear programming). Let A be an m×n real matrix, b ∈ R^m a (column) vector, and c ∈ R^n a (column) vector. Let P = {x ∈ R^n: x ≥ 0, Ax ≥ b} and D = {y ∈ R^m: y ≥ 0, y^T A ≤ c^T} (the inequalities between vectors should hold in every component). If both P ≠ ∅ and D ≠ ∅, then

min {c^T x: x ∈ P} = max {y^T b: y ∈ D};

in particular, both the minimum and the maximum are well-defined and attained.

This result can be quickly proved by piecing together a larger matrix from A, b, and c and applying a suitable version of the Farkas lemma (Lemma 1.2.5) to it (Exercise 6). It can also be derived directly from the separation theorem. Let us remark that there are several versions of the linear programming duality (differing, for example, in including or omitting the requirement x ≥ 0, or replacing Ax ≥ b by Ax = b, or exchanging minima and maxima), and they are easy to mix up.

Proof of Theorem 10.1.1. Set n = |X| and m = |F|, and let A be the m×n incidence matrix of the set system F: Rows correspond to sets, columns to points, and the entry corresponding to a point p and a set S is 1 if p ∈ S and 0 if p ∉ S.
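The incidence-matrix encoding can be made concrete with a small sketch of ours, reusing the triangle system; a nonnegative vector x with Ax ≥ 1 componentwise is precisely a fractional transversal:

```python
# Incidence matrix of a set system: rows = sets, columns = points.
X = [0, 1, 2]
F = [{0, 1}, {1, 2}, {0, 2}]
A = [[1 if p in S else 0 for p in X] for S in F]
assert A == [[1, 1, 0], [0, 1, 1], [1, 0, 1]]

def is_fractional_transversal(x):
    """Check x >= 0 and Ax >= 1 in every component."""
    return (all(xi >= 0 for xi in x) and
            all(sum(a * xi for a, xi in zip(row, x)) >= 1 for row in A))

assert is_fractional_transversal([0.5, 0.5, 0.5])      # size 3/2
assert not is_fractional_transversal([0.5, 0.5, 0.0])  # the set {1,2} gets only 1/2
```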
It is easy to check that ν*(F) and τ*(F) are solutions to the following optimization problems:

τ*(F) = min {1_n^T x: x ≥ 0, Ax ≥ 1_m},
ν*(F) = max {y^T 1_m: y ≥ 0, y^T A ≤ 1_n^T},

where 1_n ∈ R^n denotes the (column) vector of all 1's of length n. Indeed, the vectors x ∈ R^n satisfying x ≥ 0 and Ax ≥ 1_m correspond precisely to the fractional transversals of F, and similarly, the y ∈ R^m with y ≥ 0 and y^T A ≤ 1_n^T correspond to the fractional packings. There is at least one fractional transversal, e.g., x = 1_n, and at least one fractional packing, namely, y = 0, and so Proposition 10.1.2 applies and shows that ν*(F) = τ*(F). At the same time, τ*(F) is the minimum of the linear function x ↦ 1_n^T x over a polyhedron, and such a minimum, since it is finite, is attained at a vertex. The inequalities describing the polyhedron have rational coefficients, and so all vertices are rational points. □

Remark about infinite set systems. Set systems encountered in geometry are usually infinite. In almost all the considerations concerning transversals, the problem can be reduced to a problem about finite sets, usually by a simple ad hoc argument. Nevertheless, we include here a few remarks that can aid a simple consistent treatment of the infinite case. However, they will not be used in the sequel in any essential way.

There is no problem with the definitions of ν and τ in the infinite case, but one has to be a little careful with the definition of ν* and τ* to preserve the equality ν* = τ*. Everything is still fine if we have finitely many sets on an infinite ground set: The infinite ground set can be factored into finitely many equivalence classes, where two points are equivalent if they belong to the same subcollection of the sets. One can choose one point in each equivalence class and work with a finite system. For infinitely many sets, some sort of compactness condition is certainly needed. For example, the system of intervals {[i, ∞): i = 1, 2, ...
} has, according to any reasonable definition, ν* = 1 but τ* = ∞. If we let F be a family of closed sets in a compact metric space X (compact Hausdorff space actually suffices), we can define ν*(F) as sup_ψ Σ_{S∈F} ψ(S), where the supremum is over all ψ: F → [0, 1] attaining only finitely many nonzero values and such that Σ_{S∈F: x∈S} ψ(S) ≤ 1 for each x ∈ X.

For the definition of τ*, the first attempt might be to consider all functions φ: X → [0, 1] attaining only finitely many nonzero values and summing up to at least 1 over every set. But this does not work very well: For example, if we let F be the system of all compact subsets of [0, 1] of Lebesgue measure 1/2, say, then ν* ≤ 2 but τ* would be infinite, since any finite subset is avoided by some member of F. It is better to define a fractional transversal of F as a Borel measure μ on X such that μ(S) ≥ 1 for all S ∈ F, and τ*(F) as the infimum of μ(X) over all such μ. With this definition, the validity of the first part of Theorem 10.1.1 is preserved; i.e., ν*(F) = τ*(F) for all systems F of closed sets in a compact X. The proof uses a little functional analysis, and we omit it; it can be found in [KM97a]. The rationality of ν* and τ* no longer holds in the infinite case.

Bibliography and remarks. Gallai's problem about pairwise intersecting disks mentioned at the beginning of this chapter was first solved by Danzer in 1956, but he hasn't published the solution. For another solution and a historical account see Danzer [Dan86].

Attempting to summarize the contemporary knowledge about the transversal number and the packing number in combinatorics would mean taking a much larger bite than can be swallowed, so we restrict ourselves to a few sketchy remarks. An excellent source for many combinatorial results is Lovász's problem collection [Lov93].
A quite old result relating ν and τ is the famous König's edge-covering theorem from 1912, asserting that ν(F) = τ(F) if F is the system of edges of a bipartite graph (this is also easily seen to be equivalent to Hall's marriage theorem, proved by Frobenius in 1917; see Lovász and Plummer [LP86] for the history). On the other hand, an appropriate generalization to systems of triples, namely, τ ≤ 2ν for any tripartite 3-uniform hypergraph, is a celebrated recent result of Aharoni [Aha01] (based on Aharoni and Haxell [AH00]), while the generalization τ ≤ (k−1)ν for k-partite k-uniform hypergraphs, known as Ryser's conjecture, remains unproved for k ≥ 4.

While computing ν or τ for a given F is well known to be NP-hard, τ* can be computed in time polynomial in |X| + |F| by linear programming (this is another reason for the usefulness of the fractional parameter). The problem of approximating τ is practically very important and has received considerable attention. More often it is considered in the dual form, as the set cover problem: Given F with ∪F = X, find the smallest subcollection F' ⊂ F that still covers X. The size of such F' is the transversal number of the set system dual to (X, F), where each set S ∈ F is assigned a point y_S and each point x ∈ X gives rise to the set {y_S: x ∈ S}.

For the set cover problem, it was shown by Chvátal and independently by Lovász that the greedy algorithm (always take a set covering the maximum possible number of yet uncovered points) achieves a solution whose size is no more than (1 + ln |X|) times larger than the optimal one.¹ Lovász actually observed that the proof implies, for any finite set system F, τ(F) ≤ τ*(F) · (1 + ln Δ(F)), where Δ(F) is the maximum degree of F, i.e., the maximum number of sets with a common point (Exercise 4). The weaker bound with Δ(F) replaced by |F| is easy to prove by a probabilistic argument (Exercise 3).
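The greedy algorithm just described admits a very compact implementation; here is a sketch under our own naming (the (1 + ln |X|) guarantee is the Chvátal–Lovász bound quoted above, not something the code itself verifies):

```python
# Greedy set cover: repeatedly take a set covering the maximum number of
# yet-uncovered points, until everything is covered.
def greedy_cover(X, F):
    uncovered = set(X)
    cover = []
    while uncovered:
        best = max(F, key=lambda S: len(S & uncovered))
        if not best & uncovered:
            raise ValueError("F does not cover X")  # remaining points uncoverable
        cover.append(best)
        uncovered -= best
    return cover

X = set(range(6))
F = [{0, 1, 2, 3}, {0, 4}, {1, 4}, {2, 5}, {3, 5}, {4, 5}]
C = greedy_cover(X, F)
assert set().union(*C) == X and len(C) == 2
```

On this instance greedy happens to be optimal; in general it can be off by a logarithmic factor, which by the hardness result in the footnote is essentially the best a polynomial-time algorithm can do.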
It shows that in order to have a large gap between τ* and τ, the set system must have very many sets.

¹ As a part of a very exciting development in complexity theory, it was recently proved that no polynomial-time algorithm can do better in general unless P = NP; see, e.g., [Hoc96] for proofs and references.

Exercises

1. (a) Find examples of set systems with τ* bounded by a constant and τ arbitrarily large.
(b) Find examples of set systems with ν bounded by a constant and ν* arbitrarily large.

2. Let F be a system of finitely many closed intervals on the real line. Prove that ν(F) = τ(F).

3. Prove that τ(F) ≤ τ*(F) · ln(|F|+1) for all (finite) set systems F. Choose a transversal as a random sample.

4. (Analysis of the greedy algorithm for transversal) Let F be a finite set system. We choose points x_1, x_2, ..., x_t of a transversal one by one: x_i is taken as a point contained in the maximum possible number of uncovered sets (i.e., sets of F containing none of x_1, ..., x_{i−1}).
(a) Prove that the size t of the resulting transversal satisfies

t ≤ ν_d(F)/d + Σ_{k=1}^{d−1} ν_k(F) / (k(k+1)),

where d = Δ(F) is the maximum degree of F and ν_k(F) is the maximum size of a simple k-packing in F. A subsystem M ⊂ F is a simple k-packing if Δ(M) ≤ k (so ν_1(F) = ν(F)).
(b) Conclude that τ(F) ≤ t ≤ τ*(F) · Σ_{k=1}^{d} 1/k.

5. König's edge-covering theorem asserts that if E is the set of edges of a bipartite graph, then ν(E) = τ(E). Hall's marriage theorem states that if G is a bipartite graph with color classes A and B such that every subset S ⊂ A has at least |S| neighbors in B, then there is a matching in G containing all vertices of A.
(a) Derive König's edge-covering theorem from Hall's marriage theorem.
(b) Derive Hall's marriage theorem from König's edge-covering theorem.

6. Let A, b, c, P, and D be as in Proposition 10.1.2.
(a) Check that c^T x ≥ y^T b for all x ∈ P and all y ∈ D.
(b) Prove that if P ≠ ∅ and D ≠ ∅, then the system Ax ≥ b, y^T A ≤ c^T, c^T x ≤ y^T b has a nonnegative solution x, y (which implies Proposition 10.1.2). Apply the version of the Farkas lemma as in Exercise 1.2.7(b).

10.2 Epsilon Nets and VC-Dimension

Large sets should be easier to hit by a transversal than small ones. The notion of ε-net and the related theory elaborate on this intuition. We begin with a special case, where the ground set is finite and the size of a set is simply measured as the cardinality.

10.2.1 Definition (Epsilon net, a special case). Let (X, F) be a set system with X finite and let ε ∈ [0, 1] be a real number. A set N ⊂ X (not necessarily one of the sets of F) is called an ε-net for (X, F) if N ∩ S ≠ ∅ for all S ∈ F with |S| ≥ ε|X|.

So an ε-net is a transversal for all sets larger than ε|X|. Sometimes it is convenient to write 1/r instead of ε, with r ≥ 1 a real parameter. A beautiful result (Theorem 10.2.4 below) describes a simple combinatorial condition on the structure of F that guarantees the existence of (1/r)-nets of size only O(r log r) for all r ≥ 2.

If we want to deal with infinite sets, measuring the size as the number of points is no longer appropriate. For example, a "large" subset of the unit square could naturally be defined as one with large Lebesgue measure. So in general we consider an arbitrary probability measure μ on the ground set. In concrete situations we will most often encounter μ concentrated on finitely many points. This means that there is a finite set Y ⊂ X and a positive function w: Y → (0, 1] with Σ_{y∈Y} w(y) = 1, and μ is given by μ(A) = Σ_{y∈A∩Y} w(y). In particular, if the weights of all points y ∈ Y are the same, i.e., 1/|Y|, we speak of the uniform measure on Y. Another common example of μ is a suitable multiple of the Lebesgue measure restricted to some geometric figure.

10.2.2 Definition (Epsilon net).
Let X be a set, let μ be a probability measure on X, let F be a system of μ-measurable subsets of X, and let ε ∈ [0, 1] be a real number. A subset N ⊆ X is called an ε-net for (X, F) with respect to μ if N ∩ S ≠ ∅ for all S ∈ F with μ(S) ≥ ε.

VC-dimension. In order to describe the result promised above, about the existence of small ε-nets, we need to introduce a parameter of a set system called the Vapnik–Chervonenkis dimension, or VC-dimension for short. Its applications are much wider than the existence of ε-nets.

Let F be a set system on X and let Y ⊆ X. We define the restriction of F to Y (also called the trace of F on Y) as

F|_Y = {S ∩ Y: S ∈ F}.

It may happen that several distinct sets in F have the same intersection with Y; in such a case, the intersection is still present only once in F|_Y.

10.2.3 Definition (VC-dimension). Let F be a set system on a set X. Let us say that a subset A ⊆ X is shattered by F if each of the subsets of A can be obtained as the intersection of some S ∈ F with A, i.e., if F|_A = 2^A. We define the VC-dimension of F, denoted by dim(F), as the supremum of the sizes of all finite shattered subsets of X. If arbitrarily large subsets can be shattered, the VC-dimension is ∞.

Let us consider two examples. First, let H be the system of all closed half-planes in the plane. We claim that dim(H) = 3. If we have 3 points in general position, each of their subsets can be cut off by a half-plane, and so such a 3-point set is shattered. Next, let us check that no 4-point set can be shattered. Up to possible degeneracies, there are only two essentially different positions of 4 points in the plane: in the first, the two black points are opposite vertices of a convex quadrilateral and the other two points are white; in the second, three black points form a triangle containing a white point. In both these cases, if the black points are contained in a half-plane, then a white point also lies in that half-plane, and so the 4 points are not shattered.
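Shattering by half-planes can be checked by brute force on small point sets. The sketch below is not from the text and the function names are ours; it enumerates the subsets cut off by closed half-planes, using the fact that every such subset is a suffix of the points sorted along some direction, and that it suffices to try directions perpendicular to lines through pairs of points (slightly perturbed to break ties). It confirms the claims above: a 3-point set in general position is shattered, while neither 4-point configuration is.

```python
import itertools
import math

def halfplane_cuts(points):
    """All subsets of `points` obtainable as the intersection with a closed half-plane."""
    pts = list(points)
    dirs = [(1.0, 0.0), (0.0, 1.0)]
    for (x1, y1), (x2, y2) in itertools.combinations(pts, 2):
        base = math.atan2(y2 - y1, x2 - x1) + math.pi / 2  # normal to the line through the pair
        for delta in (-1e-3, 0.0, 1e-3):                   # small perturbations break ties
            for flip in (0.0, math.pi):                    # both orientations of the normal
                a = base + delta + flip
                dirs.append((math.cos(a), math.sin(a)))
    cuts = set()
    for nx, ny in dirs:
        order = sorted(pts, key=lambda p: nx * p[0] + ny * p[1])
        for k in range(len(pts) + 1):
            cuts.add(frozenset(order[k:]))  # the points with the largest values of n.x
    return cuts

def shattered(points):
    return len(halfplane_cuts(points)) == 2 ** len(points)

assert shattered([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])                  # 3 points in general position
assert not shattered([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])  # 4 points in convex position
assert not shattered([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0), (0.5, 0.4)])  # triangle with interior point
```

Since every suffix along some direction genuinely is a half-plane cut, the enumeration never overcounts; for these tiny examples the perturbed pair-perpendicular directions also catch every achievable cut.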
This is a rather ad hoc argument, and later we will introduce tools for bounding the VC-dimension in geometric situations. We will see that bounded VC-dimension is rather common for families of simple geometric objects in Euclidean spaces.

A rather different example is the system K₂ of all convex sets in the plane. Here the VC-dimension is infinite, since any finite convex independent set A is shattered: Each B ⊆ A can be expressed as the intersection of A with a convex set, namely, B = A ∩ conv(B).

We can now formulate the promised result about small ε-nets.

10.2.4 Theorem (Epsilon net theorem). If X is a set with a probability measure μ, F is a system of μ-measurable subsets of X with dim(F) ≤ d, d ≥ 2, and r ≥ 2 is a parameter, then there exists a (1/r)-net for (X, F) with respect to μ of size at most Cdr ln r, where C is an absolute constant.

The proof below gives the estimate C ≤ 20, but a more accurate calculation shows that C can be taken arbitrarily close to 1 for sufficiently large r. More precisely, for any d ≥ 2 there exists an r₀ > 1 such that for all r ≥ r₀, each set system of VC-dimension d admits a (1/r)-net of size at most dr ln r. Moreover, this bound is tight in the worst case up to smaller-order terms.

For the proof (and also later on) we need a fundamental lemma bounding the number of distinct sets in a system of given VC-dimension. First we define the shatter function of a set system F by

π_F(m) = max_{Y⊆X, |Y|=m} |F|_Y|.

In words, π_F(m) is the maximum possible number of distinct intersections of the sets of F with an m-point subset of X.

10.2.5 Lemma (Shatter function lemma). For any set system F of VC-dimension at most d, we have π_F(m) ≤ Φ_d(m) for all m, where

Φ_d(m) = \binom{m}{0} + \binom{m}{1} + \cdots + \binom{m}{d}.
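For a concrete instance of the lemma, consider closed intervals on the real line: their traces on m collinear points are exactly the contiguous runs of points (plus the empty set), intervals have VC-dimension 2, and the bound Φ₂(m) is attained exactly. A small self-contained check (the function names are ours, not the book's):

```python
from math import comb

def phi(d, m):
    # Phi_d(m) = C(m,0) + C(m,1) + ... + C(m,d), the bound in the shatter function lemma
    return sum(comb(m, i) for i in range(d + 1))

def interval_traces(m):
    """Distinct intersections of closed intervals with the points 0, 1, ..., m-1."""
    traces = {frozenset()}                          # an interval can miss all the points
    for a in range(m):
        for b in range(a, m):
            traces.add(frozenset(range(a, b + 1)))  # a contiguous run of points
    return traces

# Intervals have VC-dimension 2 (the two outer points of any 3 collinear points
# cannot be cut off without the middle one), and the lemma is tight for them:
for m in range(1, 12):
    assert len(interval_traces(m)) == phi(2, m)
```

Indeed, the number of nonempty runs is m(m+1)/2, and adding the empty trace gives exactly 1 + m + \binom{m}{2} = Φ₂(m).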
Thus, the shatter function of any set system is either 2^m for all m (the case of infinite VC-dimension) or it is bounded by a fixed polynomial. For d fixed and m → ∞, Φ_d(m) can be simply estimated by O(m^d). For more precise calculations, where we are interested in the dependence on d, we can use the estimate Φ_d(m) ≤ (em/d)^d, where e is the basis of natural logarithms. This is valid for all m ≥ d ≥ 1.

Proof of Lemma 10.2.5. Since the VC-dimension does not increase by passing to a subsystem, it suffices to show that any set system of VC-dimension at most d on an n-point set has no more than Φ_d(n) sets. We proceed by induction on d, and for a fixed d we use induction on n.

Consider a set system (X, F) with |X| = n and dim(F) = d, and fix some x ∈ X. In the induction step, we would like to remove x and pass to the set system F₁ = F|_{X\{x}} on n−1 points. This F₁ has VC-dimension at most d, and hence |F₁| ≤ Φ_d(n−1) by the inductive hypothesis.

How many more sets can F have compared to F₁? The only way the number of sets decreases by removing x is when two sets S, S′ ∈ F give rise to the same set in F₁, which means that S′ = S ∪ {x}, x ∉ S, or the other way round. This suggests that we define an auxiliary set system F₂ consisting of all sets in F₁ that correspond to such pairs S, S′ ∈ F:

F₂ = {S ∈ F: x ∉ S, S ∪ {x} ∈ F}.

By the above discussion, we have |F| = |F₁| + |F₂|. Crucially, we observe that dim(F₂) ≤ d−1, since if A ⊆ X \ {x} is shattered by F₂, then A ∪ {x} is shattered by F. Therefore, |F₂| ≤ Φ_{d−1}(n−1). The resulting recurrence has already been solved in the first proof of Proposition 6.1.1. □

The rest of the proof of the epsilon net theorem is a clever probabilistic argument; one might be tempted to believe that it works by some magic. First we need a technical lemma concerning the binomial distribution.

10.2.6 Lemma.
Let X = X₁ + X₂ + ··· + X_n, where the X_i are independent random variables, each X_i attaining the value 1 with probability p and the value 0 with probability 1−p. Then

Prob[X ≥ np/2] ≥ 1/2,

provided that np ≥ 8.

Proof. This is a routine consequence of Chernoff-type tail estimates for the binomial distribution, and in fact, considerably stronger estimates hold. The simple result we need can be quickly derived from Chebyshev's inequality for X, stating that Prob[|X − E[X]| ≥ t] ≤ Var[X]/t², t > 0. Here E[X] = np and Var[X] = Σ_{i=1}^{n} Var[X_i] ≤ np. So

Prob[X < np/2] ≤ Prob[|X − E[X]| ≥ np/2] ≤ np/(np/2)² = 4/(np) ≤ 1/2. □

Proof of the epsilon net theorem. Let us put s = Cdr ln r (assuming without harm that it is an integer), and let N be a random sample picked by s independent random draws, where each element is drawn from X according to the probability distribution μ. (So the same element can be drawn several times; this does not really matter much, and this way of random sampling is chosen to make the calculations simpler.) The goal is to show that N is a (1/r)-net with positive probability.

To simplify formulations, let us assume that all S ∈ F satisfy μ(S) ≥ 1/r; this is no loss of generality, since the smaller sets do not play any role. The probability that the random sample N misses any given set S ∈ F is at most (1 − 1/r)^s ≤ e^{−s/r}, and so if s were at least r ln(|F|+1), say, the conclusion would follow immediately. But r is typically much smaller than |F| (it can be a constant, say), and so we need to do something more sophisticated.

Let E₀ be the event that the random sample N fails to be a (1/r)-net, i.e., misses some S ∈ F. We bound Prob[E₀] from above using the following thought experiment. By s more independent random draws we pick another random sample M.² We put k = s/(2r), again assuming that it is an integer, and we let E₁ be the following event: There exists an S ∈ F with N ∩ S = ∅ and |M ∩ S| ≥ k.
² This double sampling resembles the proof of Proposition 6.5.2, and indeed these proofs have a lot in common, although they work in different settings.

Here an explanation concerning repeated elements is needed. Formally, we regard N and M as sequences of elements of X, with possible repetitions, so N = (x₁, x₂, ..., x_s), M = (y₁, y₂, ..., y_s). The notation |M ∩ S| then really means |{i ∈ {1, 2, ..., s}: y_i ∈ S}|, and so an element repeated in M and lying in S is counted the appropriate number of times.

Clearly, Prob[E₁] ≤ Prob[E₀], since E₁ requires E₀ plus something more. We are going to show that Prob[E₁] ≥ (1/2)·Prob[E₀]. Let us investigate the conditional probability Prob[E₁ | N], that is, the probability of E₁ when N is fixed and M is random. If N is a (1/r)-net, then E₁ cannot occur, and Prob[E₀ | N] = Prob[E₁ | N] = 0. So suppose that there exists an S ∈ F with N ∩ S = ∅. There may be many such S, but let us fix one of them and denote it by S_N. We have Prob[E₁ | N] ≥ Prob[|M ∩ S_N| ≥ k]. The quantity |M ∩ S_N| behaves like the random variable X in Lemma 10.2.6 with n = s and p = 1/r, and so Prob[|M ∩ S_N| ≥ k] ≥ 1/2. Hence Prob[E₀ | N] ≤ 2·Prob[E₁ | N] for all N, and thus Prob[E₀] ≤ 2·Prob[E₁].

Next, we are going to bound Prob[E₁] differently. Instead of choosing N and M at random directly as above, we first make a sequence A = (z₁, z₂, ..., z_{2s}) of 2s independent random draws from X. Then, in the second stage, we randomly choose s positions in A and put the elements at these positions into N, and the remaining elements into M (so there are \binom{2s}{s} possibilities for A fixed). The resulting distribution of N and M is the same as above. We now prove that for every fixed A, the conditional probability Prob[E₁ | A] is small. This implies that Prob[E₁] is small, and therefore Prob[E₀] is small as well.

So let A be fixed.
First let S ∈ F be a fixed set and consider the conditional probability p_S = Prob[N ∩ S = ∅, |M ∩ S| ≥ k | A]. If |A ∩ S| < k, then p_S = 0. Otherwise, we bound p_S ≤ Prob[N ∩ S = ∅ | A]. The latter is the probability that a random sample of s positions out of 2s in A avoids the at least k positions occupied by elements of S. This is at most

\binom{2s−k}{s} / \binom{2s}{s} ≤ (1 − k/(2s))^s ≤ e^{−(k/2s)s} = e^{−k/2} = e^{−(Cd ln r)/4} = r^{−Cd/4}.

This was an estimate of p_S for a fixed S ∈ F. Now, finally, we use the assumption about the VC-dimension of F, via the shatter function lemma: The sets of F have at most Φ_d(2s) distinct intersections with A. Since the event "N ∩ S = ∅ and |M ∩ S| ≥ k" depends only on A ∩ S, it suffices to consider at most Φ_d(2s) distinct sets S, and so for every fixed A,

Prob[E₁ | A] ≤ Φ_d(2s) · r^{−Cd/4} ≤ (2es/d)^d · r^{−Cd/4} = (2eCr ln r · r^{−C/4})^d < 1/2

if d, r ≥ 2 and C is sufficiently large. So Prob[E₀] ≤ 2·Prob[E₁] < 1, which proves Theorem 10.2.4. □

The epsilon net theorem implies that for set systems of small VC-dimension, the gap between the fractional transversal number and the transversal number cannot be too large.

10.2.7 Corollary. Let F be a finite set system on a ground set X with dim(F) ≤ d. Then we have

τ(F) ≤ Cd·τ*(F)·ln τ*(F),

where C is as in the epsilon net theorem.

Proof. Let r = τ*(F). Since F is finite, we may assume that an optimal fractional transversal φ: X → [0, 1] is concentrated on a finite set Y. This φ, after rescaling, defines a probability measure μ on X, by letting μ({y}) = φ(y)/r, y ∈ Y. Each S ∈ F has μ(S) ≥ 1/r by the definition of fractional transversal, and so a (1/r)-net for (X, F) with respect to μ is a transversal. By the epsilon net theorem, there exists a transversal of size at most Cdr ln r. □

We mention a concrete application of the corollary in the next section, where we collect examples of set systems of bounded VC-dimension.

Bibliography and remarks. The notion of VC-dimension originated in statistics. It was introduced by Vapnik and Chervonenkis [VC71].
Under different names, it has also appeared in other papers (Sauer [Sau72] and Shelah [She72]), but the work [VC71] was probably the most influential for subsequent developments. The name VC-dimension and some other, by now more or less standard, terminology were introduced by Haussler and Welzl [HW87]. VC-dimension and the related theory play an important role in several mathematical fields, such as statistics (the theory of empirical processes), computational learning theory, computational geometry, discrete geometry, combinatorics of hypergraphs, and discrepancy theory.

The shatter function lemma was independently discovered in the three already mentioned papers [VC71], [Sau72], [She72]. The shatter function, together with the dual shatter function (defined as the shatter function of the dual set system), was introduced and applied by Welzl [Wel88]. Implicitly, these notions were used much earlier, and they appear in the literature under various names, such as growth functions.

The notion of ε-net and the epsilon net theorem (with X finite and μ uniform) are due to Haussler and Welzl [HW87]. Their proof is essentially the one shown in the text, and it closely follows an earlier proof by Vapnik and Chervonenkis [VC71] concerning the related notion of ε-approximations. In the same setting as in the definition of ε-nets, a set A ⊆ X is an ε-approximation for (X, F) with respect to μ if for all S ∈ F,

|μ(S) − |A ∩ S|/|A|| ≤ ε.

So while an ε-net intersects each large set at least once, an ε-approximation provides a "proportional representation" up to the error of ε. Vapnik and Chervonenkis [VC71] proved the existence of (1/r)-approximations of size O(dr² log r) for all set systems of VC-dimension d.

Komlós, Pach, and Woeginger [KPW92] improved the dependence on d in the Haussler–Welzl bound on the size of ε-nets.
The improvement is achieved by choosing the second sample M of size somewhat larger than s and doing the calculations more carefully. They also proved an almost matching lower bound using suitable random set systems. The proofs can be found in [PA95] as well.

The proof in the Vapnik–Chervonenkis style, while short and clever, does not seem to convey very well the reasons for the existence of small ε-nets. Somewhat longer but more intuitive proofs have been found in the investigation of deterministic algorithms for constructing ε-approximations and ε-nets; one such proof is given in [Mat99a], for instance.

Exercises

1. Show that for any integer d there exists a convex set C in the plane such that the family of all isometric copies of C has VC-dimension at least d.

2. Show that the shatter function lemma is tight. That is, for all d and n construct a system of VC-dimension d on n points with Φ_d(n) sets.

10.3 Bounding the VC-Dimension and Applications

The VC-dimension can be determined without great difficulty in several simple cases, such as for half-spaces or balls in R^d, but for only slightly more complicated families its computation becomes challenging. On the other hand, a few simple steps explained below show that the VC-dimension is bounded for any family whose sets can be defined by a formula consisting of polynomial equations and inequalities combined by Boolean connectives (conjunctions, disjunctions, etc.) and involving a bounded number of real parameters. This includes families like all ellipsoids in R^d, all boxes in R^d, arbitrary intersections of pairs of circular disks in the plane, and so on. On the other hand, arbitrary convex polygons are not covered (since a general convex polygon cannot be described by a bounded number of real parameters), and indeed, this family has infinite VC-dimension.

We begin by determining the VC-dimension for half-spaces.

10.3.1 Lemma.
The VC-dimension of the system of all (closed) half-spaces in R^d equals d+1.

Proof. Obviously, any set of d+1 affinely independent points can be shattered. On the other hand, no d+2 points can be shattered by Radon's lemma. □

Next, we turn to the family P_{d,D} of all sets in R^d definable by a single polynomial inequality of degree at most D.

10.3.2 Proposition. Let R[x₁, x₂, ..., x_d]_{≤D} denote the set of all real polynomials in the variables x₁, x₂, ..., x_d of degree at most D, and let

P_{d,D} = {{x ∈ R^d: p(x) ≥ 0}: p ∈ R[x₁, x₂, ..., x_d]_{≤D}}.

Then dim(P_{d,D}) ≤ \binom{d+D}{d}.

Proof. The following simple but powerful trick is known as the Veronese mapping in algebraic geometry (or as linearization; it is also related to the reduction of Voronoi diagrams to convex polytopes in Section 5.7). Let M be the set of all possible nonconstant monomials of degree at most D in x₁, ..., x_d. For example, for D = d = 2, we have M = {x₁, x₂, x₁x₂, x₁², x₂²}. Let m = |M| and let the coordinates in R^m be indexed by the monomials in M. Define the map φ: R^d → R^m by φ(x)_μ = μ(x), where the monomial μ serves as a formal symbol (index) on the left-hand side, while on the right-hand side we have the number obtained by evaluating μ at the point x ∈ R^d. For example, for d = D = 2, the map is

φ(x₁, x₂) = (x₁, x₂, x₁x₂, x₁², x₂²).

Each polynomial p ∈ R[x₁, ..., x_d]_{≤D}, with constant term p₀ and coefficient p_μ at the monomial μ, determines the half-space h_p = {y ∈ R^m: p₀ + Σ_{μ∈M} p_μ y_μ ≥ 0}, and the set {x ∈ R^d: p(x) ≥ 0} is exactly the preimage of h_p under φ. For example, if p(x₁, x₂) = 7 + 3x₂ − x₁x₂ + x₁² ∈ P_{2,2}, the corresponding half-space is h_p = {y ∈ R⁵: 7 + 3y₂ − y₃ + y₄ ≥ 0}.

Now let A ⊆ R^d be a finite set shattered by P_{d,D}; so for each B ⊆ A there is a polynomial p with B = A ∩ {x: p(x) ≥ 0}. Then we get h_p ∩ φ(A) = φ(B). Since, finally, φ is injective, we obtain a set of size |A| in R^m shattered by half-spaces. By Lemma 10.3.1, we have dim(P_{d,D}) ≤ |M| + 1 = \binom{D+d}{d}. □

Geometrically, the Veronese map embeds R^d into R^m as a curved manifold in such a way that any subset of R^d definable by a single polynomial inequality of degree at most D can be cut off by a half-space in R^m. Except for a few simple cases, this is hard to visualize, but the formulas work in a really simple way.
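The Veronese map is easy to experiment with. The sketch below is our own illustration (following the d = D = 2 example in the text): it maps plane points into R⁵ and checks that the set {p ≥ 0} for p(x₁, x₂) = 7 + 3x₂ − x₁x₂ + x₁² is exactly the preimage of the half-space h_p.

```python
def veronese(x1, x2):
    # coordinates indexed by the monomials x1, x2, x1*x2, x1**2, x2**2
    return (x1, x2, x1 * x2, x1 ** 2, x2 ** 2)

def p(x1, x2):
    # the example polynomial from the text
    return 7 + 3 * x2 - x1 * x2 + x1 ** 2

def in_h_p(y):
    # the corresponding half-space in R^5: 7 + 3*y2 - y3 + y4 >= 0
    return 7 + 3 * y[1] - y[2] + y[3] >= 0

# x satisfies p(x) >= 0 exactly when veronese(x) lies in the half-space h_p
for x1 in range(-5, 6):
    for x2 in range(-5, 6):
        assert (p(x1, x2) >= 0) == in_h_p(veronese(x1, x2))
```

The check succeeds identically because the half-space inequality, evaluated at φ(x), is term-by-term the polynomial p(x); nothing about the geometry of the curved embedding needs to be visualized.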
By Proposition 10.3.2, any subfamily of some P_{d,D} has bounded VC-dimension; this applies, e.g., to balls in R^d (D = 2) and ellipsoids in R^d (D = 2 as well). For concrete families, the bound from Proposition 10.3.2 is often very weak. First, if we deal only with special polynomials involving fewer than \binom{D+d}{d} monomials, then we can use an embedding into R^m with a smaller m. We also do not have to use only coordinates corresponding to monomials in the embedding. For example, for the family of all balls in R^d, a suitable embedding is φ: R^d → R^{d+1} given by

(x₁, ..., x_d) ↦ (x₁, x₂, ..., x_d, x₁² + x₂² + ··· + x_d²).

It is closely related to the "lifting" transforming Voronoi diagrams in R^d to convex polytopes in R^{d+1} discussed in Section 5.7. Estimates for the VC-dimension can also be obtained from Theorem 6.2.1 about the number of sign patterns of polynomials or from similar results.

Combinations of polynomial inequalities. Families like all rectangular boxes in R^d or lunes (differences of two disks in the plane) can be handled using the following result.

10.3.3 Proposition. Let F(X₁, X₂, ..., X_k) be a fixed set-theoretic expression (using the operations of union, intersection, and difference) with variables X₁, ..., X_k standing for sets; for instance, F(X₁, X₂, X₃) = (X₁ ∪ X₂) \ X₃. Let S be a set system on a ground set X with dim(S) = d < ∞, and let T = {F(S₁, ..., S_k): S₁, ..., S_k ∈ S}. Then dim(T) = O(kd ln k).

Proof. The trick is to look at the shatter functions. Let A ⊆ X be an m-point set. It is easy to verify by induction on the structure of F that for any S₁, S₂, ..., S_k, we have

F(S₁, ..., S_k) ∩ A = F(S₁ ∩ A, ..., S_k ∩ A).

In particular, F(S₁, ..., S_k) ∩ A depends only on the intersections of the S_i with A. Therefore, π_T(m) ≤ π_S(m)^k. By the shatter function lemma, we have π_S(m) ≤ Φ_d(m). If A is shattered by T, then π_T(m) = 2^m, and from this we have the inequality 2^m ≤ Φ_d(m)^k.
Calculation using the estimate Φ_d(m) ≤ (em/d)^d leads to the claimed bound. □

Propositions 10.3.3 and 10.3.2 together show that families of geometric shapes definable by formulas of bounded size involving polynomial equations and inequalities have bounded VC-dimension. (In the terminology introduced in Section 7.7, families of semialgebraic sets of bounded description complexity have bounded VC-dimension.) In the subsequent example we will encounter a family of quite different nature with bounded VC-dimension. First we present a general observation.

VC-dimension of the dual set system. Let (X, F) be a set system. The dual set system to (X, F) is defined as follows: The ground set is Y = {y_S: S ∈ F}, where the y_S are pairwise distinct points, and for each x ∈ X we have the set {y_S: S ∈ F, x ∈ S} (the same set may be obtained for several different x, but this does not matter for the VC-dimension).

10.3.4 Lemma. Let (X, F) be a set system and let (Y, G) be the dual set system. Then dim(G) < 2^{dim(F)+1}.

Proof. We show that if dim(G) ≥ 2^d, then dim(F) ≥ d. Let A be the incidence matrix of (X, F), with columns corresponding to points of X and rows corresponding to sets of F. Then the transposed matrix A^T is the incidence matrix of (Y, G). If Y contains a shattered set of size 2^d, then A has a 2^d × 2^{2^d} submatrix M with all the possible 0/1 vectors of length 2^d as columns. We claim that M contains as a submatrix the 2^d × d matrix M₁ with all possible 0/1 vectors of length d as rows. This is simply because the d columns of M₁ are pairwise distinct and they all occur as columns of M. This M₁ corresponds to a shattered subset of size d in (X, F). (For example, for d = 2, M is the 4 × 16 matrix whose columns are all possible 0/1 vectors of length 4, and a suitable M₁ is formed by four of its columns whose rows run through all four 0/1 vectors of length 2.) □

An art gallery problem.
An art gallery, for the purposes of this section, is a compact set X in the plane, such as the one drawn in the picture opening this section: the set X is the lightly shaded area, while the black regions are walls that are not part of X. We want to choose a small set G ⊆ X of guards that together can see all points of X, where a point x ∈ X sees a point y ∈ X if the segment xy is fully contained in X. The visibility region V(x) of a point x ∈ X is the set of all points y ∈ X seen by x.

It is easy to construct galleries that require arbitrarily many guards; it suffices to include many small niches so that each of them needs an individual guard. To forbid this cheap way of making a gallery difficult to guard, we consider only galleries where each point can be seen from a reasonably large part of the gallery. That is, we suppose that the gallery X has Lebesgue measure 1 and that μ(V(x)) ≥ ε for every x ∈ X, where ε > 0 is a parameter (say 1/10) and μ is the Lebesgue measure restricted to X. Can every such gallery be guarded by a number of guards that depends only on ε? The answer to this question is still no, although an example is not entirely easy to construct. The problem is with galleries with many "holes," i.e., many connected components of the complement (corresponding to pillars in a real-world gallery, say). But if we forbid holes, then the answer becomes yes.

10.3.5 Theorem. Let X be a simply connected art gallery (i.e., with R² \ X connected) of Lebesgue measure 1, and let r ≥ 2 be a real number such that μ(V(x)) ≥ 1/r for all x ∈ X. Then X can be guarded by at most Cr log r points, where C is a suitable absolute constant.

Proof. The bound O(r log r) for the number of guards is obtained from the epsilon net theorem (Theorem 10.2.4). Namely, we introduce the set system V = {V(x): x ∈ X}, and note that G is a set guarding all of X if and only if it is a transversal of V.
Further, an ε-net for (X, V) with respect to μ is a transversal of V, since by the assumption, μ(V) ≥ ε = 1/r for each V ∈ V. So the theorem will be proved if we can show that dim(V) is bounded by some constant (independent of X).

Tools like Proposition 10.3.2 and Proposition 10.3.3 seem to be of little use, since the visibility regions can be arbitrarily complicated. We thus need a different strategy, one that can make use of the simple connectedness. We proceed by contradiction: Assuming the existence of an extremely large set A ⊆ X shattered by V, we find, by a sequence of Ramsey-type steps, a configuration forcing a hole in X.

Let d be a sufficiently large number, and suppose that there is a d-point set A ⊆ X shattered by V. This means that for each subset B ⊆ A there exists a point a_B ∈ X that can see all points of B but no point of A \ B. We put E = {a_B: B ⊆ A}. In such a situation, we say that A is shattered by E.

Starting with A and E, we find a smaller shattered set in a special position. We draw a line through each pair of points of A. The arrangement of these at most \binom{d}{2} lines has at most O(d⁴) faces (vertices, edges, and open convex polygons), so there is one such face F₀ containing a subset E′ ⊆ E of at least 2^d/O(d⁴) points of E. These points correspond to subsets of A, and so they define a set system V₁ on A. If d₁ = dim(V₁) were bounded by a constant independent of d, then the number of sets in V₁ would grow at most polynomially with d (by Lemma 10.2.5). But we know that it grows exponentially, and so d₁ → ∞ as d → ∞. Thus, we may assume that some subset A₁ ⊆ A is shattered by a subset E₁ ⊆ E′, with d₁ = |A₁| large, and the whole of E₁ lies in a single face of the arrangement of the lines determined by points of A₁.
Next, we would like to ensure a similar condition in the reverse direction, that is, all the points being shattered lying in a single cell of the arrangement of the lines determined by the shattering points. A simple, although wasteful, way is to apply Lemma 10.3.4 about the dimension of the dual set system. This means that we can select sets A₂ ⊆ E₁ and E₂ ⊆ A₁ such that A₂ is shattered by E₂ and d₂ = |A₂| is still large (about log₂ d₁).

Now we can repeat the procedure from the first step of the proof, this time selecting a set A₃ ⊆ A₂ of size d₃ (still sufficiently large) and E₃ ⊆ E₂ such that A₃ is shattered by E₃ and all of E₃ lies in a single face of the arrangement of the lines determined by the pairs of points of A₃. This face must be 2-dimensional, since if it were an edge, all the points of A₃ and E₃ would be collinear, which is impossible.

We thus have all points of A₃ within a single 2-face of the arrangement of the lines determined by E₃ and vice versa. In other words, no line determined by two points of A₃ intersects conv(E₃), and no line determined by two points of E₃ intersects conv(A₃). In particular, conv(A₃) ∩ conv(E₃) = ∅. It follows that each point of E₃ sees all points of A₃ within an angle smaller than π and in the same clockwise angular order; let <_A be this linear order of the points of A₃. Similarly, we have a common counterclockwise angular order <_E of the points of E₃ around any point of A₃.

Suppose that the initial d was so large that d₃ = |A₃| = 5. For each a ∈ A₃, we consider the point ā(a) ∈ E₃ that sees all points of A₃ but a. Let these 5 points form a set E₄ ⊆ E₃. We have a situation where the invisible pairs form a matching between A₃ and E₄ (in the book's figure, dashed connecting segments correspond to invisibility).
Since we have 5 points on each side, we may choose an a ∈ A₃ such that a is neither the first nor the last point of A₃ in <_A, and at the same time ā = ā(a) ∈ E₄ is not the first or last point in <_E. Let a′ and a″ be the neighbors of a in <_A, and let ā′ = ā(a′) and ā″ = ā(a″); all the segments between A₃ and E₄ other than aā, a′ā′, and a″ā″ represent visibility.

The segments aā′ and a′ā both lie on the same side of the line aā, and they intersect (ā′ cannot lie in the triangle aāa′, because the line āā′ would go between a and a′, and neither can the segment a′ā avoid the segment aā′ altogether, because then the line aa′ would separate ā from ā′). Similarly, the segments aā″ and a″ā intersect. The four segments aā′, a′ā, aā″, and a″ā are contained in X, and since X is simply connected, the quadrilateral bounded by them must be a part of X. Hence a and ā can see each other. This contradiction proves Theorem 10.3.5. □

The bound on the VC-dimension obtained from this proof is rather large: about 10¹². By a more careful analysis, avoiding the use of Lemma 10.3.4 on the dual VC-dimension, where one loses the most, the bound has been improved to 23. Determining the exact VC-dimension in the worst case might be quite challenging. The art gallery drawn in the initial picture is not chosen only because of the author's liking for several baroque buildings with pentagonal symmetry, but also because it is an example where V has VC-dimension at least 5 (Exercise 2). A more complicated example gives VC-dimension 6, and this is the current best lower bound.

Bibliography and remarks. As was remarked in the text, for bounding the VC-dimension of set systems defined by polynomial inequalities, we can use the linearization method (as in the proof of Proposition 10.3.2) or results like Theorem 6.2.1 on the number of sign patterns.
The latter can often provide asymptotically sharp bounds on the shatter functions (which are usually the more important quantitative parameters in applications); for linearizations, this happens only in quite simple cases. There are fairly general results bounding the VC-dimension for families of sets defined by functions more general than polynomials; see, e.g., Wilkie [Wil99] and Karpinski and Macintyre [KM97b]. Considerations similar to the proof of Proposition 10.3.3 appear in Dudley [Dud78]. Lemma 10.3.4 about the VC-dimension of the dual set system was noted by Assouad [Ass83].

The art gallery problem considered in this section was raised by Kavraki, Latombe, Motwani, and Raghavan [KLMR98] in connection with automatic motion planning for robots. Theorem 10.3.5, with the proof shown, is from Kalai and Matoušek [KM97a]. That paper also proves that for galleries with h holes, the number of guards can be bounded by a function of ε and h, and provides an example showing that one may need at least Ω(log h) guards in the worst case for a suitable fixed ε. Valtr [Val98] greatly improved the quantitative bounds, obtaining the lower bound of 6 and the upper bound of 23 for dim(V) for simply connected galleries, as well as a bound of O(log² h) for galleries with h holes. In another paper [Val99b], he constructed contractible 3-dimensional galleries where the visibility region of each point occupies almost half of the total volume of the gallery but the number of guards is unbounded, which shows that Theorem 10.3.5 has no straightforward analogue in dimension 3 and higher. Here is another result from [KM97a]: If a planar gallery X is such that among every k points of X there are 3 that can be guarded by a single guard, then all of X can be guarded by O(k³ log k) guards.
Let us stress that our example was included mainly as an illustration of VC-dimension, rather than as a typical specimen of the extensive subject of studying guards in art galleries from the mathematical point of view. This field has a large number of results, some of them very nice; see, e.g., the handbook chapter [Urr00] for a survey.

Exercises

1. (a) Determine the VC-dimension of the set system consisting of all triangles in the plane.
(b) What is the VC-dimension of the system of all convex k-gons in the plane, for a given integer k?

2. Show that dim(V) ≥ 5 for the art gallery shown above Theorem 10.3.5. Can you construct an example with VC-dimension 6, or even higher?

3. Show that the unit square cannot be expressed as {(x, y) ∈ R²: p(x, y) ≥ 0} for any polynomial p(x, y).

4. (a) Let H be a finite set of lines in the plane. For a triangle T, let H_T be the set of lines of H intersecting the interior of T, and let T ⊆ 2^H be the system of the sets H_T for all triangles T. Show that the VC-dimension of T is bounded by a constant.
(b) Using (a) and the epsilon net theorem, prove the suboptimal cutting lemma (Lemma 6.5.1): For every finite set H of lines in the plane and for every r, 1 < r < |H|, there exists a (1/r)-cutting for H consisting of O(r² log² r) generalized triangles. Use the proof in Section 4.6 as an inspiration.
(c) Generalize (a) and (b) to obtain a cutting lemma for circles with the same bound O(r² log² r) (see Exercise 4.6.3).

5. Let d ≥ 1 be an integer, let U = {1, 2, ..., d} and V = 2^U. Let the shattering graph SG_d have vertex set U ∪ V and edge set {{a, A}: a ∈ U, A ∈ V, a ∈ A}. Prove that if H is a bipartite graph with classes R and S, |R| = r and |S| = s, such that r + log₂ s ≤ d, then there is an r-element subset R₁ ⊆ U and an s-element subset S₁ ⊆ V such that the subgraph induced in SG_d by R₁ ∪ S₁ is isomorphic to H.
Thus, the shattering graph is "universal": It contains all sufficiently small bipartite subgraphs.
6. For a graph G, let N(G) = {N_G(v): v ∈ V(G)} be the system of vertex neighborhoods (where N_G(v) = {u ∈ V(G): {u, v} ∈ E(G)}).
(a) Prove that there is a constant d₀ such that dim(N(G)) ≤ d₀ for all planar G.
(b) Show that for every C there exists d = d(C) such that if G is a graph in which every subgraph on n vertices has at most Cn edges, for all n ≥ 1, then dim(N(G)) ≤ d. (This implies (a) and, more generally, shows that bounded genus of G implies bounded dim(N(G)).)
(c) Show that for every k there exists d = d(k) such that if dim(N(G)) ≥ d, then G contains a subdivision of the complete graph K_k as a subgraph. (This gives an alternative proof that if dim(N(G)) is large, then the genus of G is large, too.)

10.4 Weak Epsilon Nets for Convex Sets

Weak ε-nets. Let H be the system of all closed half-planes in the plane, and let μ be the planar Lebesgue measure restricted to a (closed) disk D of unit area. What should the smallest possible ε-net for (R², H) with respect to μ look like? A natural idea would be to place the points of the ε-net equidistantly around the perimeter of the disk.

Chapter 10: Transversals and Epsilon Nets

Is this the best way? No; according to Definition 10.2.2, three points placed as in the picture below form a valid ε-net for every ε > 0, since any half-plane cutting into D necessarily contains at least one of them!

[Figure: three points forming a large triangle that contains the disk D]

One may feel that this is cheating. The problem is that the points of this ε-net are far away from where the measure is concentrated.
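The claim that three far-away points suffice can be checked numerically. The sketch below (our construction, not from the text: an equilateral triangle circumscribed about the unit-area disk, and closed half-planes sampled so that their boundary line comes closer to the origin than the disk's radius, i.e., so that the half-plane cuts into D) verifies that every sampled half-plane contains a triangle vertex:

```python
import math
import random

R = 1 / math.sqrt(math.pi)                 # radius of the disk D of unit area
# equilateral triangle circumscribed about D: vertices at distance 2R
verts = [(2 * R * math.cos(t), 2 * R * math.sin(t))
         for t in (math.pi / 2, math.pi / 2 + 2 * math.pi / 3,
                   math.pi / 2 + 4 * math.pi / 3)]

random.seed(0)
for _ in range(10_000):
    phi = random.uniform(0, 2 * math.pi)
    a = (math.cos(phi), math.sin(phi))     # unit normal of the half-plane
    b = random.uniform(-2 * R, R - 1e-9)   # {x: a.x >= b} meets the interior of D
    assert any(a[0] * vx + a[1] * vy >= b for vx, vy in verts)
print("every sampled half-plane meeting D contains a triangle vertex")
```

The geometric reason the assertion can never fail: for any unit normal a, the nearest triangle vertex direction is within 60° of a, so max over vertices of a·v is at least 2R·cos 60° = R, which exceeds every admissible threshold b.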
For some applications of ε-nets this is not permissible, and for this reason, ε-nets of this kind are usually called weak ε-nets in the literature, while a "real" ε-net in the above example would be required to have all of its points inside the disk D. For ε-nets obtained using the epsilon net theorem (Theorem 10.2.4), this presents no real problem, since we can always restrict the considered set system to the subset where we want our ε-net to lie. In the above example we would simply require an ε-net for the set system (D, H|_D). The restriction to a subset does not increase the VC-dimension.

On the other hand, there are set systems of infinite VC-dimension, and there we cannot require small ε-nets to exist for every restriction of the ground set. Indeed, if (X, F) has infinite VC-dimension, then by definition, there is an arbitrarily large A ⊆ X that is shattered by F, meaning that F|_A = 2^A. And the complete set system (A, 2^A) certainly does not admit small ε-nets: any ½-net, say, for (A, 2^A) with respect to the uniform measure on A must have at least ½|A| elements! In this sense, the epsilon net theorem is an "if and only if" result: A set system (X, F) and all of its restrictions to smaller ground sets admit ε-nets of size depending only on ε if and only if dim(F) is finite.

As was mentioned after the definition of VC-dimension, the (important) system K₂ of convex sets in the plane has infinite VC-dimension. Therefore, the epsilon net theorem is not applicable, and we know that restrictions of K₂ to some bad ground sets (convex independent sets, in this case) provide arbitrarily large complete set systems. But yet it turns out that not too large (weak) ε-nets exist if the ground set is taken to be the whole plane (or, actually, it can be restricted to any convex set). These are much less understood than the ε-nets in the case of finite VC-dimension, and many interesting questions remain open.
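The lower bound just used, that any ½-net for the complete set system (A, 2^A) needs at least ½|A| points, is easy to confirm by exhaustive search on tiny ground sets. The sketch below is our brute force, not an efficient algorithm; it simply tries all candidate nets in increasing size:

```python
from itertools import combinations

def min_half_net(n):
    """Smallest N within A = {0,...,n-1} meeting every subset of size >= n/2."""
    A = range(n)
    # all "heavy" sets: those of size at least ceil(n/2)
    big = [set(S) for k in range(-(-n // 2), n + 1)
                  for S in combinations(A, k)]
    for r in range(n + 1):
        if any(all(set(N) & S for S in big) for N in combinations(A, r)):
            return r

print(min_half_net(5), min_half_net(6))   # -> 3 4
```

The answer is determined by the complement: A minus the net must itself be too small to be a heavy set, so the net needs more than half of A, matching the ½|A| lower bound in the text.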
As has been done in the literature, we will restrict ourselves to measures concentrated on finite point sets, and first we will talk about uniform measures. To be on the safe side, let us restate the definition for this particular case, keeping the traditional terminology of "weak ε-nets."

10.4.1 Definition (Weak epsilon net for convex sets). Let X be a finite point set in R^d and ε > 0 a real number. A set N ⊆ R^d is called a weak ε-net for convex sets with respect to X if every convex set containing at least ε|X| points of X contains a point of N.

In the rest of this section we consider exclusively ε-nets with respect to convex sets, and so instead of "weak ε-net for convex sets with respect to X" we simply say "weak ε-net for X."

10.4.2 Theorem (Weak epsilon net theorem). For every d ≥ 1, ε > 0, and finite X ⊆ R^d, there exists a weak ε-net for X of size at most f(d, ε), where f(d, ε) depends on d and ε but not on X.

The best known bounds are f(2, 1/r) = O(r²) in the plane and f(d, 1/r) = O(r^d (log r)^{b(d)}) for every fixed d, with a suitable constant b(d) > 0. The proof shown below gives f(d, 1/r) = O(r^{d+1}). On the other hand, no lower bound superlinear in r is known (for fixed d).

Proof. The proof is simple once we have the first selection lemma (Theorem 9.1.1) at our disposal. Let X ⊆ R^d be an n-point set. The required weak ε-net N is constructed by a greedy algorithm. Set N₀ = ∅. If N_i has already been constructed, we check whether there is a convex set C containing at least εn points of X and no point of N_i. If not, N_i is a weak ε-net by definition. If yes, we set X_i = X ∩ C, and we apply the first selection lemma to X_i. This gives us a point a_i contained in at least c_d·(εn choose d+1) = Ω(ε^{d+1} n^{d+1}) X_i-simplices. We set N_{i+1} = N_i ∪ {a_i} and continue with the next step of the algorithm. Altogether there are (n choose d+1) X-simplices.
In each step of the algorithm, at least Ω(ε^{d+1} n^{d+1}) of them are "killed," meaning that they were not intersected by N_i but are intersected by N_{i+1}. Hence the algorithm takes at most O(ε^{-(d+1)}) steps. □

In a forthcoming application, we also need weak ε-nets for convex sets with respect to a nonuniform measure (but still concentrated on finitely many points).

10.4.3 Corollary. Let μ be a probability measure concentrated on finitely many points in R^d. Then weak ε-nets for convex sets with respect to μ exist, of size bounded by a function of d and ε.

Sketch of proof. By taking ε a little smaller, we can make the point weights rational. Then the problem is reduced to the weak epsilon net theorem with X a multiset. One can check that all ingredients of the proof go through in this case, too. □

10.4.4 Corollary. For every finite system F of convex sets in R^d, we have τ(F) ≤ f(d, 1/τ*(F)), where f(d, ε) is as in the weak epsilon net theorem.

The proof of the analogous consequence of the epsilon net theorem, Corollary 10.2.7, can be copied almost verbatim.

Bibliography and remarks. Weak ε-nets were introduced by Haussler and Welzl [HW87]. The existence of weak ε-nets for convex sets was proved by Alon, Bárány, Füredi, and Kleitman [ABFK92] by the method shown in the text but with a slight quantitative improvement, achieved by using the second selection lemma (Theorem 9.2.1) instead of the first selection lemma. The estimates for f(d, 1/r) mentioned after Theorem 10.4.2 have the following sources: The bound O(r²) in the plane is from [ABFK92] (see Exercise 1), and the best general bound in R^d, close to O(r^d), is due to Chazelle, Edelsbrunner, Grigni, Guibas, Sharir, and Welzl [CEG+95]. It seems that these bounds are quite far from the truth. Intuitively, one of the "worst" cases for constructing a weak ε-net should be a convex independent set X.
For such sets in the plane, though, near-linear bounds have been obtained by Chazelle et al. [CEG+95]; they are presented in Exercises 2 and 3 below. The original proof of the result in Exercise 3 was formulated using hyperbolic geometry. A simple lower bound for the size of weak ε-nets was noted in [Mat01]; it concerns the dependence on d for ε fixed, and shows that f(d, 1/50) = Ω(e^{√(d/2)}) as d → ∞.

Exercises

1. Complete the following sketch of an alternative proof of the weak epsilon net theorem.
(a) Let X be an n-point set in the plane (assume general position if convenient). Let h be a vertical line with half of the points of X on each side, and let X₁, X₂ be these halves. Let M be the set of all intersections of segments of the form x₁x₂ with h, where x₁ ∈ X₁ and x₂ ∈ X₂. Let N₀ be a weak ε′-net for M (this is a one-dimensional situation!). Recursively construct weak ε″-nets N₁, N₂ for X₁ and X₂, respectively, and set N = N₀ ∪ N₁ ∪ N₂. Show that with a suitable choice of ε′ and ε″, N is a weak ε-net for X of size O(ε⁻²).
(b) Generalize the proof from (a) to R^d (use induction on d). Estimate the exponent of ε in the resulting bound on the size of the constructed weak ε-net.
2. The aim of this exercise is to show that if X is a finite set in the plane in convex position, then for any ε > 0 there exists a weak ε-net for X of size nearly linear in 1/ε.
(a) Let an n-point convex independent set X ⊆ R² be given, and let ℓ ≤ n be a parameter. Choose points p₀, p₁, …, p_{ℓ−1} of X, appearing in this order around the circumference of conv(X), in such a way that the set X_i of points of X lying (strictly) between p_{i−1} and p_i has at most n/ℓ points for each i. Construct a weak ε′-net N_i for each X_i (recursively) with ε′ = ℓε/3, and let M be the set containing the intersection of the segment p₀p_{j−1} with the segment p_ip_j, for all pairs i, j, 1 ≤ i < j−1 ≤ ℓ−2. Show that the set N = {p₀, …, p_{ℓ−1}} ∪ N₁ ∪ ⋯ ∪ N_ℓ
∪ M is a weak ε-net for X.
(b) If f(ε) denotes the minimum necessary size of a weak ε-net for a finite convex independent point set in the plane, derive a recurrence for f(ε) using (a) with a suitably chosen ℓ, and prove a bound of the form f(ε) = O((1/ε)(log(1/ε))^c). What is the smallest c you can get?
3. In this exercise we want to show that if X is the vertex set of a regular convex n-gon in the plane, then there exists a weak ε-net for X of size O(1/ε). Suppose that X lies on the unit circle u centered at 0. For an arc length α < π radians, let r(α) be the radius of the circle centered at 0 and touching a chord of u connecting two points on u at arc distance α. For i = 0, 1, 2, …, let N_i be a set of ⌊C/(ε(1.01)^i)⌋ points, for a suitable constant C, placed at regular intervals on the circle of radius r(ε(1.01)^i/10) centered at 0 (we take only those i for which this is well defined). Show that {0} ∪ ⋃_i N_i is a weak ε-net of size O(1/ε) for X (the constants 1.01, etc., are rather arbitrary and can be greatly improved).

10.5 The Hadwiger-Debrunner (p, q)-Problem

Let F be a finite family of convex sets in the plane. By Helly's theorem, if every 3 sets from F intersect, then all sets of F intersect (unless F has just 2 sets, that is). What if we know only that out of every 4 sets of F, there are some 3 that intersect? Let us say that F satisfies the (4, 3)-condition. In such a case, F may consist, for instance, of n−1 sets sharing a common point and one extra set lying somewhere far away from the others. So we cannot hope for a nonempty intersection of all sets. But can all the sets of F be pierced by a bounded number of points? That is, does there exist a constant C such that for any family F of convex sets in R² satisfying the (4, 3)-condition there are at most C points such that each set of F contains at least one of them? This is the simplest nontrivial case of the so-called (p, q)-problem raised by Hadwiger and Debrunner and solved, many years later, by Alon and Kleitman.
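The (4, 3)-condition itself is easy to test by brute force on small families. In the sketch below (our code; finite subsets of a discrete ground set stand in for convex sets, and "common point" means a common element, so this is only a combinatorial stand-in), each set covers more than half of a 20-point ground set, so any 4 of them have total multiplicity above 2·20 and, by pigeonhole, some 3 of them share a point:

```python
import random
from itertools import combinations

random.seed(1)
m = 20                                    # size of the discrete ground set
# each set covers more than half of the ground set, mimicking a family
# of sets of measure > 1/2 (a discrete stand-in, not actual convex sets)
family = [set(random.sample(range(m), m // 2 + 1)) for _ in range(9)]

for quad in combinations(family, 4):
    # the four sets have total multiplicity 4*(m//2 + 1) > 2m, so by
    # pigeonhole some point lies in at least 3 of them
    assert any(sum(x in S for S in quad) >= 3 for x in range(m))
print("the family satisfies the (4, 3)-condition")
```

The same pigeonhole count is exactly the measure argument used for convex sets: four sets of measure above 1/2 have total measure above 2, so some point is covered at least 3 times.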
10.5.1 Theorem (The (p, q)-theorem). Let p, q, d be integers with p ≥ q ≥ d+1. Then there exists a number HD_d(p, q) such that the following is true: Let F be a finite family of convex sets in R^d satisfying the (p, q)-condition; that is, among any p sets of F there are q sets with a common point. Then F has a transversal consisting of at most HD_d(p, q) points.

Clearly, the condition q ≥ d+1 is necessary, since n hyperplanes in general position in R^d satisfy the (d, d)-condition but cannot be pierced by any number of points bounded independently of n. It has been known for a long time that if p(d−1) < (q−1)d, then HD_d(p, q) exists and equals p−q+1 (Exercise 2). This is the only nontrivial case where exact values, or even good estimates, of HD_d(p, q) are known.

The reader might (rightly) wonder how one can get interesting examples of families satisfying the (4, 3)-condition, say. A large collection of examples can be obtained as follows: Choose a probability measure μ in the plane (μ(R²) = 1), and let F consist of all convex sets S with μ(S) > 0.5. The (4, 3)-condition holds, because 4 sets together have measure larger than 2, and so some point has to be covered at least 3 times. The proof below shows that every family F of planar convex sets fulfilling the (4, 3)-condition somewhat resembles this example; namely, there is a probability measure μ such that μ(S) ≥ c for all S ∈ F, with some small positive constant c > 0 (independent of F). Note that the existence of such a μ implies the (p, 3)-condition for a sufficiently large p = p(c).

The Alon-Kleitman proof combines an amazing number of tools. The whole structure of the proof, starting from basic results like Helly's theorem, is outlined in Figure 10.1. The emphasis is on simplicity of the derivation rather than on the best quantitative bounds (so, for example, Tverberg's theorem is not required in full strength).
The most prominent role is played by the fractional Helly theorem and by weak ε-nets for convex sets. An unsatisfactory feature of this method is that the resulting estimates for HD_d(p, q) are enormously large, while the truth is probably much smaller.

Since we have prepared all of the tools and notions in advance, the proof is now short. We do not attempt to optimize the constant resulting from the proof, and so we may as well assume that q = d+1. By Corollary 10.4.4, we know that τ is bounded by a function of τ* for any finite system of convex sets in R^d. So it remains to show that if F satisfies the (p, d+1)-condition, then τ*(F) = ν*(F) is bounded.

[Figure 10.1: Main steps in the proof of the (p, q)-theorem. Helly's theorem (with the observation that the lexicographic minimum of the intersection of d+1 convex sets in R^d is determined by d of the sets), Radon's lemma, and Tverberg's theorem (finiteness of T(d, r) suffices) lead, via double counting, to the fractional Helly theorem (which also has an alternative direct proof with a much worse bound; see Exercise 10.4.2). The first selection lemma, obtained by double counting, feeds a greedy algorithm that yields weak ε-nets for convex sets of size depending only on d and ε. The fractional Helly theorem gives "(p, d+1)-condition ⇒ ν* bounded," linear programming duality gives ν* = τ*, and weak ε-nets give "τ bounded by a function of d and τ* for systems of convex sets"; together these imply the (p, q)-theorem: the (p, d+1)-condition implies bounded τ.]

10.5.2 Lemma (Bounded ν*). Let F be a finite family of convex sets in R^d satisfying the (p, d+1)-condition. Then ν*(F) ≤ C, where C depends on p and d but not on F.

Proof. The first observation is that if F satisfies the (p, d+1)-condition, then many (d+1)-tuples of sets of F intersect. This can be seen by double counting. Every p-tuple of sets of F contains (at least) one intersecting (d+1)-tuple, and a single (d+1)-tuple is contained in (n−d−1 choose p−d−1) p-tuples (where n = |F|).
Therefore, there are at least

(n choose p) / (n−d−1 choose p−d−1) = (n choose d+1) / (p choose d+1) = α(n choose d+1)

intersecting (d+1)-tuples, with α > 0 depending on p and d only. The fractional Helly theorem (Theorem 8.1.1) implies that at least βn sets of F have a common point, with β = β(d, α) > 0 a constant.³

How is this related to the fractional packing number? It shows that a fractional packing that has the same value on all the sets of F cannot have size larger than 1/β, for otherwise, the point lying in βn sets would receive weight greater than 1 in that fractional packing. The trick for handling other fractional packings is to consider the sets in F with appropriate multiplicities.

Let ψ: F → [0, 1] be an optimal fractional packing (∑_{S∈F: x∈S} ψ(S) ≤ 1 for all x). As we have noted in Theorem 10.1.1, we may assume that the values of ψ are rational numbers. Write ψ(S) = m(S)/D, where D and the m(S) are integers (D is a common denominator). Let us form a new collection F_m of sets, by putting m(S) copies of each S into F_m; so F_m is a multiset of sets. Let N = |F_m| = ∑_{S∈F} m(S) = D·ν*(F). Suppose that we could conclude the existence of a point a lying in at least βN sets of F_m (counted with multiplicity). Then

1 ≥ ∑_{S∈F: a∈S} ψ(S) = ∑_{S∈F: a∈S} m(S)/D ≥ (1/D)·βN = β·ν*(F),

and so ν*(F) ≤ 1/β.

The existence of a point a in at least βN sets of F_m follows from the fractional Helly theorem, but we must be careful: The new family F_m does not have to satisfy the (p, d+1)-condition, since the (p, d+1)-condition for F speaks only of p-tuples of distinct sets from F, while a p-tuple of sets from F_m may contain multiple copies of the same set. Fortunately, F_m does satisfy the (p′, d+1)-condition with p′ = d(p−1) + 1. Indeed, a p′-tuple of sets of F_m contains at least d+1 copies of the same set or it contains p distinct sets, and in the latter case the (p, d+1)-condition for F applies.
Using the fractional Helly theorem (which does not require the sets in the considered family to be distinct) as before, we see that there exists a point a common to at least βN sets of F_m for some β = β(p, d). Lemma 10.5.2 is proved, and this also concludes the proof of the (p, q)-theorem. □

³ By removing these βn sets and iterating, we would get that F can be pierced by O(log n) points. The main point of the (p, q)-theorem is to get rid of this log n factor.

Bibliography and remarks. The (p, q)-problem was posed by Hadwiger and Debrunner in 1957, who also solved the special case in Exercise 2 below. The solution described in this section follows Alon and Kleitman [AK92]. Much better quantitative bounds on HD_d(p, q) were obtained by Kleitman, Gyárfás, and Tóth [KGT01] for the smallest nontrivial values of p, q, d: 3 ≤ HD₂(4, 3) ≤ 13.

Exercises

1. For which values of p and r does the following hold? Let F be a finite family of convex sets in R^d, and suppose that any subfamily consisting of at most p sets can be pierced by at most r points. Then F can be pierced by at most C points, for some C = C_d(p, r).
2. Let p ≥ q ≥ d+1 and p(d−1) < (q−1)d. Prove that HD_d(p, q) ≤ p−q+1. You may want to start with the case of HD₂(5, 4).
3. Let X ⊆ R² be a (4k+1)-point set, and let F = {conv(Y): Y ⊆ X, |Y| = 2k+1}.
(a) Verify that F has the (4, 3)-property, and show that if X is in convex position, then τ(F) ≥ 3.
(b) Show that τ(F) ≤ 5 (for any X).
These results are due to Alon and Rosenfeld (private communication).

10.6 A (p, q)-Theorem for Hyperplane Transversals

The technique of the proof of the (p, q)-theorem is quite general and allows one to prove (p, q)-theorems for various families.
That is, if we have some basic family B of sets, such as the family K of all convex sets in Theorem 10.5.1, a (p, q)-theorem for B means that if F ⊆ B satisfies the (p, q)-condition, then τ(F) is bounded by a function of p and q (depending on B but not on the choice of F). To apply the technique in such a situation, we first need to bound ν*(F) using the (p, q)-condition. To this end, it suffices to derive a fractional Helly-type theorem for B. Next, we need to bound τ(F) as a function of τ*(F). If the VC-dimension of F is bounded, this is just Corollary 10.2.7, and otherwise, we need to prove a "weak ε-net theorem" for F. Here we present one sophisticated illustration.

10.6.1 Theorem (A (p, q)-theorem for hyperplane transversals). Let p ≥ d+1 and let F be a finite family of convex sets in R^d such that among every p members of F, there exist d+1 that have a common hyperplane transversal (i.e., there is a hyperplane intersecting all of them). Then there are at most C = C(p, d) hyperplanes whose union intersects all members of F.

Note that here the piercing is not by points but by hyperplanes. Let τ_hyp(F), τ*_hyp(F), and ν*_hyp(F) be the notions corresponding to the transversal number, fractional transversal number, and fractional packing number in this setting.⁴ We prove only the planar case, since some of the required auxiliary results become more complicated in higher dimensions.

⁴ We could reformulate everything in terms of piercing by points if we wished to do so, by assigning to every S ∈ F the set T_S of all hyperplanes intersecting S. Then, e.g., τ_hyp(F) = τ({T_S: S ∈ F}).

To prove Theorem 10.6.1 for d = 2, we first want to derive a fractional Helly theorem.

10.6.2 Lemma (Fractional Helly for line transversals).
If F is a family of n convex sets in the plane such that at least α(n choose 3) of the triples have line transversals, then at least βn of the sets have a common line transversal, where β = β(α) > 0.

Proof. Let F be a family as in the lemma. We distinguish two cases depending on the number of pairs of sets in F that intersect.

First, suppose that at least (α/6)(n choose 2) pairs {S, S′} ∈ (F choose 2) satisfy S ∩ S′ ≠ ∅. Project all sets of F vertically on the x-axis. The projections form a family of intervals with at least (α/6)(n choose 2) intersecting pairs, and so by the one-dimensional fractional Helly theorem, at least β′n of them have a common point x, with β′ = β′(α) > 0. The vertical line through x intersects β′n sets of F.

Next, it remains to deal with the case of at most (α/6)(n choose 2) intersecting pairs in F. Call a triple {S₁, S₂, S₃} good if it has a line transversal and its three members are pairwise disjoint. Since each intersecting pair gives rise to at most n−2 triples whose members are not pairwise disjoint, there are at most (n−2)·(α/6)(n choose 2) = (α/2)(n choose 3) nondisjoint triples, and so at least (α/2)(n choose 3) good triples remain.

Let {S₁, S₂, S₃} be a good triple; we claim that its sets have a line transversal that is a common tangent to (at least) two of them. To see this, start with an arbitrary line transversal, translate it until it becomes tangent to one of the S_i, and then rotate it, keeping it tangent to S_i, until it becomes tangent to some S_j, j ≠ i.

Let L denote the set of all lines that are common tangents to at least two disjoint members of F. Since two disjoint convex sets in the plane have exactly 4 common tangents, |L| ≤ 4(n choose 2). First, to see the idea, let us make the simplifying assumption that no 3 sets of F have a common tangent. Then each line ℓ ∈ L has a unique defining pair of disjoint sets for which it is a common tangent. As we have seen, for each good triple {S₁, S₂, S₃} there is a line ℓ ∈ L such that two sets of the triple are the defining pair of ℓ and the third is intersected by ℓ. Now, since we have (α/2)(n choose 3) good triples and |L| ≤ 4(n choose 2), there is an ℓ₀ ∈ L playing this role for at least δn of the good triples, with δ = δ(α) > 0.
Now, since we have ϶ (ˇ) good triples and ILl < 4(ƽ), there is an fo E L playing this role for at least 0. Each of these <5n triples contains the defining pair of £0 plus some other set, so altogether £0 intersects at least 6n sets. (Note the similarity to the proof of the fractional Helly theorem.) Now we need to relax the simplifying assumption. Instead of working with lines, we work with pairs ( f, { S, S'}), where S, S' E :F are disjoint and f is one of their common tangents, and we let L be the set of all such pairs. We still have ILl < 4(ˈ) , and each good triple {S1, S2, S3} gives rise to at least 10.6 A (p, q)-Theorem for Hyperplane Transversals 261 one (f, {8, 8'}) E L, where {8, 8'} C {81, 82, 83}. The rest of the argument is as before. 0 The interesting feature is that while this fractional Helly theorem is valid, there is no Helly theorem for line transversals! That is, for all n one can find farnilies of n disjoint planar convex sets (even segments) such that any n-1 have a line transversal but there is no line transversal for all of them (Exercise 5.1 .9}. Lemma 10.6.2 implies, exactly as in the proof of Lemma 10.5.2, that vhyp is bounded for any family satisfying the (p, d+ 1 )-condition. It remains to prove a weak €-net result. 10.6.3 Lemma. Let L be a finite set (or multiset) of lines in the plane and let r > 1 be given. Then there exists a set N of O(r2) lines (a weak E-net) such that whenever 8 C R 2 is an ( arcwise) connected set intersecting more than 1;1 lines of L, then it intersects a line of N. Proof. Recall from Section 4.5 that a m-cutting for a set L of lines is a collection {A 1, . . . , At} of generalized triangles covering the plane such that the interior of each qi is intersected by at most gj lines of L. The cutting r lemma (Lemma 4.5.3) guarantees the existence of a m-cutting of size O(r2). The cutting lemma docs not directly cover multisets of lines. 
Nevertheless, with some care one can check that the perturbation argument works for multisets of lines as well. Thus, let {Δ₁, …, Δ_t} be a (1/r)-cutting for the considered L, t = O(r²). The weak ε-net N is obtained by extending each side of each Δ_i into a line. Indeed, if an arcwise connected set S intersects more than |L|/r lines of L, then it cannot be contained in the interior of a single Δ_i, and consequently, it intersects a line of N. □

Conclusion of the proof of Theorem 10.6.1. Lemma 10.6.3 is now used exactly as the ε-net results were used before, to show that τ_hyp(F) = O(τ*_hyp(F)²) in this case. This proves the planar version of Theorem 10.6.1. □

Bibliography and remarks. Theorem 10.6.1 was proved by Alon and Kalai [AK95], as well as the results indicated in Exercises 3 and 4 below. It is related to the following conjecture of Grünbaum and Motzkin: Let F be a family of sets in R^d such that the intersection of any at most k sets of F is a disjoint union of at most k closed convex sets. Then the Helly number of F is at most k(d+1). So here, in contrast to Exercise 4, the Helly number is determined exactly. I mention this mainly because of a neat proof by Amenta [Ame96] using a technique originally developed for algorithmic purposes.

It is not completely honest to say that there is no Helly theorem for line (and hyperplane) transversals, since there are very nice theorems of this sort, but the assumptions must be strengthened. For example, Hadwiger's transversal theorem asserts that if F is a finite family of disjoint convex sets in the plane with a linear ordering ≺ such that every 3 members of F can be intersected by a directed line in the order given by ≺, then F has a line transversal. This has been generalized to hyperplane transversals in R^d, and many related results are known; see, e.g., the survey by Goodman, Pollack, and Wenger [GPW93].
The application of the Alon-Kleitman technique to transversals of d-intervals in Exercise 2 below is due to Alon [Alo98]. Earlier, a similar result with the slightly stronger bound τ ≤ (d²−d)ν was proved by Kaiser [Kai97] by a topological method, following an initial breakthrough by Tardos [Tar95], who dealt with the case d = 2. By the Alon-Kleitman method, Alon [Alo] proved analogous bounds for families whose sets are subgraphs with at most d components of a given tree, or, more generally, subgraphs with at most d components of a graph G of bounded tree-width. In a sense, the latter is an "if and only if" result, since for every k there exists w(k) such that every graph of tree-width at least w(k) contains a collection of subtrees with ν = 1 and τ > k.

Alon, Kalai, Matoušek, and Meshulam [AKMM01] investigated generalizations of the Alon-Kleitman technique in the setting of abstract set systems. They showed that (p, d+1)-theorems for all p follow from a suitable fractional Helly property concerning (d+1)-tuples, and further that a set system whose nerve is d-Leray (see the notes to Section 8.1) has the appropriate fractional Helly property and consequently satisfies (p, d+1)-theorems.

Exercises

1. (a) Prove that if F is a finite family of circular disks in the plane such that every two members of F intersect, then τ(F) is bounded by a constant (this is a very weak version of Gallai's problem mentioned at the beginning of this chapter).
(b) Show that for every p ≥ 2 there is an n₀ such that if a family of n₀ disks in the plane satisfies the (p, 2)-condition, then there is a point common to at least 3 disks of the family.
(c) Prove a (p, 2)-theorem for disks in the plane (or for balls in R^d).
2. A d-interval is a set J ⊆ R of the form J = I₁ ∪ I₂ ∪ ⋯ ∪ I_d, where the I_j ⊆ R are closed intervals on the real line. (In the literature this is customarily called a homogeneous d-interval.)
(a) Let F be a finite family of d-intervals with ν(F) = k. The family may contain multiple copies of the same d-interval. Show that there is a β = β(d, k) > 0 such that for any such F, there is a point contained in at least β·|F| members of F. Can you prove this with β = 1/(2dk)?
(b) Prove that τ(F) ≤ d·τ*(F) for any finite family of d-intervals.
(c) Show that τ(F) ≤ 2d²ν(F) for any finite family of d-intervals, or at least that τ is bounded by a function of d and ν.
3. Let K_d^k denote the family of all unions of at most k convex sets in R^d (so the d-intervals from Exercise 2 belong to K_1^d). Prove a (p, d+1)-theorem for this family by the Alon-Kleitman technique: Whenever a finite family F ⊆ K_d^k satisfies the (p, d+1)-condition, τ(F) ≤ f(p, d, k) for some function f.
4. (a) Show that the family K_d^k as in Exercise 3 has no finite Helly number. That is, for every h there exists a subfamily F ⊆ K_d^k of h+1 sets in which every h members intersect but ⋂F = ∅.
(b) Use the result of Exercise 3 to derive that for every k, d ≥ 1, there exists an h with the following property: Let F ⊆ K_d^k be a finite family such that the intersection of any subfamily of F lies in K_d^k (i.e., is a union of at most k convex sets). Suppose that every at most h members of F have a common point. Then all the sets of F have a common point. (This is expressed by saying that the family K_d^k has Helly order at most h.)

11 Attempts to Count k-Sets

Consider an n-point set X ⊂ R^d, and fix an integer k. Call a k-point subset S ⊆ X a k-set of X if there exists an open half-space γ such that S = X ∩ γ; that is, S can be "cut off" by a hyperplane. In this chapter we want to estimate the maximum possible number of k-sets of an n-point set in R^d, as a function of n and k. This question is known as the k-set problem, and it seems to be extremely challenging.
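For small planar point sets, the k-sets can be enumerated by brute force directly from the definition: a cutting line can be moved until it passes through two of the points, so it suffices to examine lines through point pairs together with their small tilts. The sketch below is our illustration (assuming the points are in general position), not an efficient algorithm:

```python
from itertools import combinations

def k_sets(pts, k):
    """All k-point subsets of pts cut off by an open half-plane
    (planar brute force; pts is assumed to be in general position)."""
    out = set()
    for a, b in combinations(pts, 2):
        nx, ny = b[1] - a[1], a[0] - b[0]          # normal of the line ab
        for sgn in (1, -1):
            # points strictly on one side of the line through a and b
            s0 = {p for p in pts
                  if sgn * (nx * (p[0] - a[0]) + ny * (p[1] - a[1])) > 0}
            # tilting/translating the line slightly can also pick up a and/or b
            for extra in ((), (a,), (b,), (a, b)):
                cand = frozenset(s0 | set(extra))
                if len(cand) == k:
                    out.add(cand)
    return out

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print([len(k_sets(square, k)) for k in (1, 2, 3)])   # -> [4, 4, 4]
```

For the square, the 2-sets are exactly the four hull edges (no open half-plane separates a diagonal pair), which matches the printed counts.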
Only partial results have been found so far, and there is a substantial gap between the upper and lower bounds even for the number of planar k-sets, in spite of considerable efforts by many researchers. So this chapter presents work in progress, much more so than the other parts of this book. I believe that the k-set problem deserves to be such an exception, since it has stimulated several interesting directions of research, and the partial results have elegant proofs.

11.1 Definitions and First Estimates

For technical reasons, we are going to investigate a quantity slightly different from the number of k-sets, which turns out to be asymptotically equivalent, however. First we consider a planar set X ⊂ R² in general position. A k-facet of X is a directed segment xy, with x, y ∈ X, such that exactly k points of X lie (strictly) to the left of the directed line determined by x and y.

[Figure: a planar point set with a 4-facet marked]

Similarly, for X ⊂ R^d, a k-facet is an oriented (d−1)-dimensional simplex with vertices x₁, x₂, …, x_d ∈ X such that the hyperplane h determined by x₁, x₂, …, x_d has exactly k points of X (strictly) on its positive side. (The orientation of the simplex means that one of the half-spaces determined by h is designated as positive and the other one as negative.) Let us stress that we consider k-facets only for sets X in general position (no d+1 points on a common hyperplane). In such a case, the 0-facets are precisely the facets of the convex hull of X, and this motivates the name k-facet (so k-facets are not k-dimensional!).

Chapter 11: Attempts to Count k-Sets

A special case of k-facets are the halving facets. These exist only if n−d is even, and they are the (n−d)/2-facets; i.e., they have exactly the same number of points on both sides of their hyperplane. Each halving facet appears as an (n−d)/2-facet with both orientations, and so halving facets can be considered unoriented.
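The planar definition translates directly into code: for each directed pair we count the points strictly to the left, using the sign of a cross product. A small sketch of ours (planar case only, general position assumed):

```python
def k_facets(pts, k):
    """Number of directed segments xy (x, y in pts) with exactly k points
    of pts strictly to the left of the directed line through x and y."""
    count = 0
    for x in pts:
        for y in pts:
            if x == y:
                continue
            left = sum(1 for p in pts if p != x and p != y
                       and (y[0]-x[0])*(p[1]-x[1]) - (y[1]-x[1])*(p[0]-x[0]) > 0)
            count += (left == k)
    return count

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(k_facets(square, 0), k_facets(square, 1))   # -> 4 4
```

For the square (n = 4), the 0-facets are the four hull edges in their convex-position orientation, and the 1-facets are the two diagonals with both orientations; since n−2 is even, the halving edges are the (n−2)/2 = 1-facets, i.e., 4/2 = 2 unoriented halving edges.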
In the plane, instead of k-facets and halving facets, one often speaks of k-edges and halving edges. The drawing shows a planar point set with the halving edges:

(figure: a planar point set with its halving edges)

We let KFAC(X, k) denote the number of k-facets of X, and KFAC_d(n, k) is the maximum of KFAC(X, k) over all n-point sets X ⊂ R^d in general position.

Levels, k-sets, and k-facets. The maximum possible number of k-sets is attained for point sets in general position: Each k-set is defined by an open half-space, and so a sufficiently small perturbation of X loses no k-sets (while it may create some new ones). Next, we want to show that for sets in general position, the number of k-facets and the number of k-sets are closely related (although the exact relations are not simple). The best way seems to be to view both notions in the dual setting.

Let X ⊂ R^d be a finite set in general position. Let H = {V(x): x ∈ X} be the collection of hyperplanes dual to the points of X, where V is the duality "with the origin at x_d = −∞" as defined in Section 5.1. We may assume that each k-set S of X is cut off by a nonvertical hyperplane h_S that does not pass through any point of X. If S lies below h_S, then the dual point y_S = V(h_S) is a point lying on no hyperplane of H and having exactly k hyperplanes of H below it. So y_S lies in the interior of a cell at level k of the arrangement of H. Similarly, if S lies above h_S, then y_S is in a cell at level n−k. Moreover, if y_{S_1} and y_{S_2} lie in the same cell, then S_1 = S_2, and so k-sets exactly correspond to cells of level k and n−k.

Similarly, we find that the k-facets of X correspond to vertices of the arrangement of H of levels k or n−k−d (we need to subtract d because of the d hyperplanes passing through the vertex that are not counted in its level). The arrangement of H has at most O(n^{d−1}) unbounded cells (Exercise 6.1.2).
Therefore, all but at most O(n^{d−1}) cells of level k have a topmost vertex, and the level of such a vertex is between k−d+1 and k. On the other hand, every vertex is the topmost vertex of at most one cell of level k. A similar relation exists between cells of level n−k and vertices of level n−k−d. Therefore, the number of k-sets of X is at most O(n^{d−1}) + Σ_{j=0}^{d−1} KFAC(X, k−j). Conversely, KFAC(X, k) can be bounded in terms of the number of k-sets; this we leave to Exercise 2. From now on, we thus consider only estimating KFAC_d(n, k).

Viewing KFAC_d(n, k) in terms of the k-level in a hyperplane arrangement, we obtain some immediate bounds from the results of Section 6.3. The k-level certainly has no more vertices than all the levels 0 through k together, and hence

KFAC_d(n, k) = O(n^{⌊d/2⌋} (k+1)^{⌈d/2⌉})

by Theorem 6.3.1. On the other hand, the arrangements showing that Theorem 6.3.1 is tight (constructed using cyclic polytopes) prove that for k < n/2, we have

KFAC_d(n, k) = Ω(n^{⌊d/2⌋} (k+1)^{⌈d/2⌉−1});

this determines KFAC_d(n, k) up to a factor of k.

The levels 0 through n together have O(n^d) vertices, and so for any particular arrangement of n hyperplanes, if k is chosen at random, the expected k-level complexity is O(n^{d−1}). This means that a level with a substantially higher complexity has to be exceptional, much bigger than most other levels. It seems hard to imagine how this could happen. Indeed, it is widely believed that KFAC_d(n, k) is never much larger than n^{d−1}. On the other hand, levels with somewhat larger complexity can appear, as we will see in Section 11.2.

Halving facets versus k-facets. In the rest of this chapter we will mainly consider bounds on the halving facets; that is, we will prove estimates for the function

HFAC_d(n) = KFAC_d(n, (n−d)/2),  n − d even.

It is easy to see that for all k, we have KFAC_d(n, k) ≤ 2·HFAC_d(2n+d) (Exercise 1). Thus, for proving asymptotic bounds on max_{0≤k<n} KFAC_d(n, k), it suffices to estimate the number of halving facets. The following theorem provides a finer connection in the other direction.

11.1.1 Theorem. Suppose that HFAC_d(n) = O(n^{d−c_d}) for some constant c_d > 0.
Then we have, for all k ≤ (n−d)/2,

KFAC_d(n, k) = O(n^{⌊d/2⌋} (k+1)^{⌈d/2⌉−c_d}).

Proof. We use the method of the probabilistic proof of the cutting lemma from Section 6.5 with only small modifications; we assume familiarity with that proof. We work in the dual setting, and so we need to bound the number of vertices of level k in the arrangement of a set H of n hyperplanes in general position. Since for k bounded by a constant the complexity of the k-level is asymptotically determined by Clarkson's theorem on levels (Theorem 6.3.1), we may assume 2 ≤ k ≤ (n−d)/2. We set r = n/k and p = r/n = 1/k, and we let S ⊆ H be a random sample obtained by independent Bernoulli trials with success probability p.

This time we let T(S) denote the bottom-vertex triangulation of the bottom unbounded cell of the arrangement of S (actually, in this case it seems simpler to use the top-vertex triangulation instead of the bottom-vertex one); the rest of the arrangement is ignored. (For d = 2, we can take the vertical decomposition instead.) Here is a schematic illustration for the planar case:

(figure: the lines of S, the triangulation T(S), and the level k of H)

The conditions (C0)–(C2) as in Section 6.5 are satisfied for this T(S) (in (C0) we have constants depending on d, of course), and as for (C3), we have |T(S)| = O(|S|^{⌊d/2⌋} + 1) for all S ⊆ H by the asymptotic upper bound theorem (Theorem 5.5.2) and by the properties of the bottom-vertex triangulation. Thus, the analogue of Proposition 6.5.2 can be derived: For every t > 0, the expected number of simplices with excess at least t in T(S) is at most

O(2^{−t} r^{⌊d/2⌋}).   (11.1)

Let V_k denote the set of the vertices of level k in the arrangement of H, whose size we want to estimate, and let V_k(S) be the vertices in V_k that have level 0 with respect to the arrangement of S; i.e., they are covered by a simplex of T(S).
First we claim that, typically, a significant fraction of the vertices of V_k appears in V_k(S); namely, E[|V_k(S)|] ≥ |V_k|/4. For every v ∈ V_k, the probability that v ∈ V_k(S), i.e., that none of the at most k hyperplanes below v goes into S, is at least (1 − p)^k = (1 − 1/k)^k ≥ 1/4 (using k ≥ 2), and the claim follows.

It remains to bound E[|V_k(S)|] from above. Let Δ ∈ T(S) be a simplex and let H_Δ be the set of all hyperplanes of H intersecting Δ. Not all of these hyperplanes have to intersect the interior of Δ (and thus be counted in the excess of Δ), but since H is in general position, there are at most a constant number of such exceptional hyperplanes. We note that all the vertices in V_k(S) ∩ Δ have the same level in the arrangement of H_Δ (it is k minus the number of hyperplanes below Δ). By the assumption in the theorem, we thus have |V_k(S) ∩ Δ| = O(|H_Δ|^{d−c_d}) = O((t_Δ · n/r)^{d−c_d}) = O((t_Δ k)^{d−c_d}), where t_Δ is the excess of Δ. Therefore,

E[|V_k(S)|] ≤ O(k^{d−c_d}) · E[ Σ_{Δ∈T(S)} t_Δ^{d−c_d} ].

Using (11.1), the sum is bounded by O((n/k)^{⌊d/2⌋}); this is as in Section 6.5. We have shown that

|V_k| ≤ 4·E[|V_k(S)|] = O(n^{⌊d/2⌋} k^{⌈d/2⌉−c_d}),

and Theorem 11.1.1 is proved.

Bibliography and remarks. We summarize the bibliography of k-sets here, and in the subsequent sections we only mention the origins of the particular results described there. In the following we always assume k ≥ 1, which allows us to write k instead of k+1 in the bounds.

The first paper concerning k-sets is by Lovász [Lov71], who proved an O(n^{3/2}) bound for the number of halving edges. Straus (unpublished) showed an Ω(n log n) lower bound. This appeared, together with the bound O(n√k) for planar k-sets, in Erdős, Lovász, Simmons, and Straus [ELSS73]. The latter bound was independently found by Edelsbrunner and Welzl [EW85].
It seems to be the natural bound to come up with if one starts thinking about planar k-sets; there are numerous variations of the proof (see Agarwal, Aronov, Chan, and Sharir [AACS98]), and breaking this barrier took quite a long time. The first progress was made by Pach, Steiger, and Szemerédi [PSS92], who improved the upper bound by the tiny factor of log* k. A significant breakthrough, and the currently best planar upper bound of O(nk^{1/3}), was achieved by Dey [Dey98]. A simpler version of his proof, involving new insights, was provided by Andrzejak, Aronov, Har-Peled, Seidel, and Welzl [AAHP+98]. An improvement over the Ω(n log k) lower bound [ELSS73] was obtained by Tóth [Tót01b], namely, KFAC_2(n, k) ≥ n·exp(c√(log k)) for a constant c > 0 (a similar bound was found by Klawe, Paterson, and Pippenger in the 1980s in an unpublished manuscript, but only for the number of vertices of level k in an arrangement of n pseudolines in the plane).

The first nontrivial bound on k-sets in higher dimension was proved by Bárány, Füredi, and Lovász [BFL90]. They showed that HFAC_3(n) = O(n^{2.998}). Their method includes the main ingredients of most of the subsequent improvements; in particular, they proved a planar version of the second selection lemma (Theorem 9.2.1) and conjectured the colored Tverberg theorem (see the notes to Sections 8.3 and 9.2). Aronov, Chazelle, Edelsbrunner, Guibas, Sharir, and Wenger [ACE+91] improved the bound for the planar second selection lemma (with a new proof) and showed that HFAC_3(n) = O(n^{8/3} log^{5/3} n). A nontrivial upper bound for every fixed dimension d, HFAC_d(n) = O(n^{d−c_d}) for a suitable c_d > 0, was obtained by Alon, Bárány, Füredi, and Kleitman [ABFK92], following the method of [BFL90] and using the recently established colored Tverberg theorem.
Dey and Edelsbrunner [DE94] proved a slightly better 3-dimensional bound HFAC_3(n) = O(n^{8/3}) by a direct and simple 3-dimensional argument avoiding the use of a planar selection lemma (see Exercise 11.3.8). A further significant improvement, to HFAC_3(n) = O(n^{2.5}), was achieved by Sharir, Smorodinsky, and Tardos [SST01]; their argument is sketched in the notes to Section 11.4. Theorem 11.1.1 is due to Agarwal et al. [AACS98]. Their proof uses a way of random sampling different from ours, but the idea is the same.

Another interesting result on planar k-sets, due to Welzl [Wel86], is Σ_{k∈K} KFAC(X, k) = O(n·√(Σ_{k∈K} k)) for every n-point set X ⊂ R² and every index set K ⊆ {1, 2, ..., ⌊n/2⌋} (see Exercise 11.3.2). Using identities derived by Andrzejak et al. [AAHP+98] (based on Dey's method), the bound can be improved to O(n·(|K| · Σ_{k∈K} k)^{1/3}); this was communicated to me by Emo Welzl.

Edelsbrunner, Valtr, and Welzl [EVW97] showed that "dense" sets X, i.e., n-point X ⊂ R^d such that the ratio of the maximum to minimum interpoint distance is O(n^{1/d}), cannot asymptotically maximize the number of k-sets. For example, in the plane, they proved that a bound of HFAC_2(n) = O(n^{1+α}) for arbitrary sets implies that any n-point dense set has at most O(n^{1+α/2}) halving edges. Alt, Felsner, Hurtado, and Noy [AFH+00] showed that if X ⊂ R² is a set contained in a union of C convex curves, then KFAC(X, k) = O(n) for all k, with the constant of proportionality depending on C.

Several upper bounds concern the maximum combinatorial complexity of the level k for objects other than hyperplanes. For segments in the plane, the estimate obtained by combining a result of Dey [Dey98] with the general tools in Agarwal et al. [AACS98] is O(nk^{1/3}α(n/k)). Their method yields the same result for the level k in an arrangement of n extendible pseudosegments (defined in Exercise 6.2.5).
For arbitrary pseudosegments, the result of Chan mentioned in that exercise (n pseudosegments can be cut into O(n log n) extendible pseudosegments) gives the slightly worse bound O(nk^{1/3}α(n/k) log^{2/3}(k+1)).

The study of levels in arrangements of curves with more than one pairwise intersection was initiated by Tamaki and Tokuyama [TT98], who considered a family of n parabolas in R² (here is a neat motivation: Given n points in the plane, each of them moving along a straight line with constant velocity, how many times can the pair of points with median distance change?). They showed that n parabolas can be cut into O(n^{5/3}) pieces in total so that the resulting collection of curves is a family of pseudosegments (see Exercise 6). This idea of cutting curves into pseudosegments proved to be of great importance for other problems as well; see the notes to Section 4.5. Tamaki and Tokuyama obtained the bound of O(n^{2−1/12}) for the maximum complexity of the k-level for n parabolas. Using the tools from [AACS98] and a cutting into extendible pseudosegments, Chan [Cha00a] improved this bound to O(nk^{7/9} log^{2/3}(k+1)).

All these results can be transferred without much difficulty from parabolas to pseudocircles, which are closed planar Jordan curves, every two intersecting at most twice. Aronov and Sharir [AS01a] proved that if the curves are circles, then even a cutting into O(n^{3/2+ε}) pseudosegments is possible (the best known lower bound is Ω(n^{4/3}); see Exercise 5). This upper bound was extended by Nevo, Pach, Pinchasi, and Sharir [NPPS01] to certain families of pseudocircles: The pseudocircles in the family should be selected from a 3-parametric family of real algebraic curves and satisfy an additional condition; for example, it suffices that their interiors can be pierced by O(1) points (also see Alon, Last, Pinchasi, and Sharir [ALPS01] for related results).
Tamaki and Tokuyama constructed a family of n curves with at most 3 pairwise intersections that cannot be cut into fewer than Ω(n²) pseudosegments, demonstrating that their approach cannot yield nontrivial bounds for the complexity of levels for such general curves (Exercise 5). However, for graphs of polynomials of degree at most s, Chan [Cha00a] obtained a cutting into roughly O(n^{2−1/3^{s−1}}) pseudosegments and consequently a nontrivial upper bound for levels. His bound was improved by Nevo et al. [NPPS01].

As for higher-dimensional results, Katoh and Tokuyama [KT99] proved the bound O(n²k^{2/3}) for the complexity of the k-level for n triangles in R³.

Bounds on k-sets have surprising applications. For example, Dey's results for planar k-sets mentioned above imply that if G is a graph with n vertices and m edges and each edge has a weight that is a linear function of time, then the minimum spanning tree of G changes at most O(mn^{1/3}) times; see Eppstein [Epp98]. The number of k-sets of the infinite set (Z_{≥0})^d (lattice points in the nonnegative orthant) appears in computational algebra in connection with Gröbner bases of certain ideals. The bounds of O((k log k)^{d−1}) and Ω(k^{d−1} log k) for every fixed d, as well as references, can be found in Wagner [Wag01].

Exercises

1. Verify that for all k and all dimensions d, KFAC_d(n, k) ≤ 2·HFAC_d(2n+d).
2. Show that every vertex in an arrangement of hyperplanes in general position is the topmost vertex of exactly one cell. For X ⊂ R^d finite and in general position, bound KFAC(X, k) using the numbers of j-sets of X, k ≤ j ≤ k+d−1.
3. Suppose that we have a construction that provides an n-point set in the plane with at least f(n) halving edges for all even n. Show that this implies KFAC_2(n, k) = Ω(⌊n/(2k)⌋ · f(2k)) for all k < n/2.
4. Suppose that for all even n, we can construct a planar n-point set with at least f(n) halving edges.
Show that one can construct n-point sets with Ω(n·f(n)) halving facets in R³ (for infinitely many n, say). Can you extend the construction to R^d, obtaining Ω(n^{d−2}·f(n)) halving facets?
5. (Lower bounds for cutting curves into pseudosegments) In this exercise, Γ is a family of n curves in the plane, such as those considered in connection with Davenport-Schinzel sequences: Each curve intersects every vertical line exactly once, every two curves intersect at most s times, and no 3 have a common point.
(a) Construct such a family Γ with s = 2 (a family of pseudoparabolas) whose arrangement has Ω(n^{4/3}) empty lenses, where an empty lens is a bounded cell of the arrangement of Γ bounded by two of the curves. (The number of empty lenses is obviously a lower bound for the number of cuts required to turn Γ into a family of pseudosegments.)
(b) Construct a family Γ with s = 3 and with Ω(n²) empty lenses.
6. (Cutting pseudoparabolas into pseudosegments) Let Γ be a family of n pseudoparabolas in the plane as in Exercise 5(a). For every two curves γ, γ' ∈ Γ with exactly two intersection points, the lens defined by γ and γ' consists of the portions of γ and γ' between their two intersection points, as indicated in the picture:

(figure: a lens formed by two pseudoparabolas)

(a) Let A be a family of pairwise nonoverlapping lenses in the arrangement of Γ, where two lenses are nonoverlapping if they do not share any edge of the arrangement (but they may intersect, or one may be enclosed in the other). The goal is to bound the maximum size of A. We define a bipartite graph G with V(G) = Γ × {0, 1} and with E(G) consisting of all edges {(γ, 0), (γ', 1)} such that there is a lens in A whose lower portion comes from γ and whose upper portion comes from γ'. Prove that G contains no K_{3,4}, and hence |A| = O(n^{5/3}). Supposing that a K_{3,4} were present, corresponding to "lower" curves γ_1, γ_2, γ_3 and "upper" curves γ'_1, ..., γ'_4, consider
the upper envelope U of γ_1, γ_2, γ_3 and the lower envelope L of γ'_1, ..., γ'_4. (A more careful argument shows that even K_{3,3} is excluded.)
(b) Show that the graph G in (a) can contain a K_{2,r} for arbitrarily large r.
(c) Given Γ, define the lens set system (X, L) with X consisting of all bounded edges of the arrangement of Γ and the sets of L corresponding to lenses (each lens contributes the set of arrangement edges contained in its two arcs). Check that τ(L) is the smallest number of cuts needed to convert Γ into a collection of pseudosegments, and that the result of (a) implies ν(L) = O(n^{5/3}).
(d) Using the method of the proof of Clarkson's theorem on levels and the inequality in Exercise 10.1.4(a), prove that τ(L) = O(n^{5/3}).
7. (The k-set polytope) Let X ⊂ R^d be an n-point set in general position and let k ∈ {1, 2, ..., n−1}. The k-set polytope Q_k(X) is the convex hull of the set

{ Σ_{x∈S} x : S ⊆ X, |S| = k }

in R^d. Prove that the vertices of Q_k(X) correspond bijectively to the k-sets of X.
The k-set polytope was introduced by Edelsbrunner, Valtr, and Welzl [EVW97]. It can be used for algorithmic enumeration of k-sets, for example by the reverse search method mentioned in the notes to Section 5.5.

11.2 Sets with Many Halving Edges

Here we are going to construct n-point planar sets with a superlinear number of halving edges. It seems more intuitive to present the constructions in the dual setting, that is, to construct arrangements of n lines with many vertices of level (n−2)/2.

A simpler construction. We begin with a construction providing Ω(n log n) vertices of the middle level. By induction on m, we construct a set L_m of 2^m lines in general position with at least f_m = (m+1)·2^{m−2} vertices of the middle level (i.e., level 2^{m−1}−1). We note that each line of L_m contains at least one of the middle-level vertices. For m = 1 we take two nonvertical intersecting lines.
Let m ≥ 1 and suppose that an L_m satisfying the above conditions has already been constructed. First, we select a subset M ⊆ L_m of 2^{m−1} lines, and to each line ℓ ∈ M we assign a vertex v(ℓ) of the middle level lying on ℓ, in such a way that v(ℓ) ≠ v(ℓ') for ℓ ≠ ℓ'. The selection can be done greedily: We choose a line into M, take a vertex of the middle level on it, and exclude the other line passing through that vertex from further consideration.

Next, we replace each line of L_m by a pair of lines, both almost parallel to the original line. For a line ℓ ∈ M, we let the two lines replacing ℓ intersect at v(ℓ). Each of the remaining lines is replaced by two almost parallel lines whose intersection is not near to any vertex of the arrangement of L_m. This yields the set L_{m+1}. As the following picture shows, a middle-level vertex of the form v(ℓ) yields 3 vertices of the new middle level (level 2^m − 1 in the arrangement of L_{m+1}):

(figure: a vertex v(ℓ) giving rise to 3 new middle-level vertices)

Each of the other middle-level vertices yields 2 vertices of the new middle level:

(figure: an ordinary middle-level vertex giving rise to 2 new ones)

Hence the number of middle-level vertices for L_{m+1} is at least

2f_m + 2^{m−1} = 2[(m+1)·2^{m−2}] + 2^{m−1} = (m+2)·2^{m−1} = f_{m+1}.

A better construction. This construction is more complicated, but it shows the lower bound n·e^{Ω(√log n)} for the number of vertices of the middle level (and thus for the number of halving edges). This bound is smaller than n^{1+δ} for every δ > 0 but much larger than n(log n)^c for any constant c. For simplicity, we will deal only with values of n of a special form, thus providing a lower bound for infinitely many n. Simple considerations show that HFAC_2(n) is nondecreasing, and this gives a bound for all n.

The construction is again inductive. We first explain the idea, and then we describe it more formally. In the first step, we let L_0 consist of two intersecting nonvertical lines.
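Before moving on, the counting in the simpler construction above can be sanity-checked numerically (the script is ours, purely illustrative): starting from f_1 = 1 and applying the recurrence f_{m+1} = 2f_m + 2^{m−1} should reproduce the closed form f_m = (m+1)·2^{m−2}.

```python
# Verify that the doubling construction's recurrence
#   f_{m+1} = 2*f_m + 2^(m-1)
# has the closed form f_m = (m+1) * 2^(m-2), starting from f_1 = 1
# (two crossing lines give one middle-level vertex).
f = 1
for m in range(1, 30):
    # (m+1) * 2^(m-2) written with integer arithmetic as (m+1) * 2^m // 4
    assert f == (m + 1) * 2**m // 4, (m, f)
    f = 2 * f + 2**(m - 1)
print("closed form verified for m = 1..29")
```

With n = 2^m lines this gives f_m = (m+1)·n/4 = Θ(n log n), which is the bound claimed for the simpler construction.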
Suppose that after m steps, a set of lines L_m in general position has already been constructed, with many vertices of the middle level. First we replace every line ℓ ∈ L_m by a_m parallel lines; let us call these lines the bundle of ℓ. So if v is a vertex of the middle level of L_m, we get a_m vertices of the middle level near v after the replacement.

(figure: the bundles of ℓ and ℓ' near a former vertex v)

Then we add two new lines λ_v and μ_v as indicated in the next picture, and we obtain 2a_m vertices of the middle level:

(figure: the auxiliary lines λ_v and μ_v doubling the number of middle-level vertices near v)

If n_m = |L_m| and f_m is the number of vertices of the middle level in L_m, the construction gives roughly n_{m+1} ≈ a_m·n_m + 2f_m and f_{m+1} ≈ 2a_m·f_m. This recurrence is good: With a suitable choice of the multiplicities a_m, it leads to the claimed bound. But the construction as presented so far is not at all guaranteed to work, because the new lines λ_v and μ_v might mess up the levels of the other vertices. We must make some extra provisions to get this under control.

First of all, we want the auxiliary lines λ_v and μ_v to be nearly parallel to the old line ℓ' in the picture. This is achieved by letting the vertical spacing of the a_m lines in the bundle of ℓ' be much smaller than the spacing in the bundle of ℓ: Namely, if the lines of L_m are ℓ_1, ℓ_2, ..., ℓ_{n_m}, then the vertical spacing in the bundle of ℓ_i is set to ε^i, where ε > 0 is a suitable very small number.

Let ℓ_i be a line of L_m, and let d_i denote the number of indices j < i such that ℓ_i intersects ℓ_j in a vertex of the middle level. In the new arrangement of L_{m+1} we obtain a_m lines of the bundle of ℓ_i and 2d_i lines of the form λ_v and μ_v that are almost parallel to ℓ_i, with d_i of them going above the bundle and d_i below. Thus, for points not very close to ℓ_i, the effect is as if ℓ_i were replicated (a_m + 2d_i) times. This is still not good; we would need all lines to have the same multiplicities.
So we let D be the maximum of the d_i, and for each i, we add D − d_i further lines parallel to ℓ_i below the bundle and D − d_i parallel lines above it.

How do we control D? We do not know how many middle-level vertices can appear on the lines of L_{m+1}; some vertices are necessarily there by the construction, but some might arise "just by chance," say by the interaction of the various auxiliary lines λ_v and μ_v, which we do not really want to analyze. So we take a conservative attitude and deal only with the middle-level vertices we know about for sure.

Here is the whole construction, this time how it really goes. Suppose that we have already constructed a set L_m = {ℓ_1, ..., ℓ_{n_m}} of lines in general position (which includes being nonvertical) and a set V_m of middle-level vertices in the arrangement of L_m, such that the number of vertices of V_m lying on ℓ_i is no more than D_m, for all i = 1, 2, ..., n_m. We let ε = ε_m be sufficiently small, and we replace each ℓ_i by a_m parallel lines with vertical spacing ε^i. Then for each v ∈ V_m, we add the two lines λ_v and μ_v as explained above, and finally we add, for each i, the 2(D_m − d_i) lines parallel to ℓ_i, half above and half below the bundle, where d_i is the number of vertices of V_m lying on ℓ_i. Since L_{m+1} is supposed to be in general position, we should not forget to apply a very small perturbation to L_{m+1} after completing the step just described.

For each old vertex v ∈ V_m, we now really get the 2a_m new middle-level vertices near v as was indicated in the drawing above, and we put these into V_{m+1}. So we have

f_{m+1} = |V_{m+1}| = 2a_m·f_m.

What about D_{m+1}, the maximum number of points of V_{m+1} lying on a single line? Each line in the bundle of ℓ_i has exactly d_i vertices of V_{m+1}. The lines λ_v and μ_v get at most 2a_m vertices of V_{m+1} each, and the remaining auxiliary lines get none. So we may take

D_{m+1} = 2a_m.

It remains to define the a_m, which are free parameters of the construction. A good choice is to let a_m = 4D_m.
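The effect of this choice can be confirmed numerically; a minimal sketch (ours, not the book's) that takes the recurrences f_{m+1} = 2a_m·f_m and D_{m+1} = 2a_m with equality, together with a_m = 4D_m and the starting values D_0 = f_0 = 1:

```python
# Check the recurrences of the stronger construction:
#   f_{m+1} = 2 * a_m * f_m   (guaranteed middle-level vertices)
#   D_{m+1} = 2 * a_m         (max vertices of V_{m+1} on one line)
# with the choice a_m = 4 * D_m, starting from D_0 = 1, f_0 = 1.
D, f = 1, 1
for m in range(15):
    assert D == 8**m
    assert f == 8**m * 8**(m * (m - 1) // 2)   # 8^(m + (1 + 2 + ... + (m-1)))
    a = 4 * D
    f = 2 * a * f
    D = 2 * a
print("D_m = 8^m and f_m = 8^(m + m(m-1)/2) verified for m = 0..14")
```

So f_{m+1}/f_m = 8^{m+1}, i.e., the number of guaranteed middle-level vertices grows by a factor 8^{m+1} per step, which is what drives the bound derived next.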
Then we have D_0 = 1, D_m = 8^m, and a_m = 4·8^m. From the recurrences above, we further calculate

n_m ≤ 2·6^m·8^{1+2+···+(m−1)},
f_m = 8^m·8^{1+2+···+(m−1)}.

So log n_m is O(m²), while log(f_m/n_m) ≥ log((4/3)^m/2) = Ω(m). We indeed have f_m ≥ n_m·e^{Ω(√log n_m)} as promised.

Bibliography and remarks. The first construction is from Erdős et al. [ELSS73] and the second one from Tóth [Tót01b]. In the original papers, they are phrased in the primal setting.

11.3 The Lovász Lemma and Upper Bounds in All Dimensions

In this section we prove a basic property of the halving facets, usually called the Lovász lemma. It implies nontrivial upper bounds on the number of halving facets, by a simple self-contained argument in the planar case and by the second selection lemma (Theorem 9.2.1) in an arbitrary dimension. We prove a slightly more precise version of the Lovász lemma than is needed here, since we will use it in a subsequent section. On the other hand, we consider only halving facets, although similar results can be obtained for k-facets as well. Sticking to halving facets simplifies matters a little, since for other k-facets one has to be careful about the orientations.

Let X ⊂ R^d be an n-point set in general position with n − d even. Let T be a (d−1)-point subset of X, and let

V_T = {x ∈ X \ T : T ∪ {x} is a halving facet of X}.

In the plane, T consists of a single point, and V_T are the other endpoints of the halving edges emanating from it. In 3 dimensions, conv(T) is a segment, and a typical picture might look as follows:

(figure: the segment conv(T), T = {t_1, t_2}, with the halving triangles containing it)

where T = {t_1, t_2} and the triangles are halving facets. Let h be a hyperplane containing T and no point of X \ T. Since |X \ T| is odd, one of the open half-spaces determined by h, the larger half-space, contains more points of X than the other, the smaller half-space.

11.3.1 Lemma (Halving-facet interleaving lemma). Every hyperplane h as above "almost halves" the halving facets containing T.
More precisely, if r is the number of points of V_T in the smaller half-space of h, then the larger half-space contains exactly r+1 points of V_T.

Proof. To get a better picture, we project T and V_T to a 2-dimensional plane ρ orthogonal to the affine hull of T. (For dimension 2, no projection is necessary, of course.) Let the projection of T, which is a single point, be denoted by t, and the projection of V_T by V'_T. Note that the points of V_T project to distinct points. The halving facets containing T project to segments emanating from t. The hyperplane h projects to a line h', which we draw vertically in the following indication of the situation in the plane ρ:

(figure: the projected picture, with the smaller half-space on one side of h' and the larger half-space on the other)

We claim that for any two angularly consecutive segments, such as at and bt, the angle opposite the angle atb contains a point of V'_T (such as z). Indeed, the hyperplane passing through T and a has exactly (n−d)/2 points of X in both of its open half-spaces. If we start rotating it around T towards b, the point a enters one of the open half-spaces (in the picture, the one below the rotating hyperplane). But just before we reach b, that half-space again has (n−d)/2 points. Hence there was a moment when the number of points in this half-space went from (n−d)/2 + 1 back to (n−d)/2, and this must have been a moment of reaching a suitable z.

This means that for every two consecutive points of V'_T, there is at least one point of V'_T in the corresponding opposite wedge. There is actually exactly one, for if there were two, their opposite wedge would have to contain another point. Therefore, the numbers of points of V_T in the two half-spaces determined by h differ by exactly 1. To finish the proof of the lemma, it remains to observe that if we start rotating the hyperplane h around T in either direction, the first point of V_T encountered must be in the larger half-space. So the larger half-space has one more point of V_T than the smaller half-space.
(Recall that the larger half-space is defined with respect to X, and so we did not just parrot the definition here.) □

11.3.2 Corollary (Lovász lemma). Let X ⊂ R^d be an n-point set in general position, and let ℓ be a line that is not parallel to any of the halving facets of X. Then ℓ intersects the relative interiors of at most O(n^{d−1}) halving facets of X.

Proof. We can move ℓ a little so that it intersects the relative interiors of the same halving facets as before but intersects no boundary of a halving facet. Next, we start translating ℓ in a suitably chosen direction. (In the plane there are just two directions, and both of them will do.) The direction is selected so that we never cross any (d−3)-dimensional flat determined by the points of X. To this end, we need to find a two-dimensional plane passing through ℓ and avoiding finitely many (d−3)-dimensional flats in R^d, none of them intersecting ℓ; this is always possible.

As we translate the line ℓ, the number of halving facets currently intersected by ℓ may change only as ℓ crosses the boundary of a halving facet F, i.e., a (d−2)-dimensional face of F. By the halving-facet interleaving lemma, by crossing one such face T, the number of intersected halving facets changes by 1. After moving far enough, the translated line ℓ intersects no halving facet at all. On its way, it crossed no more than O(n^{d−1}) boundaries, since there are only O(n^{d−1}) simplices of dimension d−2 with vertices at X. This proves the corollary. □

11.3.3 Theorem. For each d ≥ 2, the maximum number of halving facets satisfies

HFAC_d(n) = O(n^{d − 1/s_{d−1}}),

where s_{d−1} is an exponent for which the statement of the second selection lemma (Theorem 9.2.1) holds in dimension d−1. In particular, in the plane we obtain HFAC_2(n) = O(n^{3/2}).
For higher dimensions, this result shows that HFAC_d(n) is asymptotically somewhat smaller than n^d, but the proof method is inadequate for proving bounds close to n^{d−1}. Theorem 11.3.3 is proved from Corollary 11.3.2 using the second selection lemma. Let us first give a streamlined proof for the planar case, although later on we will prove a considerably better planar bound.

Proof of Theorem 11.3.3 for d = 2. Let us project the points of X vertically on the x-axis, obtaining a set Y. The projections of the halving edges of X define a system of intervals with endpoints in Y. By Corollary 11.3.2, any point is contained in the interior of at most O(n) of these intervals, for otherwise, a vertical line through that point would intersect too many halving edges.

Mark every qth point of Y (with q a parameter to be set suitably later). Divide the intervals into two classes: those containing some marked point in their interior, and those lying in a gap between two marked points. The number of intervals of the first class is at most O(n) per marked point, i.e., at most O(n²/q) in total. The number of intervals of the second class is no more than (q+1 choose 2) per gap, i.e., at most (n/q + 1)·(q+1 choose 2) in total. Balancing both bounds by setting q = ⌈√n⌉, we get that the total number of halving edges is O(n^{3/2}) as claimed. □

Note that we have implicitly applied and proved a one-dimensional second selection lemma (Exercise 9.2.1).

Proof of Theorem 11.3.3. We consider an n-point X ⊂ R^d. We project X vertically into the coordinate hyperplane x_d = 0, obtaining a point set Y, which we regard as lying in R^{d−1}. If the coordinate system is chosen suitably, Y is in general position. Each halving facet of X projects to a (d−1)-dimensional Y-simplex in R^{d−1}; let F be the family of these Y-simplices. If we write |F| = α·(n choose d), then by the second selection lemma, there exists a point a contained in at least c·α^{s_{d−1}}·(n choose d) simplices of F.
Only at most O(n^{d−1}) of these contain a in their boundary, by Lemma 9.1.2, and the remaining ones have a in the interior. By the Lovász lemma (Corollary 11.3.2) applied to the vertical line in R^d passing through the point a, we thus get c·α^{s_{d−1}}/n^{d(s_{d−1}−1)} = O(n^{d−1}). We calculate that |F| = α = O(n^{d−1/s_{d−1}}) as claimed. □

Bibliography and remarks. The planar version of the Lovász lemma (Corollary 11.3.2) originated in Lovász [Lov71]; the proof implicitly contains the halving-facet interleaving lemma. A higher-dimensional version of the Lovász lemma appeared in Bárány, Füredi, and Lovász [BFL90]. Welzl [Wel01] proved an exact version of the Lovász lemma, as is outlined in Exercises 5 and 6 below. This is equivalent to the upper bound theorem for convex polytopes, via the Gale transform. The connection of k-facets and h-vectors of convex polytopes was noted earlier by several authors (Lee [Lee91], Clarkson [Cla93], and Mulmuley [Mul93b]), sometimes in a slightly different but essentially equivalent form. Using this correspondence and the generalized lower bound theorem mentioned in Section 5.5, Welzl also proved that the maximum total number of j-facets with j < k for an n-point set in R³ (or, equivalently, the maximum possible number of vertices of level at most k in an arrangement of n planes in general position in R³) is attained for a set in convex position, from which the exact maximum can be calculated. It also implies that in R³, a set in convex position minimizes the number of halving facets (triangles). An interesting connection of this result to another problem was discovered by Sharir and Welzl [SW01].
They quickly derived the following theorem, which was previously established by Pach and Pinchasi [PP01] by a difficult elementary proof: If R, B ⊂ R² are n-point sets ("red" and "blue") with R ∪ B in general position, then there are at least n balanced lines, where a line ℓ is balanced if |R ∩ ℓ| = |B ∩ ℓ| = 1 and on both sides of ℓ the number of red points equals the number of blue points (for odd n, the existence of at least one balanced line follows from the ham-sandwich theorem). A proof based on Welzl's result in R³ mentioned above is outlined in Exercise 4. Let us remark that conversely, the Pach–Pinchasi theorem implies the generalized lower bound theorem for (d+4)-vertex polytopes in R^d.

Exercises

1. (a) Prove the following version of the Lovász lemma in the planar case: For a set X ⊂ R² in general position, every vertical line ℓ intersects the interiors of at most k+1 of the k-edges.

(b) Using (a), prove the bound KFAC_2(n, k) = O(n√(k+1)) (without appealing to Theorem 11.1.1).

2. Let K ⊆ {1, 2, ..., ⌊n/2⌋}. Using Exercise 1, prove that for any n-point set X ⊂ R² in general position, the total number of k-edges with k ∈ K (or equivalently, the total number of vertices of levels k ∈ K in an arrangement of n lines) is at most O(n·√(Σ_{k∈K} k)). (Note that this is better than applying the bound KFAC_2(n, k) = O(n√k) for each k ∈ K separately.)

3. (Exact planar Lovász lemma) Let X ⊂ R² be a 2n-point set in general position, and let ℓ be a vertical line having k points of X on the left and 2n−k points on the right. Prove that ℓ crosses exactly min(k, 2n−k) halving edges of X.

4. Let X be a set of 2n+1 points in R³ in general position, and let p_1, p_2, ..., p_{2n+1} be the points of X listed by increasing height (z-coordinate).
(a) Using Exercise 3, check that if p_{k+1} is a vertex of conv(X), then there are exactly min(k, 2n−k) halving triangles having p_{k+1} as the middle-height vertex (that is, the triangle is p_i p_{k+1} p_j with i < k+1 < j).

(b) Prove that every (2n+1)-point convex independent set X ⊂ R³ in general position has at least n² halving triangles.

(c) Assuming that each (2n+1)-point set in R³ in general position has at least n² halving triangles (which follows from (b) and the result mentioned in the notes above about the number of halving triangles being minimized by a set in convex position), infer that if X = {p_1, ..., p_{2n+1}} ⊂ R³ is in general position, then for every k, there are always at least min(k, 2n−k) halving triangles having p_{k+1} as the middle-height vertex (even if p_{k+1} is not extremal in X).

(d) Derive from (c) the result about balanced lines mentioned in the notes to this section: If R, B ⊂ R² are n-point sets (red and blue points), with R ∪ B in general position, then there are at least n balanced lines ℓ (with |R ∩ ℓ| = |B ∩ ℓ| = 1 and such that on both sides of ℓ the number of red points equals the number of blue points). Embed R² as the z = 1 plane in R³ and use a central projection on the unit sphere in R³ centered at 0. See [SW01] for solutions and related results.

5. (Exact Lovász lemma) Let X ⊂ R^d be an n-point set in general position and let ℓ be a directed line disjoint from the convex hulls of all (d−1)-point subsets of X. We think of ℓ as being vertical and directed upwards. We say that ℓ enters a j-facet F if it passes through F from the positive side (the one with j points) to the negative side. Let h_j = h_j(ℓ, X) denote the number of j-facets entered by ℓ, j = 0, 1, ..., n−d. Further, let s_k(ℓ, X) be the number of (d+k)-element subsets S ⊆ X such that ℓ ∩ conv(S) ≠ ∅.

(a) Prove that for every X and ℓ as above, s_k = Σ_{j=k}^{n−d} \binom{j}{k} h_j.

(b) Use (a) to show that h_0, . . .
, h_{n−d} are uniquely determined by s_0, s_1, ..., s_{n−d}.

(c) Infer from (b) that if X′ is a set in general position obtained from X by translating each point in a direction parallel to ℓ, then h_j(ℓ, X) = h_j(ℓ, X′) for all j. Derive h_j = h_{n−d−j}.

(d) Prove that for every x ∈ X and all j, we have h_j(ℓ, X \ {x}) ≤ h_j(ℓ, X).

(e) Choose x ∈ X uniformly at random. Check that E[h_j(ℓ, X \ {x})] = ((n−d−j)/n)·h_j + ((j+1)/n)·h_{j+1}.

(f) From (d) and (e), derive h_{j+1} ≤ ((j+d)/(j+1))·h_j, and conclude the exact Lovász lemma:

h_j ≤ min{ \binom{j+d−1}{d−1}, \binom{n−j−1}{d−1} }.

6. (The upper bound theorem and k-facets) Let a = (a_1, a_2, ..., a_n) be a sequence of n > d+1 convex independent points in R^d in general position, and let P be the d-dimensional simplicial convex polytope with vertex set {a_1, ..., a_n}. Let g = (g_1, ..., g_n) be the Gale transform of a, with g_1, ..., g_n ∈ R^{n−d−1}, and let b_i be a point in R^{n−d} obtained from g_i by appending a number t_i as the last coordinate, where the t_i are chosen so that X = {b_1, ..., b_n} is in general position.

(a) Let ℓ be the x_{n−d}-axis in R^{n−d} oriented upwards, and let s_k = s_k(ℓ, X) and h_j = h_j(ℓ, X) be as in Exercise 5. Show that f_k(P) = s_{d−k−1}(ℓ, X), k = 0, 1, ..., d−1.

(b) Derive that h_j(P) = h_j(ℓ, X), j = 0, 1, ..., d, where h_j(P) is as at the end of Section 5.5, and thus (f) of the preceding exercise implies the upper bound theorem in the formulation with the h-vector (5.3).

If (a) and (b) are applied to the cyclic polytopes, we get equality in the bound for h_j in Exercise 5(f). In fact, the reverse passage (from an X ⊂ R^{n−d} in general position to a simplicial polytope in R^d) is possible as well (see [Wel01]), and so the exact Lovász lemma can also be derived from the upper bound theorem.

7. This exercise shows limits for what can be proved about k-sets using Corollary 11.3.2 alone.
(a) Construct an n-point set X ⊂ R² and a collection of Ω(n^{3/2}) segments with endpoints in X such that no line intersects more than O(n) of these segments.

(b) Construct an n-point set in R³ and a collection of Ω(n^{5/2}) triangles with vertices at these points such that no line intersects more than O(n²) triangles.

8. (The Dey–Edelsbrunner proof of HFAC_3(n) = O(n^{8/3})) Let X be an n-point set in R³ in general position (make a suitable general position assumption), and let T be a collection of t triangles with vertices at points of X. By a crossing we mean a pair (T, e), where T ∈ T is a triangle and e is an edge of another triangle from T, such that e intersects the interior of T in a single point (in particular, e is vertex-disjoint from T).

(a) Show that if t > Cn² for a suitable constant C, then two triangles sharing exactly one vertex intersect in a segment, and conclude that at least one crossing exists.

(b) Show that at least t − Cn² crossings exist.

(c) Show that for t > C′n², with C′ > C being a sufficiently large constant, at least Ω(t³/n⁴) crossings exist. Infer that there is an edge crossing Ω(t³/n⁶) triangles. (Proceed as in the proof of the crossing number theorem.)

(d) Use Corollary 11.3.2 to conclude that HFAC_3(n) = O(n^{8/3}).

11.4 A Better Upper Bound in the Plane

Here we prove an improved bound on the number of halving edges in the plane.

11.4.1 Theorem. The maximum possible number of halving edges of an n-point set in the plane is at most O(n^{4/3}).

Let X be an n-point set in the plane in general position, and let us draw all the halving edges as segments. In this way we get a drawing of a graph (the graph of halving edges) in the plane. Let deg(x) denote the degree of x in this graph, i.e., the number of halving edges incident to x, and let cr(X) denote the number of pairs of the halving edges that cross.
In the following example we have cr(X) = 2, and the degrees are (1, 1, 1, 1, 1, 3). Theorem 11.4.1 follows from the crossing number theorem (Theorem 4.3.1) and the following remarkable identity.

11.4.2 Theorem. For each n-point set X in the plane in general position, where n is even, we have

cr(X) + Σ_{x∈X} \binom{(deg(x)+1)/2}{2} = \binom{n/2}{2}.    (11.2)

Proof of Theorem 11.4.1. Theorem 11.4.2 implies, in particular, that cr(X) = O(n²). The crossing number theorem shows that cr(X) = Ω(t³/n²) − O(n), where t is the number of halving edges, and this implies t = O(n^{4/3}). □

Proof of Theorem 11.4.2. First we note that by the halving-facet interleaving lemma, deg(x) is odd for every x ∈ X, and so the expression ½(deg(x)+1) in the identity (11.2) is always an integer.

For the following arguments, we formally regard the set X as a sequence (x_1, x_2, ..., x_n). From Section 9.3 we recall the notion of orientation of a triple (x_i, x_j, x_k): Assuming i < j < k, the orientation is positive if we make a right turn when going from x_i to x_k via x_j, and it is negative if we make a left turn. The order type of X describes the orientations of all the triples (x_i, x_j, x_k), 1 ≤ i < j < k ≤ n. We observe that the order type uniquely determines the halving-edge graph: Whether {x_i, x_j} is a halving edge or not can be deduced from the orientations of the triples involving x_i and x_j. Similarly, the orientations of all triples determine whether two halving edges cross.

The theorem is proved by a continuous motion argument. We start with the given point sequence X, and we move its points continuously until we reach some suitable configuration X_0 for which the identity (11.2) holds. For example, X_0 can consist of n points in convex position, where we have n/2 halving edges and every two of them cross.
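The identity (11.2), cr(X) + Σ_{x∈X} \binom{(deg(x)+1)/2}{2} = \binom{n/2}{2}, can be sanity-checked numerically on the two configurations just mentioned. The following is our own small check (the helper name `lhs` is not from the text):

```python
from math import comb

def lhs(cr, degrees):
    """Left-hand side of (11.2): cr(X) plus the sum of C((deg(x)+1)/2, 2)."""
    return cr + sum(comb((d + 1) // 2, 2) for d in degrees)

# The six-point example: cr(X) = 2 and degrees (1, 1, 1, 1, 1, 3).
assert lhs(2, [1, 1, 1, 1, 1, 3]) == comb(6 // 2, 2)

# n points in convex position: the n/2 "long" diagonals are the halving
# edges, each vertex has degree 1, and every two halving edges cross.
for n in range(4, 21, 2):
    assert lhs(comb(n // 2, 2), [1] * n) == comb(n // 2, 2)
print("identity (11.2) holds on these configurations")
```

For convex position the degree sum contributes nothing (each binomial term is \binom{1}{2} = 0), so the identity reduces to counting the pairwise crossings of the n/2 long diagonals.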
The continuous motion transforming X into X_0 is such that the current sequence remains in general position, except for finitely many moments when exactly one triple (x_i, x_j, x_k) changes its orientation. The points x_i, x_j, x_k thus become collinear at such a moment, but we assume that they always remain distinct, and we also assume that no other collinearities occur at that moment. Let us call such a moment a mutation at {x_i, x_j, x_k}. We will investigate the changes of the graph of halving edges during the motion, and we will show that mutations leave the left-hand side of the identity (11.2) unchanged.

Both the graph and the crossings of its edges remain combinatorially unchanged between the mutations. Moreover, some thought reveals that by a mutation at {x, y, z}, only the halving edges with both endpoints among x, y, z and their crossings with other edges can be affected; all the other halving edges and crossings remain unchanged.

Let us first assume that {x, y} is a halving edge before the mutation at {x, y, z} and that z lies on the segment xy at the moment of collinearity.

Figure 11.1. Welzl's Little Devils.

We note that {x, z} and {y, z} cannot be halving edges before the mutation. After the mutation, {x, y} ceases to be halving, while {x, z} and {y, z} become halving. Let deg(z) = 2r+1 (before the mutation) and let h be the line passing through z and parallel to xy. The larger side of h, i.e., the one with more points of X, is the one containing x and y, and by the halving-facet interleaving lemma, r+1 of the halving edges emanating from z go into the larger side of h and thus cross xy. So the following changes in degrees and crossings are caused by the mutation:

• deg(z), which was 2r+1, increases by 2, and
• cr(X) decreases by r+1.

It is easy to check that the left-hand side of the identity (11.2) remains the same after this change: the term \binom{(deg(z)+1)/2}{2} = \binom{r+1}{2} becomes \binom{r+2}{2}, an increase of r+1, which exactly offsets the decrease in cr(X), while the degrees of x and y are unchanged (each loses one halving edge and gains one). What other mutations are possible?
One is the mutation inverse to the one discussed above, with z moving in the reverse direction. We show that there are no other types of mutations affecting the graph of halving edges. Indeed, for any mutation, the notation can be chosen so that z crosses over the segment xy. Just before the mutation or just after it, it is not possible for {x, z} to be a halving edge and {y, z} not. The last remaining possibility is a mutation with no halving edge on {x, y, z}, which leaves the graph unchanged. Theorem 11.4.2 is proved. □

Tight bounds for small n. Using the identity (11.2) and the fact that all vertices of the graph of halving edges must have odd degrees, one can determine the exact maximum number of halving edges for small point configurations (Exercise 1). Figure 11.1 shows examples of configurations with the maximum possible number of halving edges for n = 8, 10, and 12. These small examples seem to be misleading in various respects: For example, we know that the maximum number of halving edges is superlinear, and so the graph of halving edges cannot be planar for large n, and yet all the small pictures are planar.

Bibliography and remarks. Theorem 11.4.1 was first proved by Dey [Dey98], who discovered the surprising role of the crossings of the halving edges. His proof works partially in the dual setting, and it relies on a technique of decomposing the k-level in an arrangement into convex chains discussed in Agarwal et al. [AACS98]. The identity (11.2), with the resulting considerable simplification of Dey's proof, was found by Andrzejak et al. [AAHP+98]. They also computed the exact maximum number of halving edges up to n = 12 and proved results about k-facets and k-sets in dimension 3.

Improved upper bound for k-sets in R³. We outline the argument of Sharir et al. [SST01] proving that an n-point set X ⊂ R³ in general position has at most O(n^{5/2}) halving triangles.
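Before following the outline, it may help to see why the two counts of crossing pairs obtained in the argument combine to give the exponent 5/2. The following is our own sketch of the final calculation, with unspecified positive constants:

```latex
% The argument below establishes, for the number N of crossing pairs,
%   N \le c_3 n^4   and   N \ge c_1 t^2/n - c_2 t n .
% Combining the two bounds:
\[
  c_1 \frac{t^2}{n} - c_2\, t n \;\le\; c_3 n^4,
  \qquad\text{hence}\qquad
  t^2 \;\le\; \frac{c_3}{c_1}\, n^5 + \frac{c_2}{c_1}\, t n^2 .
\]
% If t \le (2c_2/c_1)\, n^2, we are done immediately. Otherwise the second
% term on the right is at most t^2/2, so t^2 \le (2c_3/c_1)\, n^5 and
\[
  t \;=\; O\!\left(n^{5/2}\right).
\]
```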
Let T be the set of halving triangles and let t = |T|. We will count the number N of crossing pairs of triangles in T in two ways, where a crossing pair looks like this: the triangles share one vertex p, and the edge of T_1 opposite to p intersects the interior of T_2.

The Lovász lemma (Corollary 11.3.2: no line intersects more than O(n²) halving triangles) implies N = O(n⁴). To see this, we first consider pairs (ℓ, T), where ℓ is a line spanned by two points p, q ∈ X, T ∈ T, and ℓ intersects the interior of T. Each of the \binom{n}{2} lines ℓ contributes at most O(n²) pairs, and each pair (ℓ, T) yields at most 3 crossing pairs of triangles, one for each vertex of T.

Now we are going to show that N = Ω(t²/n) − O(tn), which together with N = O(n⁴) implies t = O(n^{5/2}). Let ρ be a horizontal plane lying below all of X. For a point p ∈ X and a set A ⊂ R³, let Ā denote the central projection of A from p into ρ. To bound N from below, we consider each p ∈ X in turn, and we associate to it a graph G_p drawn in ρ. Let γ_p be the open half-space below the horizontal plane through p. The vertex set of the geometric graph G_p is V_p = (X ∩ γ_p)̄, the projections of the points below p. Let H_p ⊆ T be the set of the halving triangles having p as the highest vertex, and let M_p ⊆ T be the triangles with p as the middle-height vertex. Each T ∈ H_p contributes an edge of G_p, namely, the segment T̄. Each T ∈ M_p gives rise to an unbounded ray in G_p, namely, (T ∩ γ_p)̄.

Formally, we can interpret such a ray as an edge connecting a vertex q̄ ∈ V_p to a special vertex at infinity. Let m_p = |H_p| + |M_p| be the total number of edges of G_p, including the rays, and let r_p = |M_p| be the number of rays. Write x_p for the number of edge crossings in the drawing of G_p.
We have Σ_{p∈X} m_p = 2t and Σ_{p∈X} r_p = t, because each T ∈ T contributes to one H_p and one M_p. We note that N ≥ Σ_{p∈X} x_p, since an edge crossing in G_p corresponds to a crossing pair of triangles with a common vertex p.

A lower bound for x_p is obtained using a decomposition of G_p into convex chains, which is an idea from Agarwal et al. [AACS98] (used in Dey's original proof of the O(n^{4/3}) bound for planar halving edges). We fix a vertical direction in ρ so that no edges of G_p are vertical. Each convex chain is a contiguous sequence of (bounded or unbounded) edges of G_p that together form the graph of a convex function defined on an interval. Each edge lies in exactly one convex chain. Let e be an edge of G_p whose right end is a (finite) vertex v. We specify how the convex chain containing e continues to the right of v: It follows an edge e′ going from v to the right and turning upwards with respect to e but as little as possible. If there is no e′ like this, then the considered chain ends at v.

By the halving-facet interleaving lemma, the fan of edges emanating from v has an "antipodal" structure: For every two angularly consecutive edges, the opposite wedge contains exactly one edge. This implies that e is uniquely determined by e′, and so we have a well-defined decomposition of the edges of G_p into convex chains. Moreover, exactly one convex chain begins or ends at each vertex. Thus, the number c_p of chains equals ½(n_p + r_p), where n_p is the number of vertices of G_p.

A lower bound for the number of edge crossings x_p is the number of pairs {C_1, C_2} of chains such that an edge of C_1 crosses an edge of C_2. The trick is to estimate the number of pairs {C_1, C_2} that do not cross in this way.
There are two possibilities for such pairs: C_1 and C_2 can be disjoint or they can cross at a vertex. The number of pairs {C_1, C_2} crossing at a vertex is at most m_p·n_p, because the edge e_1 of C_1 entering the crossing determines both C_1 and the crossing vertex, and C_2 can be specified by choosing one of the at most n_p edges incident to that vertex. Finally, suppose that C_1 and C_2 are disjoint and C_2 is above C_1. If we fix an edge e_1 of C_1, then C_2 is determined by the vertex where the line parallel to e_1 translated upwards first hits C_2. We obtain x_p ≥ \binom{c_p}{2} − 2m_p·n_p, and a calculation leads to N ≥ Σ_p x_p = Ω(t²/n) − O(nt). This concludes the proof of the O(n^{5/2}) bound for halving facets in R³.

Having already introduced the decomposition of the graph of halving edges into convex chains as above, one can give an extremely simple alternative proof of Theorem 11.4.1. Namely, the graph of halving edges is decomposed into at most n convex chains and, similarly, into at most n concave chains. Any convex chain intersects any concave chain in at most 2 points, and it follows that the number of edge crossings in the graph of halving edges is O(n²). The application of the crossing number theorem finishes the proof.

Exercises

1. (a) Find the maximum possible number of halving edges for n = 4 and n = 6, and construct the corresponding configurations.

(b) Check that the three graphs in Figure 11.1 are graphs of halving edges of the depicted point sets.

(c) Show that the configurations in Figure 11.1 maximize the number of halving edges.

12 Two Applications of High-Dimensional Polytopes

From this chapter on, our journey through discrete geometry leads us to the high-dimensional world. Up until now, although we have often been considering geometric objects in arbitrary dimension, we could mostly rely on the intuition from the familiar dimensions 2 and 3.
In the present chapter we can still use dimensions 2 and 3 to picture examples, but these tend to be rather trivial. For instance, in the first section we are going to prove things about graphs via convex polytopes, and for an n-vertex graph we need to consider an n-dimensional polytope. It is clear that graphs with 2 or 3 vertices cannot serve as very illuminating examples. In order to underline this shift to high dimensions, from now on we mostly denote the dimension by n instead of d as before, in agreement with the habits prevailing in the literature on high-dimensional topics.

In the first and third sections we touch upon polyhedral combinatorics. Let E be a finite set, for example the edge set of a graph G, and let F be some interesting system of subsets of E, such as the set of all matchings in G or the set of all Hamiltonian circuits of G. In polyhedral combinatorics one usually considers the convex hull of the characteristic vectors of the sets of F; the characteristic vectors are points of {0, 1}^E ⊂ R^E. For the two examples above, we thus obtain the matching polytope of G and the traveling salesman polytope of G. The basic problem of polyhedral combinatorics is to find, for a given F, inequalities describing the facets of the resulting polytope. Sometimes one succeeds in describing all facets, as is the case for the matching polytope. This may give insights into the combinatorial structure of F, and often it has algorithmic consequences. If we know the facets and they have a sufficiently nice structure, we can optimize any linear function over the polytope in polynomial time. This means that given some real weights of the elements of E, we can find in polynomial time a maximum-weight set in F (e.g., a maximum-weight matching). In other cases, such as for the traveling salesman polytope, describing all facets is beyond reach.
The knowledge of some facets may still yield interesting consequences, and on the practical side, it can provide a good approximation algorithm for the maximum-weight set. Indeed, the largest traveling salesman problems solved in practice, with thousands of vertices, have been attacked by these methods.

We do not treat polyhedral combinatorics in any systematic manner; rather we focus on two gems (partially) belonging to this area. The first one is the celebrated weak perfect graph conjecture, stating that the complement of any perfect graph is perfect, which is proved by combining combinatorial and polyhedral arguments. The second one is an algorithmically motivated problem of sorting with partial information, discussed in Section 12.3. We associate a polytope with every finite partially ordered set, and we reduce the question to slicing the polytope into two parts of roughly equal volume by a hyperplane. A key role in this proof is played by the Brunn–Minkowski inequality. This fundamental geometric inequality is explained and proved in Section 12.2.

12.1 The Weak Perfect Graph Conjecture

First we recall a few notions from graph theory. Let G = (V, E) be a finite undirected graph on n vertices. By G̅ we denote the complement of G, that is, the graph (V, \binom{V}{2} \ E). An induced subgraph of G is any graph that can be obtained from G by deleting some vertices and all edges incident to the deleted vertices (but an edge must not be deleted if both of its vertices remain in the graph). Let ω(G) denote the clique number of G, which is the maximum size of a complete subgraph of G, and let α(G) = ω(G̅) be the independence number of G. Explicitly, α(G) is the maximum size of an independent set in G, where a set S ⊆ V(G) is independent if the subgraph induced by S in G has no edges. The chromatic number of G is the smallest number of independent sets covering all vertices of G, and it is denoted by χ(G).
Both the problems of finding ω(G) and finding χ(G) are computationally hard. It is NP-complete to decide whether ω(G) ≥ k, where k is a part of the input, and it is NP-complete to decide whether χ(G) = 3. Even approximating χ(G) or ω(G) is hard. So classes of graphs where the clique number and/or the chromatic number are computationally tractable are of great interest. Perfect graphs are one of the most important such classes, and they include many other classes found earlier.

A graph G = (V, E) is called perfect if ω(G′) = χ(G′) for every induced subgraph G′ of G (including G′ = G). For every graph G we have χ(G) ≥ ω(G), so a high clique number is a "reason" for a high chromatic number. But in general it is not the only possible reason, since there are graphs with ω(G) = 2 but χ(G) arbitrarily large.

Perfect graphs are those whose chromatic number is exclusively controlled by the cliques, and this is true for G and also for all of its induced subgraphs. For perfect graphs, the clique number, and hence also the chromatic number, can be computed in polynomial time by a sophisticated algorithm (related to semidefinite programming, briefly discussed in Section 15.5). It is not known how hard it is to decide perfectness of a given graph. No polynomial-time algorithm has been found, but neither has any hardness result (such as coNP-hardness) been proved. But for graphs arising in many applications we know in advance that they are perfect.

Typical nonperfect graphs are the odd cycles C_{2k+1} of length 5 and larger, since ω(C_{2k+1}) = 2 for k ≥ 2, while χ(C_{2k+1}) = 3. The following two conjectures were formulated by Berge at early stages of research on perfect graphs. Here is the stronger one:

Strong perfect graph conjecture. A graph G is perfect if and only if neither G nor its complement contains an odd cycle of length 5 or larger as an induced subgraph.

This is still open, in spite of considerable effort.
The second conjecture is this:

Weak perfect graph conjecture. A graph is perfect if and only if its complement is perfect.

This was proved in 1972. We reproduce a proof using convex polytopes.

12.1.1 Definition. Let G = (V, E) be a graph on n vertices. We assign a convex polytope P(G) ⊆ R^n to G. Let the coordinates in R^n be indexed by the vertices of G; i.e., if V = {v_1, ..., v_n}, then the points of P(G) are of the form x = (x_{v_1}, ..., x_{v_n}). For an x ∈ R^n and a subset U ⊆ V, we put x(U) = Σ_{v∈U} x_v. The polytope P(G) is defined by the following inequalities:

(i) x_v ≥ 0 for each vertex v ∈ V, and
(ii) x(K) ≤ 1 for each clique (complete subgraph) K in the graph G.

Observations.
• P(G) ⊆ [0, 1]^n. The inequality x_v ≤ 1 is obtained from (ii) by choosing K = {v}.
• The characteristic vector of each independent set lies in P(G).
• If a vector x ∈ P(G) is integral (i.e., it is a 0/1 vector), then it is the characteristic vector of an independent set.

Before we start proving the weak perfect graph conjecture, let us introduce some more notation. Let w: V → {0, 1, 2, ...} be a function assigning nonnegative integer weights to the vertices of G. We define the weighted clique number ω(G, w) as the maximum possible weight of a clique, where the weight of a clique is the sum of the weights of its vertices. We also define the weighted chromatic number χ(G, w) as the minimum number of independent sets such that each vertex v ∈ V is covered by w(v) of them.

Now we can formulate the main theorem.

12.1.2 Theorem. The following conditions are equivalent for a graph G:
(i) G is perfect.
(ii) ω(G, w) = χ(G, w) for any nonnegative integral weight function w.
(iii) All vertices of the polytope P(G) are integral (and thus correspond to the independent sets in G).
(iv) The graph G̅ is perfect.

Proof of (i) ⇒ (ii). This part is purely graph-theoretic.
For every weight function w: V → {0, 1, 2, ...}, we need to exhibit a covering of V by independent sets witnessing χ(G, w) = ω(G, w). If w attains only values 0 and 1, then we can use (i) directly, since selecting an induced subgraph of G is the same as specifying a 0/1 weight function on the vertices. For other values of w we proceed by induction on w(V).

Let w be given and let v_0 be a vertex with w(v_0) ≥ 1. We define a new weight function w′ by w′(v) = w(v) − 1 for v = v_0 and w′(v) = w(v) for v ≠ v_0. Since w′(V) < w(V), by the inductive hypothesis we may assume that we have independent sets I_1, I_2, ..., I_N covering each v exactly w′(v) times, where N = ω(G, w′). If ω(G, w) > N, then we can obtain the appropriate covering for w by adding the independent set {v_0}, so let us suppose ω(G, w) = N. Let the notation be chosen so that v_0 ∈ I_1. We define another weight function w″ by w″(v) = w(v) − 1 for v ∈ I_1 and w″(v) = w(v) for v ∉ I_1.

We claim that ω(G, w″) < N. If not, then there exists a clique K with w″(K) = N = ω(G, w″). By the choice of the I_i, we have N ≤ w′(K) = Σ_{i=1}^{N} |I_i ∩ K|. Since a clique intersects an independent set in at most one vertex, K has to intersect each I_i. In particular, it intersects I_1, and so w(K) > w″(K) = N, contradicting ω(G, w) = N.

We thus have ω(G, w″) < N. By the inductive hypothesis, we can produce a covering by independent sets showing that χ(G, w″) < N. By adding I_1 to it we obtain a covering witnessing χ(G, w) = N.

Proof of (ii) ⇒ (iii). Let x = (x_{v_1}, ..., x_{v_n}) be a vertex of the convex polytope P(G). Since all the inequalities defining P(G) have rational coefficients, x has rational coordinates, and we can find a natural number q such that w = qx is an integral vector. We interpret the coordinates of w as weights of the vertices of G. Let K be a clique with weight N = ω(G, w). One of the inequalities defining P(G) is x(K) ≤ 1, and hence N = w(K) ≤ q.

By (ii) we have χ(G, w) = ω(G, w) ≤ q, and so there are independent sets I_1, ..., I_q (some of them may be empty) covering each vertex v ∈ V precisely w_v times. Let c_i be the characteristic vector of I_i; then this property of the sets I_i can be written as x = (1/q)·Σ_{i=1}^{q} c_i. Thus x is a convex combination of the c_i, and since it is a vertex of P(G), it must be equal to some c_i, which is a characteristic vector of an independent set in G.

Proof of (iii) ⇒ (iv). It suffices to prove χ(G̅) = ω(G̅) for every G satisfying (iii), since (iii) is preserved by passing to an induced subgraph (right?). We prove that a graph G fulfilling (iii) has a clique K intersecting all independent sets of the maximum size α(G). Then the graph G \ K has independence number α(G) − 1, and by repeating the same procedure we can cover G by α(G) cliques; since cliques of G are independent sets of G̅, this shows χ(G̅) ≤ α(G) = ω(G̅).

To find the required K, let us consider all the independent sets of size α = α(G) in G and let M ⊆ P(G) be the convex hull of their characteristic vectors. We note that M lies in the hyperplane h = {x : x(V) = α}. This h defines a (proper) face of P(G), for otherwise, we would have vertices of P(G) on both sides of h, and in particular, there would be a vertex z with z(V) > α. This is impossible, since by (iii), z would correspond to an independent set bigger than α.

Each facet of P(G) corresponds to an equality in some of the inequalities defining P(G). The equality can be either of the form x_v = 0 or of the form x(K) = 1. The face F = P(G) ∩ h is the intersection of some of the facets. Not all of these facets can be of the type x_v = 0, since then their intersection would contain 0, while 0 ∉ h. Hence all x ∈ M satisfy x(K) = 1 for a certain clique K, and this means that K ∩ I ≠ ∅ for each independent set I of size α.

Proof of (iv) ⇒ (i). This is the implication (i) ⇒ (iv) for the graph G̅. □

Bibliography and remarks.
Perfect graphs were introduced by Berge [Ber61], [Ber62], who also formulated the two perfect graph conjectures. The weak perfect graph conjecture was first proved (combinatorially) by Lovász [Lov72]. The proof shown in this section follows Grötschel, Lovász, and Schrijver [GLS88], whose account is based on the ideas of [Lov72] and of Fulkerson [Ful70].

Grötschel et al. [GLS88] denote the polytope P(G) by QSTAB(G) and call it the clique-constrained stable set polytope (another name in the literature is the fractional stable set polytope). Here stable set is another common name for an independent set, and the stable set polytope STAB(G) ⊆ ℝ^V is the convex hull of the characteristic vectors of all independent sets in G. As we have seen, STAB(G) = QSTAB(G) if and only if G is a perfect graph. Polynomial-time algorithms for perfect graphs are based on beautiful geometric ideas (related to the famous Lovász ϑ-function), and they are presented in [GLS88] or in Lovász [Lov] (as well as in many other sources).

Chapter 12: Two Applications of High-Dimensional Polytopes

Polyhedral combinatorics was initiated mainly by the results of Edmonds [Edm65]. For a graph G = (V, E), let M(G) denote the matching polytope of G, that is, the convex hull of the characteristic vectors of the matchings in G. According to Edmonds' matching polytope theorem, M(G) is described by the following inequalities: x_e ≥ 0 for all e ∈ E, Σ_{e ∈ E: v ∈ e} x_e ≤ 1 for all v ∈ V, and Σ_{e ∈ E: e ⊆ S} x_e ≤ (|S| − 1)/2 for all S ⊆ V of odd cardinality. For bipartite G, the constraints of the last type are not necessary (this is an older result of Birkhoff).

A modern textbook on combinatorial optimization, with an introduction to polyhedral combinatorics, is Cook, Cunningham, Pulleyblank, and Schrijver [CCPS98]. It also contains references to theoretical and practical studies of the traveling salesman problem by polyhedral methods.
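Exercise 1 below, on the polytope P(C₅), can be checked by a short computation. The following sketch (my own illustration, not from the book; Python used as the illustration language) verifies that the all-1/2 vector is a nonintegral vertex of P(C₅) = QSTAB(C₅), and that it is not a convex combination of characteristic vectors of independent sets, so STAB(C₅) ≠ QSTAB(C₅):

```python
from fractions import Fraction
from itertools import combinations

# 5-cycle C_5: vertices 0..4, edges {i, i+1 mod 5}.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

# Maximal cliques of C_5 are exactly its edges, so P(C_5) = QSTAB(C_5)
# is given by x_i >= 0 and x_i + x_j <= 1 for every edge {i, j}.
# Candidate vertex: the all-1/2 point, where all five edge inequalities are tight.
x = [Fraction(1, 2)] * n
assert all(x[i] + x[j] == 1 for i, j in edges)

# The five tight constraints x_i + x_{i+1} = 1 have coefficient matrix M;
# a nonzero determinant shows they determine a unique point, i.e., x is a vertex.
M = [[1 if k in e else 0 for k in range(n)] for e in edges]

def det(m):
    # cofactor expansion along the first row (fine for a 5x5 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

assert det(M) == 2  # nonzero: the all-1/2 point is a (nonintegral) vertex

# Every independent set of C_5 has at most 2 vertices, so every 0/1 point
# of P(C_5) has coordinate sum <= 2 < 5/2: the vertex above is not a convex
# combination of independent sets.
def independent(S):
    return not any(i in S and j in S for i, j in edges)

max_ind = max(len(S) for r in range(n + 1)
              for S in combinations(range(n), r) if independent(S))
assert max_ind == 2
```

The same brute-force pattern works for any small graph whose maximal cliques are known.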
A key step in many results of polyhedral combinatorics is proving that a certain system of inequalities defines an integral polytope, i.e., one with all vertices in ℤ^n. Let us mention just one important related concept: total unimodularity. An m × n matrix A is totally unimodular if every square submatrix of A has determinant 0, 1, or −1. Total unimodularity can be tested in polynomial time (using a deep characterization theorem of Seymour). All polyhedra defined by totally unimodular matrices are integral, in the sense formulated in Exercise 6. For other aspects of integral polytopes (sometimes also called lattice polytopes) see, e.g., Barvinok [Bar97] (and Section 2.2).

Exercises

1. What are the integral vertices of the polytope P(C₅)? Find some nonintegral vertex (and prove that it is really a vertex!).

2. Prove that for every graph G and every clique K in G, the inequality x(K) ≤ 1 defines a facet of the polytope P(G). In other words, there is an x ∈ P(G) for which x(K) = 1 is the only inequality among those defining P(G) that is satisfied with equality.

3. (On König's edge-covering theorem) Explain why bipartite graphs are perfect, and why the perfectness of the complements of bipartite graphs is equivalent to König's edge-covering theorem asserting that the maximum number of vertex-disjoint edges in a bipartite graph equals the minimum number of vertices needed to intersect all edges (also see Exercise 10.1.5).

4. (Comparability graphs and Dilworth's theorem) For a finite partially ordered set (X, ⪯) (see Section 12.3 for the definition), let G = (X, E) be the graph with E = {{u, v}: u ≠ v, and u ⪯ v or v ⪯ u}; that is, edges correspond to pairs of comparable elements. Any graph isomorphic to such a G is called a comparability graph. We also need the notions of a
chain (a subset of X linearly ordered by ⪯) and an antichain (a subset of X with no two elements comparable under ⪯).

(a) Prove that any finite (X, ⪯) is the union of at most c antichains, where c is the length of the longest chain, and check that this implies the perfectness of comparability graphs.

(b) Derive from (a) the Erdős–Szekeres lemma: If a₁, a₂, …, a_n are arbitrary real numbers, then there exist indices i₁ < i₂ < ⋯ < i_k with k² ≥ n and such that the subsequence a_{i₁}, a_{i₂}, …, a_{i_k} is monotone (nondecreasing or nonincreasing).

(c) Check that the perfectness of the complements of comparability graphs is equivalent to the following theorem of Dilworth [Dil50]: Any finite (X, ⪯) is the union of at most α chains, where α is the maximum number of elements of an antichain.

5. (Hoffman's characterization of polytope integrality) Let P be a (bounded) convex polytope in ℝ^n such that for every a ∈ ℤ^n, the minimum of the function x ↦ ⟨a, x⟩ over all x ∈ P is an integer. Prove that all vertices of P are integral (i.e., they belong to ℤ^n).

6. (Kruskal–Hoffman theorem)
(a) Show that if A is a nonsingular n × n totally unimodular matrix (all square submatrices have determinant 0 or ±1), then the mapping x ↦ Ax maps ℤ^n bijectively onto ℤ^n.
(b) Show that if A is an m × n totally unimodular matrix and b is an m-dimensional integer vector such that the system Ax = b has a real solution x, then Ax = b has an integral solution as well.
(c) Let A be an m × n totally unimodular matrix and let u, v ∈ ℤ^n and w, z ∈ ℤ^m be integer vectors. Show that all vertices of the convex polyhedron given by the inequalities u ≤ x ≤ v and w ≤ Ax ≤ z are integral.

7. (Helly-type theorem for lattice points in convex sets)
(a) Let A be a set of 2^d + 1 points in ℤ^d. Prove that there are a, b ∈ A with (a + b)/2 ∈ ℤ^d.
(b) Let γ₁, …, γ_n be closed half-spaces in ℝ^d, n ≥ 2^d + 1, and suppose that the intersection of every 2^d of them contains a lattice point (a point of ℤ^d). Prove that there exists a lattice point common to all the γ_i.
(c) Prove that the number 2^d in (b) is the best possible, i.e., there are 2^d half-spaces such that every 2^d − 1 of them have a common lattice point but there is no lattice point common to all of them.
(d) Extend the Helly-type theorem in (b) to arbitrary convex sets instead of half-spaces.

The result in (d) was proved by Doignon [Doi73]; his proof starts with (a) and proceeds on the level of abstract convexity (while the proof suggested in (b) is more geometric).

12.2 The Brunn–Minkowski Inequality

Let us consider a 3-dimensional convex loaf of bread and slice it by three parallel planar cuts. As we will derive below, the middle cut cannot have area smaller than both of the other two cuts. Let us choose the coordinate system so that the cuts are perpendicular to the x₁-axis, and denote by v(t) the area of the cut by the plane x₁ = t. Then the claim can be stated as follows: For any t₁ < t < t₂ we have v(t) ≥ min(v(t₁), v(t₂)). Thus, there is some t₀ such that the function t ↦ v(t) is nondecreasing on (−∞, t₀] and nonincreasing on [t₀, ∞). Such a function is called unimodal.

A similar result is true for any convex body C in ℝ^(n+1) if v(t) denotes the n-dimensional volume of the intersection of C with the hyperplane {x₁ = t}. How can one prove such a statement? In the planar case, with n = 1, it is easy to see that v(t) is a concave function on the interval obtained by projecting C on the x₁-axis.

[Figure: a planar convex body C and the graph of its slice-length function v(t).]

This might tempt one to think that v(t) is concave on the appropriate interval in higher dimension, too, but this is false in general! (See Exercise 1.) There is concavity in the game, but the right function to look at in ℝ^(n+1) is v(t)^(1/n).
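The failure of concavity for v(t) itself, and the concavity of v(t)^(1/n), can be seen in a two-line computation. In the sketch below (my own example, not the book's) the body is the frustum conv(S₀ ∪ S₁) ⊆ ℝ³ spanned by a unit square S₀ in the plane x₁ = 0 and a concentric side-3 square S₁ in the plane x₁ = 1, so that the slice at x₁ = t is a square of side 1 + 2t:

```python
# Slice areas of the frustum conv(S0 ∪ S1) in R^3: v(t) = (1 + 2t)^2, n = 2.
def v(t):
    return (1 + 2 * t) ** 2

t1, t2 = 0.0, 1.0
mid = (t1 + t2) / 2

# v itself is NOT concave: the midpoint value lies below the chord...
assert v(mid) < (v(t1) + v(t2)) / 2

# ...but v^(1/n) with n = 2 is concave (here even linear), as Brunn's
# inequality below predicts: v(t)^(1/2) = 1 + 2t.
f = lambda t: v(t) ** 0.5
assert abs(f(mid) - (f(t1) + f(t2)) / 2) < 1e-12

# v is nevertheless unimodal: the middle slice is at least the smaller end slice.
assert v(mid) >= min(v(t1), v(t2))
```

Here v(0.5) = 4 while the chord value is (1 + 9)/2 = 5, so concavity of v fails even for this very tame convex body.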
Perhaps a little more intuitively, we can define r(t) as the radius of the n-dimensional ball whose volume equals v(t). We have r(t) = R_n v(t)^(1/n), where R_n is the radius of a ball of unit volume in ℝ^n; let us call r(t) the equivalent radius of C at t.

12.2.1 Theorem (Brunn's inequality for slice volumes). Let C ⊆ ℝ^(n+1) be a compact convex body and let the interval [t_min, t_max] be the projection of C on the x₁-axis. Then the equivalent radius function r(t) (or, equivalently, the function v(t)^(1/n)) is concave on [t_min, t_max]. Consequently, for any t₁ < t < t₂ we have v(t) ≥ min(v(t₁), v(t₂)).

Brunn's inequality is a consequence of the following more general and more widely applicable statement dealing with two arbitrary compact sets.

12.2.2 Theorem (Brunn–Minkowski inequality). Let A and B be nonempty compact sets in ℝ^n. Then

vol(A + B)^(1/n) ≥ vol(A)^(1/n) + vol(B)^(1/n).

Here A + B = {a + b: a ∈ A, b ∈ B} denotes the Minkowski sum of A and B. If A' is a translated copy of A, and B' a translated copy of B, then A' + B' is a translated copy of A + B. So the position of A + B with respect to A and B depends on the choice of the coordinate system, but the shape of A + B does not. One way of interpreting the Minkowski sum is as follows: Keep A fixed, pick a point b₀ ∈ B, and translate B into all possible positions for which b₀ lies in A. Then A + B is the union of all such translates. Here is a planar example:

[Figure: the Minkowski sum of two planar sets.]

Sometimes it is also useful to express the Minkowski sum A + B as the projection of the Cartesian product A × B ⊆ ℝ^(2n) under the mapping (x, y) ↦ x + y, x, y ∈ ℝ^n.

Proof of Brunn's inequality for slice volumes from the Brunn–Minkowski inequality. First we consider "convex combinations" of sets A, B ⊆ ℝ^n of the form (1−t)A + tB, where t ∈ [0, 1] and where tA stands for {ta: a ∈ A}. As t goes from 0 to 1, (1−t)A + tB changes shape continuously from A to B.
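For finite point sets the Minkowski sum can be computed directly, which makes the translation property just mentioned easy to check. A small sketch (finite sets standing in for compact bodies; my illustration, not the book's):

```python
# Minkowski sum of finite planar point sets, with exact integer arithmetic.
def minkowski_sum(A, B):
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def translate(S, t):
    return {(p[0] + t[0], p[1] + t[1]) for p in S}

A = {(0, 0), (1, 0), (0, 1)}          # vertex set of a triangle
B = {(0, 0), (2, 0), (0, 2), (2, 2)}  # vertex set of a square

S = minkowski_sum(A, B)

# Translating A and B only translates A + B: the shape does not change.
u, v = (5, -1), (3, 7)
assert minkowski_sum(translate(A, u), translate(B, v)) == \
       translate(S, (u[0] + v[0], u[1] + v[1]))
```

For convex polygons, the Minkowski sum of the vertex sets contains the vertices of the sum polygon, which is how such sums are computed in practice.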
Now, if A and B are both convex and we place them into ℝ^(n+1) so that A lies in the hyperplane {x₁ = 0} and B in the hyperplane {x₁ = 1}, it is not difficult to check that (1−t)A + tB is the slice of the convex body conv(A ∪ B) by the hyperplane {x₁ = t}; see Exercise 2:

[Figure: the slices x₁ = 0, x₁ = t, and x₁ = 1 of conv(A ∪ B).]

Let us consider the situation as in Brunn's inequality, where C ⊆ ℝ^(n+1) is a convex body. Let A and B be the slices of C by the hyperplanes {x₁ = t₁} and {x₁ = t₂}, respectively, where t₁ < t₂ are such that A, B ≠ ∅. For convenient notation, we change the coordinate system so that t₁ = 0 and t₂ = 1. To prove the concavity of the function v(t)^(1/n) in Brunn's inequality, we need to show that for all t ∈ (0, 1),

(1−t) vol(A)^(1/n) + t vol(B)^(1/n) ≤ vol(M)^(1/n),   (12.1)

where M is the slice of C by the hyperplane h_t = {x₁ = t}. Let C' = conv(A ∪ B) and M' = C' ∩ h_t. We have C' ⊆ C and M' ⊆ M. By the remark above, M' = (1−t)A + tB, and so the Brunn–Minkowski inequality applied to the sets (1−t)A and tB yields

vol(M)^(1/n) ≥ vol(M')^(1/n) = vol((1−t)A + tB)^(1/n) ≥ vol((1−t)A)^(1/n) + vol(tB)^(1/n) = (1−t) vol(A)^(1/n) + t vol(B)^(1/n).

This verifies (12.1). □

Proof of the Brunn–Minkowski inequality. The idea of this proof is simple but perhaps surprising in this context. Call a set A ⊆ ℝ^n a brick set if it is a union of finitely many closed axis-parallel boxes with disjoint interiors. First we show that it suffices to prove the inequality for brick sets (which is easy but a little technical), and then for brick sets the proof goes by induction on the number of bricks.

12.2.3 Lemma. If the Brunn–Minkowski inequality holds for all nonempty brick sets A', B' ⊆ ℝ^n, then it is valid for all nonempty compact sets A, B ⊆ ℝ^n as well.

Proof. We use a basic fact from measure theory, namely, that if X₁ ⊇ X₂ ⊇ X₃ ⊇ ⋯ is a sequence of measurable sets in ℝ^n such that X = ⋂_{i=1}^∞ X_i, then the numbers vol(X_i) converge to vol(X).
Let A, B ⊆ ℝ^n be nonempty and compact. For k = 1, 2, …, consider the closed axis-parallel cubes with side length 2^(−k) centered at the points of the scaled grid 2^(−k)ℤ^n (these cubes cover ℝ^n and have disjoint interiors). Let A_k be the union of all such cubes intersecting the set A, and similarly for B_k. We have A₁ ⊇ A₂ ⊇ ⋯ and ⋂_k A_k = A (since any point not belonging to A has a positive distance from it, and the distance of any point of A_k from A is at most 2^(−k)√n). Therefore, vol(A_k) → vol(A) and vol(B_k) → vol(B).

We claim that A + B ⊇ ⋂_k (A_k + B_k). To see this, let x ∈ A_k + B_k for all k. We pick y_k ∈ A_k and z_k ∈ B_k with x = y_k + z_k, and by passing to convergent subsequences we may assume that y_k → y ∈ A and z_k → z ∈ B. Then we obtain x = y + z ∈ A + B. Thus lim_{k→∞} vol(A_k + B_k) ≤ vol(A + B). By the Brunn–Minkowski inequality for the brick sets A_k, B_k, we have

vol(A + B)^(1/n) ≥ lim_{k→∞} vol(A_k + B_k)^(1/n) ≥ lim_{k→∞} (vol(A_k)^(1/n) + vol(B_k)^(1/n)) = vol(A)^(1/n) + vol(B)^(1/n). □

Proof of the Brunn–Minkowski inequality for brick sets. Let A and B be brick sets consisting of k bricks in total. If k = 2, then both A and B, and A + B too, are bricks. Then if x₁, …, x_n are the sides of A and y₁, …, y_n are the sides of B, it suffices to establish the inequality

(∏_{i=1}^n x_i)^(1/n) + (∏_{i=1}^n y_i)^(1/n) ≤ (∏_{i=1}^n (x_i + y_i))^(1/n);

we leave this to Exercise 3.

Now let k > 2 and suppose that the Brunn–Minkowski inequality holds for all pairs A, B of brick sets together consisting of fewer than k bricks. Let A and B together have k bricks, and let the notation be chosen so that A has at least two bricks. Then it is easily seen that there exists a hyperplane h parallel to some of the coordinate hyperplanes and with at least one full brick of A on one side and at least one full brick of A on the other side (Exercise 4). By a suitable choice of the coordinate system, we may assume that h is the hyperplane {x₁ = 0}.
Let A' be the part of A on one side of h and A'' the part on the other side. More precisely, A' is the closure of A ∩ h⁺, where h⁺ is the open half-space {x₁ > 0}, and similarly, A'' is the closure of A ∩ h⁻, where h⁻ = {x₁ < 0}. Hence both A' and A'' have at least one brick fewer than A.

Next, we translate the set B in the x₁-direction in such a way that the hyperplane h divides its volume in the same ratio as A is divided (translation does not influence the validity of the Brunn–Minkowski inequality). Let B' and B'' be the respective parts of B.

[Figure: the brick sets A and B divided by the hyperplane h into A', A'' and B', B''.]

Putting p = vol(A')/vol(A), we also have p = vol(B')/vol(B). (If vol(A) = 0 or vol(B) = 0, then the Brunn–Minkowski inequality is obvious.) The sets A' and B' together have fewer than k bricks, so we can use the inductive assumption for them, and similarly for A'', B''. The set A' + B' is contained in the closed half-space {x₁ ≥ 0}, and A'' + B'' lies in the opposite closed half-space {x₁ ≤ 0}. Therefore, crucially,

vol(A + B) ≥ vol(A' + B') + vol(A'' + B'').

We calculate

vol(A + B) ≥ vol(A' + B') + vol(A'' + B'')
 ≥ [vol(A')^(1/n) + vol(B')^(1/n)]^n + [vol(A'')^(1/n) + vol(B'')^(1/n)]^n   (induction)
 = [p^(1/n) vol(A)^(1/n) + p^(1/n) vol(B)^(1/n)]^n + [(1−p)^(1/n) vol(A)^(1/n) + (1−p)^(1/n) vol(B)^(1/n)]^n
 = (p + (1−p)) [vol(A)^(1/n) + vol(B)^(1/n)]^n
 = [vol(A)^(1/n) + vol(B)^(1/n)]^n.

This concludes the proof of the Brunn–Minkowski inequality. □

Bibliography and remarks. Brunn's inequality for slice volumes appears in Brunn's dissertation from 1887 and in his Habilitationsschrift from 1889. Minkowski's formulation of Theorem 12.2.2 (proved for convex sets) was published in the 1910 edition of his book [Min96]. A proof for arbitrary compact sets was given by Lusternik in 1935; see, e.g., Sangwine-Yager [SY93] for references.

The proof of the Brunn–Minkowski inequality presented here follows Appendix III in Milman and Schechtman [MS86]. Several other proofs are known.
A modern one, explained in Ball [Bal97], derives a more general inequality dealing with functions. Namely, if t ∈ (0, 1) and f, g, and h are nonnegative measurable functions ℝ^n → ℝ such that h((1−t)x + ty) ≥ f(x)^(1−t) g(y)^t for all x, y ∈ ℝ^n, then

∫_{ℝ^n} h ≥ (∫_{ℝ^n} f)^(1−t) (∫_{ℝ^n} g)^t

(the Prékopa–Leindler inequality). By letting f, g, and h be the characteristic functions of A, B, and (1−t)A + tB, respectively, we obtain

vol((1−t)A + tB) ≥ vol(A)^(1−t) vol(B)^t.

This is an alternative form of the Brunn–Minkowski inequality, from which the version in Theorem 12.2.2 follows quickly (see Exercise 5). Advantageously, the dimension does not appear in the Prékopa–Leindler inequality, and it is simple to derive the general case from the 1-dimensional case by induction; see Exercise 7. This passage to a dimension-free form of the inequality, which can be proved from the 1-dimensional case by a simple product argument, is typical in the modern theory of geometric inequalities (a similar phenomenon for measure concentration inequalities is mentioned in the notes to Section 14.2).

The Brunn–Minkowski inequality is just the first step in a sophisticated theory; see Schneider [Sch93] or Sangwine-Yager [SY93]. Among the most prominent notions are the mixed volumes. As was discovered by Minkowski, if K₁, …, K_r ⊆ ℝ^n are convex bodies and λ₁, λ₂, …, λ_r are nonnegative real parameters, then vol(λ₁K₁ + λ₂K₂ + ⋯ + λ_rK_r) is a homogeneous polynomial of degree n in λ₁, …, λ_r. For 1 ≤ i₁ ≤ i₂ ≤ ⋯ ≤ i_n ≤ r, the coefficient of λ_{i₁}λ_{i₂}⋯λ_{i_n} is denoted by V(K_{i₁}, K_{i₂}, …, K_{i_n}) and called the mixed volume of K_{i₁}, K_{i₂}, …, K_{i_n}. A powerful generalization of the Brunn–Minkowski inequality, the Alexandrov–Fenchel inequality, states that for any convex A, B, K₃, …, K_n ⊆ ℝ^n, we have

V(A, B, K₃, …, K_n)² ≥ V(A, A, K₃, …, K_n) · V(B, B, K₃, …, K_n).

Exercises

1.
Let A be a single point and B the n-dimensional unit cube. What is the function v(t) = vol((1−t)A + tB)? Show that v(t)^β is not concave on [0, 1] for any β > 1/n.

2. Let A, B ⊆ ℝ^n be convex sets. Show that the sets conv(({0} × A) ∪ ({1} × B)) and ⋃_{t ∈ [0,1]} [{t} × ((1−t)A + tB)] (in ℝ^(n+1)) are equal.

3. Prove that

(∏_{i=1}^n x_i)^(1/n) + (∏_{i=1}^n y_i)^(1/n) ≤ (∏_{i=1}^n (x_i + y_i))^(1/n)

for arbitrary positive reals x_i, y_i.

4. Show that for any brick set A with at least two bricks, there exists a hyperplane h parallel to one of the coordinate hyperplanes that has at least one full brick of A on each side.

5. (Dimension-free form of Brunn–Minkowski) Consider the following two statements:
(i) Theorem 12.2.2, i.e., vol(A + B)^(1/n) ≥ vol(A)^(1/n) + vol(B)^(1/n) for every nonempty compact A, B ⊆ ℝ^n.
(ii) For all compact C, D ⊆ ℝ^n and all t ∈ (0, 1), vol((1−t)C + tD) ≥ vol(C)^(1−t) vol(D)^t.
(a) Derive (ii) from (i); prove and use the inequality (1−t)x + ty ≥ x^(1−t) y^t (x, y positive reals, t ∈ (0, 1)).
(b) Prove (i) from (ii).

6. Give a short proof of the 1-dimensional Brunn–Minkowski inequality: vol(A + B) ≥ vol(A) + vol(B) for any nonempty measurable A, B ⊆ ℝ.

7. (Brunn–Minkowski via Prékopa–Leindler) The goal is to establish statement (ii) in Exercise 5.
(a) Let f, g, h: ℝ → ℝ be bounded nonnegative measurable functions such that h((1−t)x + ty) ≥ f(x)^(1−t) g(y)^t for all x, y ∈ ℝ and all t ∈ (0, 1). Use the one-dimensional Brunn–Minkowski inequality (Exercise 6) to prove ∫h ≥ (1−t)(∫f) + t(∫g) (all integrals over ℝ); by the inequality in Exercise 5(a), the latter expression is at least (∫f)^(1−t) (∫g)^t. First show that we may assume sup f = sup g = 1.
(b) Prove statement (ii) in Exercise 5 by induction on the dimension, using (a) in the induction step.
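The base-case inequality of the brick-set proof (Exercise 3) is easy to spot-check numerically; randomized trials are of course only an illustration, not a proof:

```python
import random

# Spot-check of (prod x_i)^(1/n) + (prod y_i)^(1/n) <= (prod (x_i + y_i))^(1/n)
# for positive reals; this is the Brunn-Minkowski inequality for two boxes.
def geom_mean(vals):
    p = 1.0
    for v in vals:
        p *= v
    return p ** (1.0 / len(vals))

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    xs = [random.uniform(0.01, 10) for _ in range(n)]
    ys = [random.uniform(0.01, 10) for _ in range(n)]
    lhs = geom_mean(xs) + geom_mean(ys)
    rhs = geom_mean([x + y for x, y in zip(xs, ys)])
    assert lhs <= rhs + 1e-9  # small tolerance for floating-point rounding
```

The actual proof of the inequality divides both sides by the right-hand side and applies the AM–GM inequality to each of the two resulting geometric means.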
12.3 Sorting Partially Ordered Sets

Here we present an amazing application of polyhedral combinatorics and of the Brunn–Minkowski inequality to a problem in theoretical computer science: the sorting of partially ordered sets.

We recall that a partially ordered set, or poset for short, is a pair (X, ⪯), where X is a set and ⪯ is a binary relation on X (called an ordering) satisfying three axioms: reflexivity (x ⪯ x for all x), transitivity (x ⪯ y and y ⪯ z implies x ⪯ z), and weak antisymmetry (if x ⪯ y and y ⪯ x, then x = y). The ordering ⪯ is linear if every two elements x, y ∈ X are comparable; that is, x ⪯ y or y ⪯ x.

Let X be a given finite set with some linear ordering ≤. For example, the elements of X could be identical-looking golden coins ordered by their weights (assuming that no two weights exactly coincide). We want to sort X according to ≤; that is, to list the elements of X in increasing order. We can get information about ≤ by pairwise comparisons: We can choose two elements a, b ∈ X and ask an oracle whether a < b or a > b. In our example, we have precise scales such that only one coin fits on each scale, which allows us to make pairwise comparisons. Our sorting procedure may be adaptive: The elements to be compared next may be selected depending on the outcomes of previous comparisons. We want to make as few comparisons as possible.

In the usual sorting problem we begin with no information about the ordering ≤ whatsoever. As is well known, Θ(n log n) comparisons are sufficient and also necessary in the worst case. Here we consider a different setting, where we start with some information already given. Namely, we obtain (explicitly) some partial ordering ⪯ on X, and we are guaranteed that x ⪯ y implies x ≤ y; that is, ≤ is a linear extension of ⪯. In the example with coins, some weighings have already been made for us before we start. How many comparisons do we need to sort?
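The quantities involved in the answer can be explored by brute force. The sketch below (my own illustration, not the book's) counts the linear extensions of a tiny poset — the number e(⪯) introduced next — and evaluates the information-theoretic lower bound log₂ e(⪯), together with the way a single comparison splits the set of extensions:

```python
from itertools import permutations
from math import log2

def linear_extensions(elements, forced):
    """All orderings of `elements` in which, for every pair (a, b) in
    `forced`, a comes before b (brute force; fine for tiny posets)."""
    exts = []
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in forced):
            exts.append(perm)
    return exts

# Example: X = {a, b, c} with "a precedes b" as the only given relation.
X, rel = "abc", [("a", "b")]
e = len(linear_extensions(X, rel))
assert e == 3                    # abc, acb, cab
assert 1 < log2(e) < 2           # so at least 2 comparisons are needed

# One comparison splits the extensions into two disjoint groups:
e_ac = len(linear_extensions(X, rel + [("a", "c")]))
e_ca = len(linear_extensions(X, rel + [("c", "a")]))
assert e_ac + e_ca == e
assert min(e_ac, e_ca) / e == 1 / 3   # the best split available here is 1/3 : 2/3
```

This three-element example is exactly the poset showing that no comparison can be guaranteed to remove more than a 1/3 fraction of the extensions.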
Let E(⪯) denote the set of all linear extensions of a partial ordering ⪯ and let e(⪯) = |E(⪯)| be the number of linear extensions. To sort means to select one among the e(⪯) possible linear extensions. Since a comparison of distinct elements a and b can have two outcomes, we need at least log₂ e(⪯) comparisons in the worst case to distinguish the appropriate linear extension. Is this lower bound always asymptotically tight? Can one always sort using O(log₂ e(⪯)) comparisons, for any ⪯? An affirmative answer is implied by the following theorem:

12.3.1 Theorem (Efficient comparison theorem). Let (X, ⪯) be a poset, and suppose that ⪯ is not linear. Then there exist elements a, b ∈ X such that

δ ≤ e(⪯ + (a, b))/e(⪯) ≤ 1 − δ,

where δ > 0 is an absolute constant and ⪯ + (a, b) stands for the transitive closure of the relation ⪯ ∪ {(a, b)}, that is, the partial ordering we obtain from ⪯ if we are told that a precedes b.

How do we use this for sorting ⪯? For the first comparison, we choose the two elements a, b as in the theorem. Depending on the outcome of this comparison, we pass either to the partial ordering ⪯ + (a, b) or to ⪯ + (b, a). In both cases, the number of linear extensions has been reduced by the factor 1 − δ: For a < b this is clear by the theorem, and for a > b this follows from the equality e(⪯ + (a, b)) + e(⪯ + (b, a)) = e(⪯). Hence, proceeding by induction, we can sort any partial ordering ⪯ using at most ⌈log_{1/(1−δ)} e(⪯)⌉ comparisons.

The conjectured "right" value of δ in Theorem 12.3.1 is 1/3 ≈ 0.33; obviously, one cannot do any better for the three-element poset in which (a, b) is the only pair of distinct elements in the relation ⪯. The proof below gives δ = 1/(2e) ≈ 0.184, and more complicated proofs yield better values, although 1/3 seems still elusive.

Order polytopes. We assign certain convex polytopes to partial orderings.

12.3.2 Definition (Order polytope).
Let (X, ⪯) be an n-element poset. Let the coordinates in ℝ^n be indexed by the elements of X. We define a polytope P(⪯), the order polytope of ⪯, as the set of all x ∈ [0, 1]^n satisfying the following inequalities:

x_a ≤ x_b for every a, b ∈ X with a ⪯ b.

Here is an alternative description of the order polytope:

12.3.3 Observation. The vertices of the order polytope P(⪯) are precisely the characteristic vectors of all up-sets in (X, ⪯), where an up-set is a subset U ⊆ X such that if a ∈ U and a ⪯ b, then b ∈ U.

Proof. It is easy to see that the characteristic vector of an up-set is in P(⪯), and that any 0/1 vector in P(⪯) determines an up-set. It remains to check that all vertices of P(⪯) are integral. Any vertex is the intersection of some n facet hyperplanes. Since all potential facet hyperplanes have the form x_a = x_b, or x_a = 0, or x_a = 1, the integrality is obvious. □

12.3.4 Observation. Let X be an n-element set.
(i) If ≤ is a linear ordering on X, then P(≤) is a simplex of volume 1/n!.
(ii) For any partial ordering ⪯ on X, the simplices of the form P(≤), where ≤ is a linear extension of ⪯, cover P(⪯) and have disjoint interiors. Hence vol(P(⪯)) = e(⪯)/n!.

Here is the order polytope of a 3-element poset:

[Figure: the order polytope of a 3-element poset, subdivided into 3 tetrahedra corresponding to its linear extensions.]

Proof of Observation 12.3.4. In (i), consider the ordering 1 < 2 < ⋯ < n. The characteristic vectors of up-sets have the form (0, 0, …, 0, 1, 1, …, 1). There are n+1 of them, and they are affinely independent, so P(≤) is a simplex. Other linear orderings differ by a permutation of the coordinates, so we get congruent simplices. The volume could be calculated directly, but it follows easily from the considerations below.

As for (ii), any point (x₁, …, x_n) ∈ P(⪯) with distinct coordinates determines a unique linear extension of ⪯, namely the one given by the natural ordering of its coordinates as real numbers. Conversely, for any linear extension ≤ ∈ E(⪯), we have P(≤) ⊆ P(⪯) by definition. Hence the congruent simplices corresponding to the linear extensions subdivide P(⪯). To see that the simplices have volume 1/n!, take the discrete ordering (no two distinct elements are comparable) for ⪯. The order polytope is then the unit cube [0, 1]^n, and it is subdivided into n! congruent simplices corresponding to the n! possible linear orderings. □

Height and center of gravity. Let X be a finite set and ≤ a linear ordering on it. For a ∈ X, we define the height of a in ≤, denoted by h_≤(a), as |{x ∈ X: x < a}|. For a poset (X, ⪯), the height of an element is defined as the average height over all linear extensions:

h_⪯(a) = (1/e(⪯)) Σ_{≤ ∈ E(⪯)} h_≤(a).

If ⪯ is clear from the context, we omit it in the subscript and write just h(a).

The "good" elements a, b in the efficient comparison theorem can be selected using the height. Namely, we show that any two distinct a, b with |h(a) − h(b)| < 1 will do. (It is simple to check that if ⪯ is not a linear ordering, then such a and b always exist; see Exercise 1.) We now relate the height to the order polytope.

12.3.5 Lemma. For any n-element poset (X, ⪯), the center of gravity of the order polytope P(⪯) is c = (c_a: a ∈ X), where c_a = (h_⪯(a) + 1)/(n + 1).

Proof. The center of gravity of P(⪯) is the arithmetic average of the centers of gravity of the simplices P(≤) with ≤ ∈ E(⪯) (all of them have the same volume). Hence it suffices to prove the lemma for a linear ordering ≤. By permuting coordinates, it suffices to calculate that for the simplex with vertices of the form (0, …, 0, 1, …, 1), the center of gravity is (1/(n+1))(1, 2, …, n). This is left as Exercise 2. □

Proof of the efficient comparison theorem.
Given the poset (X, ⪯), we consider two elements a, b ∈ X with |h(a) − h(b)| < 1. We want to show that the number of linear extensions of both ⪯ + (a, b) and ⪯ + (b, a) is at least a constant fraction of e(⪯). Consider the order polytopes P = P(⪯), P< = P(⪯ + (a, b)), and P> = P(⪯ + (b, a)). Geometrically, P is sliced into P< and P> by the hyperplane h = {x ∈ ℝ^n: x_a = x_b}. By Observation 12.3.4(ii), it suffices to show that the volumes of both P< and P> are at least a constant fraction of vol(P).

For convenient notation, let us introduce a new coordinate system in ℝ^n, in which the first coordinate is y₁ = x_a − x_b and the others complete it to an orthogonal coordinate system (y₁, …, y_n). Hence h is the hyperplane y₁ = 0, and P< is the part of P with y₁ ≤ 0. Let c(P) denote the center of gravity of P, and let c₁ = c₁(P) be its y₁-coordinate.

What geometric information do we have about P? It is a convex body with the following properties:

• The projection of P onto the y₁-axis is the interval [−1, 1]. This is because there is an up-set of ⪯ containing a and not b, and also an up-set containing b but not a, and thus P has a vertex with x_a = 1, x_b = 0 and a vertex with x_a = 0, x_b = 1.

• We have −1/(n+1) < c₁ < 1/(n+1), since c₁ = (h(a) − h(b))/(n+1) by Lemma 12.3.5 and |h(a) − h(b)| < 1.

The proof of Theorem 12.3.1 is finished by showing that any compact convex body P ⊆ ℝ^n with these two properties satisfies

vol(P<) ≥ (1/(2e)) vol(P) and vol(P>) ≥ (1/(2e)) vol(P),

where P< is the part of P in the half-space {y₁ ≤ 0} and P> is the other part.

For t ∈ [−1, 1], let P_t be the (n−1)-dimensional slice of P by the hyperplane {y₁ = t}, and let r(t) be the equivalent radius of P_t, i.e., the radius of an (n−1)-dimensional ball of volume vol_{n−1}(P_t). By Brunn's inequality for slice volumes (Theorem 12.2.1), r(t) is concave on [−1, 1].
The y₁-coordinate of the center of gravity of P can be expressed as

c₁(P) = (1/vol(P)) ∫_{−1}^{1} t · vol_{n−1}(P_t) dt

(imagine P composed of thin plates perpendicular to the y₁-axis). Hence c₁ is fully determined by the function r(t). In other words, the shapes of the slices of P do not really matter; only their volumes do, and so we may imagine that P is a rotational body whose slice P_t is an (n−1)-dimensional ball of radius r(t) centered at (t, 0, …, 0).

We want to show that if c₁(P) ≥ −1/(n+1), then vol(P>) ≥ (1/(2e)) vol(P). The inequality for vol(P<) then follows by symmetry.

The key step is to pass to another, especially simple, rotational convex body K. The slice K_t of K has radius κ(t); the functions κ(t) and r(t) are schematically plotted below:

[Figure: the graphs of r(t) and κ(t) over [−1, u]; the graph of κ consists of a segment from Y = (−1, 0) up to W and a segment from W through V = (0, r(0)) down to U = (u, 0).]

The graph of the function κ(t) consists of two linear segments, and so K is a double cone. First we construct the function κ(t) for t positive. Here the graph is a segment starting at the point V = (0, r(0)) and ending at the point U = (u, 0). The number u is chosen so that vol(K>) = vol(P>). Since r(t) is concave and κ(t) is linear on [0, u], we have u ≥ 1. Moreover, as t grows from 0 to 1, we first have r(t) ≥ κ(t), and then from some point on r(t) ≤ κ(t). This ensures that the center of gravity of K> is to the right of the center of gravity of P> (we can imagine that P> is transformed into K> by peeling off some mass in the region labeled "−" and moving it right, to the region labeled "+").

Next, we define κ(t) for t < 0. We extend the segment UV to the left up to a (unique) point W, and connect W by a segment to the point Y = (−1, 0); the point W is chosen so that, with the broken line YWV as the graph of κ(t) for negative t, we have vol(K<) = vol(P<). As t goes from 0 down to −1, κ(t) is first above r(t) and then below it. This is because at V, the segment WU decreases more steeply than the function r(t). Therefore, we also have c₁(K<) ≥ c₁(P<), and hence c₁(K) ≥ c₁(P) ≥ −1/(n+1).
So, as was noted above, it remains to show that vol(K>) ≥ (1/(2e)) vol(K), which is a more or less routine calculation. We fix the notation as in the following picture:

[Figure: the double cone K, divided by the hyperplane {y₁ = 0} into K< and K>; K is the union of the cone K₁ with apex Y = (−1, 0) and height h₁ and the cone K₂ with apex U = (u, 0) and height h₂, sharing their common base at y₁ = h₁ − 1 ≤ 0.]

We note that c₁(K) is a weighted average of c₁(K₁) and c₁(K₂); the weights are the volumes of K₁ and K₂, whose ratio is h₁ : h₂ (the two cones share their base). The center of gravity of an n-dimensional cone lies at 1/(n+1) of its height from the base, and hence, writing t₀ = h₁ − 1 for the position of the common base, we have c₁(K₁) = t₀ − h₁/(n+1) and c₁(K₂) = t₀ + h₂/(n+1). Therefore,

c₁(K) = [h₁(t₀ − h₁/(n+1)) + h₂(t₀ + h₂/(n+1))] / (h₁ + h₂) = t₀ + (h₂ − h₁)/(n+1).

Since t₀ = h₁ − 1, the condition c₁(K) ≥ −1/(n+1) yields h₂ + n·h₁ ≥ n. We substitute h₁ = u − h₂ + 1 and rearrange, which gives

u/h₂ ≥ 1 − 1/n.   (12.2)

We are interested in bounding vol(K>) from below. The cone K> is similar to K₂, with ratio u/h₂. So

vol(K>) = (u/h₂)^n vol(K₂) = (u/h₂)^n · (h₂/(h₁ + h₂)) vol(K) = (u/(u+1)) (u/h₂)^(n−1) vol(K).

Now we substitute for u/h₂ from (12.2), obtaining

vol(K>) ≥ (u/(u+1)) (1 − 1/n)^(n−1) vol(K).

Finally, u/(u+1) ≥ 1/2 (as u ≥ 1) and (1 − 1/n)^(n−1) ≥ e^(−1) for all n, so vol(K>) ≥ (1/(2e)) vol(K) follows. □

Bibliography and remarks. The statement of the efficient comparison theorem with δ = 1/3, known as the "1/3–2/3 conjecture," was conjectured by Kislitsyn [Kis68] and, later but independently, by Fredman (unpublished) and by Linial [Lin84]. In this strongest possible form it remains a challenging open problem in the theory of partially ordered sets (see Trotter [Tro92], [Tro95] for overviews of this interesting area).

The problem of sorting with partial information was considered by Fredman [Fre76], who proved that any n-element partially ordered set (X, ⪯) can be sorted by at most log₂(e(⪯)) + 2n comparisons. This is optimal unless e(⪯) is only subexponential in n. The efficient comparison theorem was first proved, with δ = 3/11 ≈ 0.2727, by Kahn and Saks [KS84].
Their proof is quite complicated, and instead of the Brunn-Minkowski inequality it employs the more powerful Aleksandrov-Fenchel inequality. The constant $\frac{3}{11}$ is optimal for their approach, in the sense that if $a$ and $b$ are elements of a poset such that $|h(a) - h(b)| \le 1$, then the comparison of $a$ and $b$ generally need not reduce the number of linear extensions by any better ratio. The simpler proof presented in this section is due to Kahn and Linial [KL91], and a similar one, with a slightly worse $\delta$, was found by Karzanov and Khachiyan; see [Kha89]. The method is inspired by proofs of a result about splitting a convex body by a hyperplane passing exactly through the center of gravity (Exercise 3), proved by Grünbaum [Grü60] (see [KL91] for more remarks on the history). Observation 12.3.4, on which all the proofs of Theorem 12.3.1 are based, is from Linial [Lin84].

The current best value of $\delta = (5-\sqrt{5})/10 \approx 0.2764$ was achieved by Brightwell, Felsner, and Trotter [BFT95]. They extend the Kahn-Saks method, and instead of two elements $a$ and $b$ with $|h(a) - h(b)| \le 1$, they consider three elements $a, b, c$ with $h(a) \le h(b) \le h(c) \le h(a) + 2$. Interestingly, they also construct an infinite (countable) poset for which their value of $\delta$ is optimal (and so the natural infinite analogue of the $\frac13$-$\frac23$ conjecture is false). In order to formulate this result, one needs a probability measure on the set of all linear extensions of the considered poset. Their poset is thin, meaning that the maximum size of an antichain is bounded by a constant, and the probability measure is obtained by taking a limit over a sequence of finite intervals in the poset.

The proofs of the efficient comparison theorem do not provide an efficient algorithm for actually computing suitable elements $a, b$. General methods for estimating the volume of convex bodies, mentioned in Section 13.2, yield a polynomial-time randomized algorithm.
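On tiny posets, the $\frac13$-$\frac23$ phenomenon can be observed directly by brute-force enumeration of linear extensions. The following sketch is a hypothetical illustration (the example poset is not from the text): it finds an incomparable pair whose comparison probability lies in $[\frac13, \frac23]$.

```python
from itertools import permutations

# enumerate all linear extensions of a small poset given by its strict relations
def linear_extensions(elems, less):
    return [p for p in permutations(elems)
            if all(p.index(x) < p.index(y) for (x, y) in less)]

elems = range(4)
less = [(0, 1), (0, 2)]   # 0 below 1 and 2; element 3 is incomparable to all
exts = linear_extensions(elems, less)

# look for an incomparable pair (a, b) with 1/3 <= P(a before b) <= 2/3
good_pairs = []
for a in elems:
    for b in elems:
        if a != b and (a, b) not in less and (b, a) not in less:
            frac = sum(p.index(a) < p.index(b) for p in exts) / len(exts)
            if 1/3 <= frac <= 2/3:
                good_pairs.append((a, b))
assert good_pairs   # an "efficient comparison" exists
```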
Kahn and Kim [KK95] gave a deterministic polynomial-time adaptive sorting procedure that sorts any given $n$-element poset $(X, \prec)$ by $O(\log e(\prec))$ comparisons. We at least mention some interesting concepts in their algorithm. Instead of the order polytope, they consider the chain polytope; this is the convex hull of the characteristic vectors of all antichains in $(X, \prec)$. Equivalently, it is the stable set polytope $\mathrm{STAB}(G)$ (see Section 12.1) of the comparability graph $G$ of $(X, \prec)$, where $G = G(\prec) = (X, \{\{x, y\}\colon x \prec y \text{ or } y \prec x\})$. As was shown by Stanley [Sta86], the chain polytope has the same volume as the order polytope. The next key notion is the entropy of a graph. For a given graph $G = (V, E)$ and a probability distribution $p\colon V \to [0, 1]$ on its vertices, the entropy $H(G, p)$ can be defined as $\min_{x \in \mathrm{STAB}(G)} \left(-\sum_{v \in V} p_v \log_2 x_v\right)$ (there are several equivalent definitions). Graph entropy was introduced by Körner [Kor73], and he and his coworkers achieved remarkable results in extremal set theory and related fields using this concept (see, e.g., Gargano, Körner, and Vaccaro [GKV94]). The entropy can be approximated in deterministic polynomial time, and the adaptive sorting algorithm of Kahn and Kim chooses the next comparison as one that increases the entropy of the comparability graph as much as possible (this need not always be an "efficient comparison" in the sense of Theorem 12.3.1).

Exercises

1. Let $(X, \prec)$ be a finite poset. Prove that if $\prec$ is not a linear ordering, then there always exist $a, b \in X$ with $|h(a) - h(b)| \le 1$.

2. Show that the center of gravity of a simplex with vertices $a_0, a_1, \ldots, a_d$ is the same as the center of gravity of its vertex set.

3. Let $K$ be a bounded convex body in $\mathbf{R}^n$, $h$ a hyperplane passing through the center of gravity of $K$, and $K_1$ and $K_2$ the parts into which $K$ is divided by $h$.
(a) Prove that $\mathrm{vol}(K_1), \mathrm{vol}(K_2) \ge \left(\frac{n}{n+1}\right)^n \mathrm{vol}(K)$.
(b) Show that the bound in (a) cannot be improved in general.

13 Volumes in High Dimension

We begin with comparing the volume of the $n$-dimensional cube with the volume of the unit ball inscribed in it, in order to realize that volumes of "familiar" bodies behave quite differently in high dimensions from what the 3-dimensional intuition suggests. Then we calculate that any convex polytope in the unit ball $B^n$ whose number of vertices is at most polynomial in $n$ occupies only a tiny fraction of $B^n$ in terms of volume. This has interesting consequences for deterministic algorithms for approximating the volume of a given convex body: If they look only at polynomially many points of the considered body, then they are unable to distinguish a gigantic ball from a tiny polytope. Finally, we prove a classical result, John's lemma, which states that for every $n$-dimensional symmetric convex body $K$ there are two similar ellipsoids with ratio $\sqrt{n}$ such that the smaller ellipsoid lies inside $K$ and the larger one contains $K$. So, in a very crude scale where the ratio $\sqrt{n}$ can be ignored, each symmetric convex body looks like an ellipsoid.

Besides presenting nice and important results, this chapter could help the reader in acquiring proficiency and intuition in geometric computations, which are skills obtainable mainly by practice. Several calculations of nontrivial length are presented in detail, and while some parts do not require any great ideas, they still contain useful small tricks.

13.1 Volumes, Paradoxes of High Dimension, and Nets

In the next section we are going to estimate the volumes of various convex polytopes. Here we start, more modestly, with the volumes of the simplest bodies.

The ball in the cube. Let $V_n$ denote the volume of the $n$-dimensional ball $B^n$ of unit radius.
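Assuming the standard formula $V_n = \pi^{n/2}/\Gamma(\frac{n}{2}+1)$ (it is derived in the text and exercises that follow), a few lines of code already exhibit the high-dimensional effects discussed below; the numbers are illustrations only, not part of the text.

```python
import math

def ball_volume(n):
    # V_n = pi^(n/2) / Gamma(n/2 + 1)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# fraction of the cube [-1,1]^n occupied by the inscribed unit ball;
# this is also the acceptance probability of the rejection method
# for generating a random point of the ball
def acceptance(n):
    return ball_volume(n) / 2 ** n

print(ball_volume(3))        # 4*pi/3, about 4.18879
print(1 / acceptance(20))    # roughly 40 million cube points per accepted one

# radius of the n-ball with the same volume as the unit cube:
# r = V_n^(-1/n), which grows like 0.24 * sqrt(n)
for n in [2, 10, 50]:
    r = ball_volume(n) ** (-1 / n)
    print(n, r / math.sqrt(n))
```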
A neat way of calculating $V_n$ is indicated in Exercise 2; the result, which can be verified in various other ways and found in many books of formulas, is
$$V_n = \frac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2}+1\right)} = \frac{\pi^{\lfloor n/2 \rfloor}\, 2^{\lceil n/2 \rceil}}{n!!}$$
(here $n!! = \prod_{0 < 2i \le n} 2i$ for $n$ even and $n!! = \prod_{0 \le 2i < n} (2i+1)$ for $n$ odd). The quantity $V_n$ tends to 0 very quickly as $n \to \infty$, while the cube $[-1,1]^n$ circumscribed about $B^n$ has volume $2^n$, and so the inscribed ball occupies a rapidly vanishing fraction of the cube. One practical consequence concerns the following simple method of generating a random point uniformly distributed in the unit ball $B^n$: A random point $x$ is generated uniformly in the cube $[-1,1]^n$; if $\|x\| > 1$, then $x$ is discarded and the experiment is repeated, and if $\|x\| \le 1$, then $x$ is the desired random point in the unit ball. This works reasonably in dimensions below 10, say, but in dimension 20, we expect about 40 million discarded points for each accepted point, and the method is rather useless.

Another way of comparing the ball and the cube is to picture the sizes of the $n$-dimensional ball having the same volume as the unit cube:

[Figure: disks indicating the radius of the ball of unit volume for $n = 2$, $n = 10$, and $n = 50$.]

For large $n$, the radius grows approximately like $0.24\sqrt{n}$. This indicates that the $n$-dimensional unit cube is actually quite a huge body; for example, its diameter (the length of the longest diagonal) is $\sqrt{n}$. Here is another example illustrating the largeness of the unit cube quite vividly.

Balls enclosing a ball. Place balls of radius $\frac12$ into each of the $2^n$ vertices of the unit cube $[0,1]^n$ so that they touch along the edges of the cube, and consider the ball concentric with the cube and just touching the other balls:

[Figure: the central ball touching the balls centered at the vertices of the cube.]

Obviously, this ball is quite small, and it is fully contained in the cube, right? No: Already for $n = 5$ it starts protruding out through the facets.

Proper pictures. If a planar sketch of a high-dimensional convex body should convey at least a partially correct intuition about the distribution of the mass, say for the unit cube, it is perhaps best to give up the convexity in the drawing! According to Milman [Mil98], a "realistic" sketch of a high-dimensional convex body might look like this:

[Figure: Milman's "hyperbolic" sketch of a high-dimensional convex body.]

Strange central sections: the Busemann-Petty problem. Let $K$ and $L$ be convex bodies in $\mathbf{R}^n$ symmetric about 0, and suppose that for every hyperplane $h$ passing through 0, we have $\mathrm{vol}_{n-1}(K \cap h) \le \mathrm{vol}_{n-1}(L \cap h)$.
It seems very plausible that this should imply $\mathrm{vol}(K) \le \mathrm{vol}(L)$; this conjecture of Busemann and Petty used to be widely believed (after all, it was known that if the volumes of the sections are equal for all $h$, then $K = L$). But as it turned out, it is true only for $n \le 4$, while in dimensions $n \ge 5$ it can fail! In fact, for large dimensions, one of the counterexamples is the unit cube and the ball of an appropriate radius: It is known that all sections of the unit cube have volume at most $\sqrt{2}$, while in large dimensions, the unit-volume ball has sections of volume about $\sqrt{e}$.

Nets in a sphere. We conclude this section by introducing a generally useful tool. Let $S^{n-1} = \{x \in \mathbf{R}^n\colon \|x\| = 1\}$ denote the unit sphere in $\mathbf{R}^n$ (note that $S^2$ is the 2-dimensional sphere living in $\mathbf{R}^3$). We are given a number $\eta > 0$, and we want to place a reasonably small finite set $N$ of points on $S^{n-1}$ in such a way that each $x \in S^{n-1}$ has some point of $N$ at distance no larger than $\eta$. Such an $N$ is called $\eta$-dense in $S^{n-1}$. For example, the set $N = \{e_1, -e_1, \ldots, e_n, -e_n\}$ of the $2n$ orthonormal unit vectors of the standard basis is $\sqrt{2}$-dense. But it is generally difficult to find good explicit constructions for arbitrary $\eta$ and $n$. The following simple but clever existential argument yields an $\eta$-dense set whose size has essentially the best possible order of magnitude.

Let us call a subset $N \subset S^{n-1}$ $\eta$-separated if every two distinct points of $N$ have (Euclidean) distance greater than $\eta$. In a sense, this is opposite to being $\eta$-dense. In order to construct a small $\eta$-dense set, we start with the empty set and keep adding points one by one. The trick is that we do not worry about $\eta$-density along the way, but we always keep the current set $\eta$-separated. Clearly, if no more points can be added, the current set must be $\eta$-dense. The result of this algorithm is called an $\eta$-net.
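The greedy construction just described is easy to run in a small dimension. The sketch below is a hypothetical illustration: it builds a net only with respect to a finite random sample of the sphere, which already suffices to observe the volume bound $|N| \le (4/\eta)^n$ proved next.

```python
import math, random

def greedy_net(points, eta):
    # keep each point that is eta-separated from those already chosen;
    # by maximality, the result is eta-dense within the given sample
    net = []
    for p in points:
        if all(math.dist(p, q) > eta for q in net):
            net.append(p)
    return net

# random sample of S^2 in R^3: normalized Gaussian vectors
random.seed(0)
sample = []
for _ in range(5000):
    v = [random.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(t * t for t in v))
    sample.append(tuple(t / norm for t in v))

eta = 0.5
net = greedy_net(sample, eta)
# volume argument: any eta-separated set in S^{n-1} has at most (4/eta)^n points
assert len(net) <= (4 / eta) ** 3
```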
That is, $N \subset S^{n-1}$ is an $\eta$-net$^1$ if it is an inclusion-maximal $\eta$-separated subset of $S^{n-1}$; i.e., if $N$ is $\eta$-separated but $N \cup \{x\}$ is not $\eta$-separated for any $x \in S^{n-1} \setminus N$. (These definitions apply to an arbitrary metric space in place of $S^{n-1}$.) A volume argument bounds the maximum size of an $\eta$-net.

$^1$ Not to be confused with the notion of $\varepsilon$-net considered in Chapter 10; unfortunately, the same name is customarily used for two rather unrelated concepts.

13.1.1 Lemma (Size of $\eta$-nets in the sphere). For each $\eta \in (0, 1]$, any $\eta$-net $N \subset S^{n-1}$ satisfies
$$|N| \le \left(\frac{4}{\eta}\right)^{n}.$$

Later on, we will check that for $\eta$ small, no $\eta$-dense set can be much smaller (Exercise 14.1.3).

Proof. For each $x \in N$, consider the ball of radius $\frac{\eta}{2}$ centered at $x$. These balls are all disjoint, and they are contained in the ball $B(0, 1+\eta) \subseteq B(0, 2)$. Therefore, $\mathrm{vol}(B(0,2)) \ge |N| \cdot \mathrm{vol}(B(0, \frac{\eta}{2}))$, and since $\mathrm{vol}(B(0,r))$ in $\mathbf{R}^n$ is proportional to $r^n$, the lemma follows. $\Box$

Bibliography and remarks. Most of the material of this section is well known and standard. As for the Busemann-Petty problem, which we are not going to pursue any further in this book, information can be found, e.g., in Gardner, Koldobski, and Schlumprecht [GKS99] (recent unified solution for all dimensions), in Ball [Bal], or in the Handbook of Convex Geometry [GW93].

Exercises

1. Calculate the volume of the $n$-dimensional crosspolytope, i.e., the convex hull of $\{e_1, -e_1, \ldots, e_n, -e_n\}$, where $e_i$ is the $i$th vector in the standard basis of $\mathbf{R}^n$.

2. (Ball volume via the Gaussian distribution)
(a) Let $I_n = \int_{\mathbf{R}^n} e^{-\|x\|^2}\,dx$, where $\|x\| = (x_1^2 + \cdots + x_n^2)^{1/2}$ is the Euclidean norm. Express $I_n$ using $I_1$.
(b) Express $I_n$ using $V_n = \mathrm{vol}(B^n)$ and a suitable one-dimensional integral, by considering the contribution to $I_n$ of the spherical shell with inner radius $r$ and outer radius $r + dr$.
(c) Calculate $I_n$ by using (b) for $n = 2$ and (a).
(d) Integrating by parts, set up a recurrence and calculate the integral appearing in (b). Compute $V_n$.
This calculation appears in Pisier [Pis89] (also see Ball [Bal97]).

3. Let $X \subset S^{n-1}$ be such that every two points of $X$ have (Euclidean) distance at least $\sqrt{2}$. Prove that $|X| \le 2n$.

13.2 Hardness of Volume Approximation

The theorem in this section can be regarded as a variation on one of the "paradoxes of high dimension" mentioned in the previous section, namely, that the volume of the ball inscribed in the unit cube becomes negligible as the dimension grows. The theorem addresses a dual situation: the volume of a convex polytope inscribed in the unit ball.

13.2.1 Theorem. Let $B^n$ denote the unit ball in $\mathbf{R}^n$, and let $P$ be a convex polytope contained in $B^n$ and having at most $N$ vertices. Then
$$\mathrm{vol}(P) \le \left(\frac{C \ln\left(\frac{N}{n}+1\right)}{n}\right)^{n/2} \mathrm{vol}(B^n)$$
with an absolute constant $C$.

Thus, unless the number of vertices is exponential in $n$, the polytope is very tiny compared to the ball. For $N \ge n e^{n/C}$, the bound in the theorem is greater than 1, and so it makes little sense, since we always have $\mathrm{vol}(P) \le \mathrm{vol}(B^n)$. Thus, a reasonable range of $N$ is $n+1 \le N \le e^{cn}$ for some positive constant $c > 0$. It turns out that the bound is tight in this range, up to the value of $C$, as discussed in the next section. This may be surprising, since the elementary proof below makes seemingly quite rough estimates.

Let us remark that the weaker bound
$$\mathrm{vol}(P) \le \left(\frac{C \ln N}{n}\right)^{n/2} \mathrm{vol}(B^n) \tag{13.1}$$
is somewhat easier to prove than the one in Theorem 13.2.1. The difference between these two bounds is immaterial for $N \ge n^2$, say. It becomes significant, for example, for comparing the largest possible volume of a polytope in $B^n$ with $n \log n$ vertices with the volume of the largest simplex in $B^n$.

Application to hardness of volume approximation. Computing or estimating the volume of a given convex body in $\mathbf{R}^n$, with $n$ large, is a fundamental algorithmic problem.
Many combinatorial counting problems can be reduced to it, such as counting the number of linear extensions of a given poset, as we saw in Section 12.3. Since many of these counting problems are computationally intractable, one cannot expect to compute the volume precisely, and so approximation up to some multiplicative factor is sought. It turns out that no polynomial-time deterministic algorithm can generally achieve approximation factor better than exponential in the dimension. A concrete lower bound, derived with help of Theorem 13.2.1, is $(cn/\log n)^n$. This can also be almost achieved: An algorithm is known with factor $(c'n)^n$.

In striking contrast to this, there are randomized polynomial-time algorithms that can approximate the volume within a factor of $(1+\varepsilon)$ for each fixed $\varepsilon > 0$ with high probability. Here "randomized" means that the algorithm makes random decisions (like coin tosses) during its computation; it does not imply any randomness of the input. These are marvelous developments, but they are not treated in this book. We only briefly explain the relation of Theorem 13.2.1 to the deterministic volume approximation.

To understand this connection, one needs to know how the input convex body is presented to an algorithm. A general convex body cannot be exactly described by finitely many parameters, so caution is certainly necessary. One way of specifying certain convex bodies, namely, convex polytopes, is to give them as convex hulls of finite point sets (V-presentation) or as intersections of finite sets of half-spaces (H-presentation). But there are many other computationally important convex bodies that are not polytopes, or have no polynomial-size V-presentation or H-presentation. We will meet an example in Section 15.5, where the convex body lives in the space of $n \times n$ real matrices and is the intersection of a polytope with the cone consisting of all positive semidefinite matrices.
In order to abstract the considerations from the details of the presentation of the input body, the oracle model was introduced for computation with convex bodies. If $K \subseteq \mathbf{R}^n$ is a convex body, a membership oracle for $K$ is, roughly speaking, an algorithm (subroutine, black box) that for any given input point $x \in \mathbf{R}^n$ outputs YES if $x \in K$ and NO if $x \notin K$. This is simplified, because in order to be able to compute with the body, one needs to assume more. Namely, $K$ should contain a ball $B(0, r)$ and be contained in a ball $B(0, R)$, where $R$ and $r > 0$ are written using at most polynomially many digits. On the other hand, the oracle need not (and often cannot) be exact, so a wrong answer is allowed for points very close to the boundary. These are important but rather technical issues, and we will ignore them. Let us note that a polynomial-time membership oracle can be constructed for both V-presented and H-presented polytopes, as well as for many other bodies.

Let us now assume that a deterministic algorithm approximates the volume of each convex body given by a suitable membership oracle. First we call the algorithm with $K = B^n$, the unit ball. The algorithm asks the oracle about some points $\{x_1, x_2, \ldots, x_N\}$, gets the correct answers, and outputs an estimate for $\mathrm{vol}(B^n)$. Next, we call the algorithm with the body $K = \mathrm{conv}(\{x_1, x_2, \ldots, x_N\} \cap B^n)$. The answers of the oracle are exactly the same, and since the algorithm has no other information about the body $K$ and it is deterministic, it has to output the same volume estimate as it did for $B^n$. But by Theorem 13.2.1, $\mathrm{vol}(B^n)/\mathrm{vol}(K) \ge \left(cn/\ln(\frac{N}{n}+1)\right)^{n/2}$, and so the error of the approximation must be at least this factor. If $N$, the number of oracle calls, is polynomial in $n$, it follows that the error is at least $(c'n/\log n)^{n/2}$.

By more refined considerations, one can improve the lower bound to approximately the square of the quantity just given.
The idea is to input the dual body $K^*$ into the algorithm, too, for which it gets the same answers, and then use a deep result (the inverse Blaschke-Santaló inequality) stating that $\mathrm{vol}(K)\,\mathrm{vol}(K^*) \ge c^n/n!$ for any centrally symmetric $n$-dimensional convex body $K$, with an absolute constant $c > 0$ (some technical steps are omitted here). This improvement is interesting because, as was remarked above, for symmetric convex bodies it almost matches the performance of the best known algorithm.

Idea of the proof of Theorem 13.2.1. Let $V$ be the set of vertices of the polytope $P \subseteq B^n$, $|V| = N$. We choose a suitable parameter $k \le n$ and prove that for every $x \in P$, there is a $k$-tuple $J$ of points of $V$ such that $x$ is close to $\mathrm{conv}(J)$. Then $\mathrm{vol}(P)$ is simply estimated as $\binom{N}{k}$ times the maximum possible volume of the appropriate neighborhood of the convex hull of $k$ points in $B^n$. Here is the first step towards realizing this program.

13.2.2 Lemma. Let $S$ in $\mathbf{R}^n$ be an $n$-dimensional simplex, i.e., the convex hull of $n+1$ affinely independent points, and let $R = R(S)$ and $\rho = \rho(S)$ be the circumradius and inradius of $S$, respectively, that is, the radius of the smallest enclosing ball and of the largest inscribed ball. Then $\frac{R}{\rho} \ge n$.

Proof. We first sketch the proof of an auxiliary claim: Among all simplices contained in $B^n$, the regular simplex inscribed in $B^n$ has the largest volume. The volume of a simplex is proportional to the $(n{-}1)$-dimensional volume of its base times the corresponding height. It follows that in a maximum-volume simplex $S$ inscribed in $B^n$, the hyperplane passing through a vertex $v$ of $S$ and parallel to the facet of $S$ not containing $v$ is tangent to $B^n$, for otherwise, $v$ could be moved to increase the height:

[Figure: moving the vertex $v$ to the point of tangency increases the height of the simplex.]

It can be easily shown (Exercise 2) that this property characterizes the regular simplex (so the regular simplex is even the unique maximum).
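For the regular simplex itself the ratio in Lemma 13.2.2 is exactly $n$ (cf. Exercise 1(a)). This can be confirmed numerically using the standard realization of the regular $n$-simplex as $\mathrm{conv}\{e_1, \ldots, e_{n+1}\} \subset \mathbf{R}^{n+1}$; the sketch below is an illustration, not part of the text.

```python
import math

# regular n-simplex conv{e_1, ..., e_{n+1}} in R^{n+1}, centroid c = (1/(n+1), ..., 1/(n+1)):
# circumradius R = |e_1 - c|; inradius rho = distance from c to the centroid
# (0, 1/n, ..., 1/n) of the facet opposite e_1, which is its nearest point
def simplex_radii(n):
    c = 1 / (n + 1)
    R = math.sqrt((1 - c) ** 2 + n * c ** 2)
    rho = math.sqrt(c ** 2 + n * (1 / n - c) ** 2)
    return R, rho

for n in range(1, 30):
    R, rho = simplex_radii(n)
    assert abs(R / rho - n) < 1e-9   # R/rho = n exactly for the regular simplex
```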
Another, slightly more difficult, argument shows that if $S$ is a simplex of minimum volume circumscribed about $B^n$, then each facet of $S$ touches $B^n$ at its center of gravity (Exercise 3), and it follows that the volume is minimized by the regular simplex circumscribed about $B^n$.

Let $S_0 \subseteq B^n$ be a simplex. We consider two auxiliary regular simplices $S_1$ and $S_2$, where $S_1$ is inscribed in $B^n$ and $S_2$ satisfies $\mathrm{vol}(S_2) = \mathrm{vol}(S_0)$. Since $\mathrm{vol}(S_1) \ge \mathrm{vol}(S_0) = \mathrm{vol}(S_2)$, $S_1$ is at least as big as $S_2$, and so $\rho(S_0) \le \rho(S_2) \le \rho(S_1)$. A calculation shows that $\rho(S_1) = \frac{1}{n}$ (Exercise 1(a)). $\Box$

Let $F$ be a $j$-dimensional simplex in $\mathbf{R}^n$. We define the orthogonal $\rho$-neighborhood $F_\rho$ of $F$ as the set of all $x \in \mathbf{R}^n$ for which there is a $y \in F$ such that the segment $xy$ is orthogonal to $F$ and $\|x - y\| \le \rho$. The next drawing shows orthogonal neighborhoods in $\mathbf{R}^3$ of a 1-simplex and of a 2-simplex:

[Figure: orthogonal $\rho$-neighborhoods in $\mathbf{R}^3$ of a segment (a cylinder) and of a triangle (a slab over the triangle).]

The orthogonal $\rho$-neighborhood of $F$ can be expressed as the Cartesian product of $F$ with a $\rho$-ball of dimension $n-j$, and so
$$\mathrm{vol}_n(F_\rho) = \mathrm{vol}_j(F) \cdot \rho^{n-j} \cdot \mathrm{vol}_{n-j}(B^{n-j}).$$

13.2.3 Lemma. Let $S$ be an $n$-dimensional simplex contained in $B^n$, let $x \in S$, and let $k$ be an integer parameter, $1 \le k \le n$. Then there is a $k$-tuple $J$ of affinely independent vertices of $S$ such that $x$ lies in the orthogonal $\rho$-neighborhood of $\mathrm{conv}(J)$, where
$$\rho = \rho(n, k) = \left(\sum_{i=k}^{n} \frac{1}{i^2}\right)^{1/2}.$$

Proof. We proceed by induction on $n - k$. For $n = k$, this is Lemma 13.2.2: Consider the largest ball centered at $x$ and contained in $S$; it has radius at most $\frac1n$, it touches some facet $F$ of $S$ at a point $y$, and the segment $xy$ is perpendicular to $F$, witnessing $x \in F_{1/n}$.

For $k < n$, using the case $k = n$, let $S'$ be a facet of $S$ and $x' \in S'$ a point at distance at most $\frac1n$ from $x$ with $xx' \perp S'$. By the inductive assumption, we find a $(k{-}1)$-face $F$ of $S'$ and a point $y \in F$ with $\|x' - y\| \le \rho(n{-}1, k)$ and $x'y \perp F$.
Here is an illustration for $n = 3$ and $k = 2$:

[Figure: a tetrahedron $S$ with facet $S'$, the points $x$, $x'$, $y$, and the edge $F$ of $S'$.]

Then $xx' \perp x'y$ (because the whole of $S'$ is perpendicular to $xx'$), and so $\|x - y\|^2 = \|x - x'\|^2 + \|x' - y\|^2 \le \rho(n, k)^2$. Finally, $xy \perp F$, since both the vectors $x' - y$ and $x - x'$ lie in the orthogonal complement of the linear subspace generated by $F - y$. $\Box$

Proof of Theorem 13.2.1. By Carathéodory's theorem and Lemma 13.2.3, $P = \mathrm{conv}(V)$ is covered by the union of all the orthogonal $\rho$-neighborhoods $\mathrm{conv}(J)_\rho$, $J \in \binom{V}{k}$, where $\rho = \rho(n, k)$ is as in the lemma. The maximum $(k{-}1)$-dimensional volume of $\mathrm{conv}(J)$ is no larger than the $(k{-}1)$-dimensional volume of the regular $(k{-}1)$-simplex inscribed in $B^{k-1}$, which is
$$M(k{-}1) = \frac{\sqrt{k}}{(k-1)!}\left(\frac{k}{k-1}\right)^{(k-1)/2};$$
see Exercise 1(b). (If we only want to prove the weaker estimate (13.1) and do not care about the value of $C$, then $M(k{-}1)$ can also be trivially estimated by $\mathrm{vol}_{k-1}(B^{k-1})$ or even by $2^{k-1}$.)

What remains is calculation. We have
$$\frac{\mathrm{vol}(P)}{\mathrm{vol}(B^n)} \le \binom{N}{k} \cdot M(k{-}1) \cdot \rho(n, k)^{n-k+1} \cdot \frac{\mathrm{vol}_{n-k+1}(B^{n-k+1})}{\mathrm{vol}(B^n)}. \tag{13.2}$$
We first estimate
$$\rho(n, k)^2 = \sum_{i=k}^{n} \frac{1}{i^2} \le \sum_{i=k}^{n} \frac{1}{i(i-1)} = \sum_{i=k}^{n} \left(\frac{1}{i-1} - \frac{1}{i}\right) = \frac{1}{k-1} - \frac{1}{n} \le \frac{1}{k-1}.$$
We now set
$$k = \left\lfloor \frac{n}{\ln(\frac{N}{n}+1)} \right\rfloor$$
(for obtaining the weaker estimate (13.1), the simpler value $k = \lfloor \frac{n}{\ln N} \rfloor$ is more convenient). We may assume that $\ln N$ is much smaller than $n$, for otherwise, the bound in the theorem is trivially valid, and so $k$ is larger than any suitable constant. In particular, we can ignore the integer part in the definition of $k$.

For estimating the various terms in (13.2), it is convenient to work with the natural logarithm of the quantities. The logarithm of the bound we are heading for is $\frac{n}{2}\left(\ln\ln(\frac{N}{n}+1) - \ln n + O(1)\right)$, and so terms up to $O(n)$ can be ignored if we do not care about the value of the constant $C$. Further, we find that $k \ln k = k \ln n - k \ln\ln(\frac{N}{n}+1) = k \ln n + O(n)$. This is useful for estimating $\ln(k!) = k \ln k - O(k) = k \ln n - O(n)$.
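The telescoping estimate for $\rho(n,k)^2$ can be verified exactly in small cases with rational arithmetic (a quick sketch, for illustration only):

```python
from fractions import Fraction

def rho_squared(n, k):
    # rho(n, k)^2 = sum_{i=k}^{n} 1/i^2
    return sum(Fraction(1, i * i) for i in range(k, n + 1))

# term by term 1/i^2 <= 1/(i(i-1)), and the right-hand side telescopes
# to 1/(k-1) - 1/n
for n in range(3, 40):
    for k in range(2, n + 1):
        assert rho_squared(n, k) <= Fraction(1, k - 1) - Fraction(1, n)
```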
Now, we can bound the logarithms of the terms in (13.2) one by one. We have $\ln\binom{N}{k} \le k\ln N - \ln(k!) = k\left(\ln\frac{N}{n} + \ln n\right) - \ln(k!) \le n + k\ln n - k\ln n + O(n) = O(n)$; this term is negligible. Next, $\ln M(k{-}1)$ contributes about $-\ln(k!) = -k\ln n + O(n)$. The main contribution comes from the term
$$\ln \rho(n,k)^{n-k+1} \le -(n-k)\ln\sqrt{k} + O(n) = \frac{n}{2}\left(-\ln n + \ln\ln\left(\tfrac{N}{n}+1\right)\right) + \frac{k}{2}\ln n + O(n).$$
Finally,
$$\ln\frac{\mathrm{vol}_{n-k+1}(B^{n-k+1})}{\mathrm{vol}(B^n)} = \ln\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n-k+1}{2}+1)} + O(n) \le \ln n^{k/2} + O(n) = \frac{k}{2}\ln n + O(n).$$
The term $-k\ln n$ originating from $M(k{-}1)$ cancels out nicely with the two terms $\frac{k}{2}\ln n$, and altogether we obtain $\frac{n}{2}\left(-\ln n + \ln\ln(\frac{N}{n}+1) + O(1)\right)$ as claimed in the theorem. $\Box$

Bibliography and remarks. Our presentation of Theorem 13.2.1 mostly follows Bárány and Füredi [BF87]. They pursued the hardness of deterministic volume approximation, inspired by an earlier result of Elekes [Ele86] (see Exercise 5). They proved the weaker bound (13.1); the stronger bound in Theorem 13.2.1, in a slightly different form, was obtained in their subsequent paper [BF88]. Theorem 13.2.1 was also derived by Carl and Pajor [CP88] from a work of Carl [Car85] (they provide similar near-tight bounds for $\ell_p$-balls).

A dual version of Theorem 13.2.1 was independently discovered by Gluskin [Glu89] and by Bourgain, Lindenstrauss, and Milman [BLM89]. The dual setting deals with the minimum volume of the intersection of $N$ symmetric slabs in $\mathbf{R}^n$. Namely, let $u_1, u_2, \ldots, u_N \in \mathbf{R}^n$ be given (nonzero) vectors, and let $K = \bigcap_{i=1}^{N} \{x \in \mathbf{R}^n\colon |\langle u_i, x\rangle| \le 1\}$ (the width of the $i$th slab is $\frac{2}{\|u_i\|}$). The dual analogue of Theorem 13.2.1 is this: Whenever all $\|u_i\| \le 1$, we have $\mathrm{vol}(B^n)/\mathrm{vol}(K) \le \left(\frac{C}{n}\ln(\frac{N}{n}+1)\right)^{n/2}$. A short and beautiful proof can be found in Ball's handbook chapter [Bal]. There are also bounds based on the sum of norms of the $u_i$. Namely, for all $p \in [1, \infty)$, we have $\mathrm{vol}(K)^{1/n} \ge \frac{c}{p^{1/2} R}$, where $R = \left(\frac{1}{n}\sum_{i=1}^{N} \|u_i\|^p\right)^{1/p}$ (Euclidean norms!), as was proved by Ball and Pajor [BP90]; it also follows from Gluskin's work [Glu89]. For $p = 2$, this result was established earlier by Vaaler. It has the following nice reformulation: The intersection of the cube $[-1, 1]^N$ with any $n$-flat through 0 has $n$-dimensional volume at least $2^n$ (see [Bal] for more information and related results).

The setting with slabs and that of Theorem 13.2.1 are connected by the Blaschke-Santaló inequality$^2$ and the inverse Blaschke-Santaló inequality. The former states that $\mathrm{vol}(K)\,\mathrm{vol}(K^*) \le \mathrm{vol}(B^n)^2 \le c^n/n!$ for every centrally symmetric convex body in $\mathbf{R}^n$ (or, more generally, for every convex body $K$ having 0 as the center of gravity). It allows one the passage from the setting with slabs to the setting of Theorem 13.2.1: If the intersection of the slabs $\{x\colon |\langle u_i, x\rangle| \le 1\}$ has large volume, then $\mathrm{conv}\{u_1, \ldots, u_N\}$ has small volume. The inverse Blaschke-Santaló inequality, as was mentioned in the text, asserts that $\mathrm{vol}(K)\,\mathrm{vol}(K^*) \ge c^n/n!$ for a suitable $c > 0$, and it can thus be used for the reverse transition. It is much more difficult than the Blaschke-Santaló inequality and it was proved by Bourgain and Milman; see, e.g., [Mil98] for discussion and references. Let us remark that the weaker bound $\left(\frac{C}{n}\ln N\right)^{n/2}$ is relatively easy to prove in the dual setting with slabs (Exercise 14.1.4), which together with the Blaschke-Santaló inequality gives (13.1).

Theorem 13.2.1 concerns the situation where $\mathrm{vol}(P)$ is small compared to $\mathrm{vol}(B^n)$. The smallest number of vertices of $P$ such that $\mathrm{vol}(P) \ge (1-\varepsilon)\,\mathrm{vol}(B^n)$ for a small $\varepsilon > 0$ was investigated by Gordon, Reisner, and Schütt [GRS97].

$^2$ In the literature one often finds it as either Blaschke's inequality or Santaló's inequality. Blaschke proved it for $n \le 3$ and Santaló for all $n$; see, e.g., the chapter by Lutwak in the Handbook of Convex Geometry [GW93].
(a) Calculate the inradius and circumradius of a regular n-dimensional simplex. li1 (b) Calculate the volume of the regular n-dimensional simplex inscribed in the unit ball Bn. 0 2. Suppose that the vertices of an n-dimensional simplex S lie on the sphere sn-l and for each vertex v' the hyperplane tangent to sn-l at v is parallel to the facet of S opposite to v. Check that S is regular. 0 3. Let S c an be a simplex circumscribed about Bn and let F be a facet of S touching Bn at a point c. Show that if c is not the center of gravity of F, then there is another simplex S' (arising by slightly moving the hyperplane that determines the facet F) that contains Bn and has volume smaller than vol ( S). 111 4. The width of a convex body K is the minimum distance of two parallel hyperplanes such that K lies between them. Prove that the convex hull of N points in Bn has width at most 0( /(In N)/n ). li1 5. (A weaker but simpler estimate) Let V c an be a finite set. Prove that conv(V) C UvEv B(•v, • llvll), where B(x, r) is the ball of radius r centered at x. Deduce that the convex hull of N points contained in Bn has volume at most fn vol(Bn). [!] This is essentially the argument of Elekes [Ele86]. 13.3 Constructing Polytopes of Large Volume For all N in the range 2n < N < 4n, we construct a polytope P c Bn with N vertices containing a ball of radius r = n ( ( (ln y) jn) 112) . This shows that 13.3 Constructing Polytopes of Large Volume 323 the bound in Theorem 13.2 .1 is tight for N > 2n, since vol( P) j vol( Bn) > rn. We begin with two extreme cases. First we construct a k-dimensional polytope Po c Bk with 4k vertices containing the ball • Bk. There are several possible ways; the simplest is based on 17-nets. We choose a 1-net V c sk-l and set Po = conv(V). According to Lemma 13.1.1, we have N = lVI < 4k. 
If there were an x with llxll = • not lying in Po, then the separating hyperplane passing through x and avoiding Po would define a cap (shaded) whose center y would be at distance at least 1 from V. Another extreme case is with N = 2q vertices in dimension n = q. Then we can take the cross polytope, i.e., the convex hull of the vectors e1, -e1 , . . . , eq, -eq, where ( e1, • • . , eq) is the standard orthonorntal basis. The radius of the inscribed ball is r = )q, which matches the asserted formula. Next, suppose that n = qk for integers q and k and set N = q4k. From N = q4k = n 4k we have N = 4k/k > ek and k < In N and so q > n/ ln N . k n --n ' -n Hence it suffices to construct an N-vertex polytope P c Bn containing the ball rBn with r = 2-Jq . The construction of P is a combination of the two constructions above. We interpret Rn as the product Rk x Rk x · · · x Rk (q factors). In each of the copies of Rk, we choose a polytope P0 with 4k vertices as above, and we let P be the convex hull of their union. More formally, P = conv{ (9, 0, .:,.· . , Ǩ· x1, x2, . . . , Xk, 0, 0, . . . , 0): (xt, . . . , xk) E V, (i-l)kx i = 1, 2, . . . , q}, where V is the vertex set of Po. We want to show that P contains the ball r Bn, r = 2-Jq . Let x be a point of norm llxll < r and let x(i) be the vector obtained from x by retaining the coordinates in the ith block, i.e., in positions (i-1)k+l, . . . , ik, and setting all the other coordinates to 0. These x(i) are pairwise orthogonal, and x lies in the q-dimensional subspace spanned by them. Let y(i) = 21;;> 11 be the vector of length e in the direction of x< i). Each y( i) is contained in P, since P0 contains the ball of radius Ø. The convex hull of the y(i) is a q-dimensional 324 Chapter 13: V olumes in High Dimension crosspolytope of circumradius •, and so it contains all vectors of norm 2ǧ in the subspace spanned by the x(i), including x. 
This construction assumes that n and N are of a special form, but it is not difficult to extend the bounds to all n ≥ 2 and all N in the range 2n ≤ N ≤ 4^n by monotonicity considerations; we omit the details. This proves that the bound in Theorem 13.2.1 is tight up to the value of the constant C for 2n ≤ N ≤ 4^n. □

Bibliography and remarks. Several proofs are known for the lower bound almost matching Theorem 13.2.1 (Bárány and Füredi [BF87], Carl and Pajor [CP88], Kochol [Koc94]). In Bárány and Füredi [BF87], the appropriate polytope is obtained essentially as the convex hull of N random points on S^{n-1} (for technical reasons, a few special vertices are added), and the volume estimate is derived from an exact formula for the expected surface measure of the convex hull of N random points on S^{n-1} due to Buchta, Müller, and Tichy [BMT95]. The idea of the beautifully simple construction in the text is due to Kochol [Koc94]. His treatment of the basic case with exponentially large N is different, though: He takes the points of a suitably scaled integer lattice contained in B^k for V, which yields an efficient construction (unlike the argument with a 1-net used in the text, which is only existential).

Exercises

1. (Polytopes in B^n with polynomially many facets)
(a) Show that the cube inscribed in the unit ball B^n, which is a convex polytope with 2n facets, has volume of a larger order of magnitude than any convex polytope in B^n with polynomially many vertices (and so, concerning volume, "facets are better than vertices").
(b) Prove that the inradius of any convex polytope with N facets contained in B^n is at most O(√((ln(N/n + 1))/n)) (and so, in this respect, facets are not better than vertices).
These observations are from Brieden and Kochol [BK00].
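Exercise 1(a) can be made concrete with a small computation (a sketch, not from the text; the value of the constant C and the choice N = n² are illustrative assumptions). Working with logarithms of volumes avoids overflow: the inscribed cube has side 2/√n, while, in the notation used here, Theorem 13.2.1 caps any N-vertex polytope in B^n at (C·ln(N/n)/n)^{n/2}·vol(B^n).

```python
import math

def log_ball_volume(n):
    # log of vol(B^n) = pi^(n/2) / Gamma(n/2 + 1)
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

n = 100
N = n ** 2          # polynomially many vertices (illustrative choice)
C = 4.0             # hypothetical value of the constant in Theorem 13.2.1

# cube inscribed in B^n: side 2/sqrt(n), so log volume = n * log(2/sqrt(n))
log_cube = n * math.log(2.0 / math.sqrt(n))

# upper bound for any N-vertex polytope: (C ln(N/n) / n)^(n/2) * vol(B^n)
log_poly_bound = (n / 2) * math.log(C * math.log(N / n) / n) + log_ball_volume(n)

# the 2n-facet cube beats every polytope with n^2 vertices by a huge factor
assert log_cube > log_poly_bound
```

The gap grows exponentially with n: the cube-to-ball volume ratio is of order c^n for a constant c, while the vertex bound decays like (ln n / n)^{n/2}.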
13.4 Approximating Convex Bodies by Ellipsoids

One of the most important issues in the life of convex bodies is their approximation by ellipsoids, since ellipsoids are in many respects the simplest imaginable compact convex bodies. The following result tells us how well they can generally be approximated (or how badly, depending on the point of view).

13.4.1 Theorem (John's lemma). Let K ⊂ R^n be a bounded closed convex body with nonempty interior. Then there exists an ellipsoid E_in such that

E_in ⊆ K ⊆ E_out,

where E_out is E_in expanded from its center by the factor n. If K is symmetric about the origin, then we have the improved approximation E_in ⊆ K ⊆ E_out = √n · E_in.

Thus, K can be approximated from outside and from inside by similar ellipsoids with ratio 1 : n, or 1 : √n in the centrally symmetric case. Both these ratios are the best possible in general, as is shown by K being the regular simplex in the general case and the cube in the centrally symmetric case.

In order to work with ellipsoids, we need a rigorous definition. A suitable one is to consider ellipsoids as affine images of the unit ball: If B^n denotes the unit ball in R^n, an ellipsoid E is a set E = f(B^n), where f: R^n → R^n is an affine map of the form f: x ↦ Ax + c. Here x is regarded as a column vector, c ∈ R^n is a translation vector, and A is a nonsingular n×n matrix. A very simple case is that of c = 0 and A a diagonal matrix with positive entries a₁, a₂, ..., a_n on the diagonal. Then

E = { x ∈ R^n : x₁²/a₁² + x₂²/a₂² + ··· + x_n²/a_n² ≤ 1 },    (13.3)

as is easy to check; this is an ellipsoid with center at 0 and with semiaxes a₁, a₂, ..., a_n. In this case we have vol(E) = a₁a₂···a_n · vol(B^n). An arbitrary ellipsoid E can be brought to this form by a suitable translation and rotation about the origin.
In the language of linear algebra, this corresponds to diagonalizing a positive definite matrix using an orthonormal basis consisting of its eigenvectors; see Exercise 1.

Proof of Theorem 13.4.1. In both cases in the theorem, E_in is chosen as an ellipsoid of the largest possible volume contained in K. Easy compactness considerations show that a maximum-volume ellipsoid exists. In fact, it is also unique, but we will not prove this. (Alternatively, the proof can be done starting with the smallest-volume ellipsoid enclosing K, but this has some technical disadvantages. For example, its existence is not so obvious.)

We prove only the centrally symmetric case of John's lemma. The nonsymmetric case follows the same idea, but the calculations are different and more complicated, and we leave them to Exercise 2. So we suppose that K is symmetric about 0, and we fix the ellipsoid E_in of maximum volume contained in K. It is easily seen that E_in can be assumed to be symmetric, too. We make a linear transformation so that E_in becomes the unit ball B^n. Assuming that the enlarged ball √n · B^n does not contain K, we derive a contradiction by exhibiting an ellipsoid E' ⊆ K with vol(E') > vol(B^n).

We know that there is a point x ∈ K with ||x|| > √n. For convenience, we may suppose that x = (s, 0, 0, ..., 0), s > √n. To finish the proof, we check that the region

R = conv(B^n ∪ {−x, x})

contains an ellipsoid E' of volume larger than vol(B^n). The calculation is a little unpleasant but not so bad, after all. The region R is a rotational body; all its sections by hyperplanes perpendicular to the x₁-axis are balls. We naturally also choose E' with this property: The semiaxis in the x₁-direction is some a > 1, while the slice with the hyperplane {x₁ = 0} is a ball of a suitable radius b < 1. We have vol(E') = a·b^{n−1}·vol(B^n), and so we want to choose a and b such that a·b^{n−1} > 1 and E' ⊆ R.
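The endgame of this computation can be sanity-checked numerically (a sketch, not from the text; the values of n, s, and ε below are arbitrary). It uses the relation a² = s²(1 − b²) + b² between the semiaxes, which is derived in the next step of the proof, together with the parameterization b² = 1 − ε.

```python
import math

def squared_volume_gain(n, s, eps):
    # vol(E')^2 / vol(B^n)^2 = a^2 * b^(2(n-1)),
    # with b^2 = 1 - eps and a^2 = s^2 (1 - b^2) + b^2
    b2 = 1.0 - eps
    a2 = s * s * (1.0 - b2) + b2
    return a2 * b2 ** (n - 1)

n = 50
s = math.sqrt(n) + 0.1     # a point of K lying outside sqrt(n) * B^n
eps = 1e-4

gain = squared_volume_gain(n, s, eps)
assert gain > 1.0          # E' has larger volume than B^n: the contradiction

# first-order behavior matches the expansion 1 + (s^2 - n) eps + O(eps^2)
assert abs((gain - 1.0) - (s * s - n) * eps) < 5e-5

# with s = sqrt(n) exactly there is no gain, as expected
assert squared_volume_gain(n, math.sqrt(n), eps) < 1.0
```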
By the rotational symmetry, it suffices to consider the planar situation and make sure that the ellipse with semiaxes a and b is contained in the planar region depicted above. In order to avoid direct computation of a tangent to the ellipse, we multiply the x₁-coordinate of all points by the factor b/a. This turns our ellipse into the dashed ball of radius b. A bit of trigonometry yields the tangency condition: the tangent from the point (s, 0) to the unit circle is the line x₁ + x₂√(s²−1) = s, the scaling turns it into the line (a/b)x₁ + x₂√(s²−1) = s, and requiring its distance from the origin to equal b leads to

a² = s²(1 − b²) + b².

We now choose b just a little smaller than 1; a suitable parameterization is b² = 1 − ε for a small ε > 0. We want to show that a·b^{n−1} > 1, and for convenience, we work with the square. We have

a²·b^{2(n−1)} = (s²ε + 1 − ε)(1 − ε)^{n−1}.

The Maclaurin series of the right-hand side in the variable ε is 1 + (s² − n)ε + O(ε²). Since s² > n, the expression indeed exceeds 1 for all sufficiently small ε > 0. Theorem 13.4.1 is proved. □

Bibliography and remarks. Theorem 13.4.1 was obtained by John [Joh48]. He actually proved a stronger statement, which can be quite useful in many applications. Roughly speaking, it says that the maximum-volume inscribed ellipsoid has many points of contact with K that "fix" it within K. The statement and proof are nicely explained in Ball [Bal97].

As was remarked in the text, the maximum-volume ellipsoid contained in K is unique. The same is true for the minimum-volume enclosing ellipsoid of K; a proof of the latter fact is outlined in Exercise 3. The uniqueness was proved independently by several authors, and the oldest such results seem to be due to Löwner (see Danzer, Grünbaum, and Klee [DGK63] for references). The minimum-volume enclosing ellipsoid is sometimes called the Löwner-John ellipsoid, but in other sources the same name refers to the maximum-volume inscribed ellipsoid.

The exact computation of the smallest enclosing ellipsoid for a given convex body K is generally hard.
For example, it is NP-hard to compute the smallest enclosing ellipsoid of a given finite set if the dimension is a part of the input (there are linear-time algorithms for every fixed dimension; see, e.g., Matoušek, Sharir, and Welzl [MSW96]). But under suitable algorithmic assumptions on the way that a convex body K is given (a weak separation oracle), it is possible to compute in polynomial time an enclosing ellipsoid such that its shrinking by a factor of roughly n^{3/2} (roughly n in the centrally symmetric case) is contained in K (if K is given as an H-polytope, then these factors can be improved to the nearly worst-case optimal n+1 and √(n+1), respectively). Finding such approximating ellipsoids is a basic subroutine in other important algorithms; see Grötschel, Lovász, and Schrijver [GLS88] for more information.

There are several other significant ellipsoids associated with a given convex body that approximate it in various ways; see, e.g., Lindenstrauss and Milman [LM93] and Tomczak-Jaegermann [TJ89].

Exercises

1. Let E be the ellipsoid f(B^n), where f: x ↦ Ax for an n×n nonsingular matrix A.
(a) Show that E = {x ∈ R^n : xᵀBx ≤ 1}. What is the matrix B?
(b) Recall or look up appropriate theorems in linear algebra showing that there is an orthogonal matrix T such that B' = TBT^{−1} is a diagonal matrix with the eigenvalues of B on the diagonal (check and use the fact that B is positive definite in our case).
(c) What is the geometric meaning of T, and what is the relation of the entries of TBT^{−1} to the semiaxes of the ellipsoid E?
2. Prove the part of Theorem 13.4.1 dealing with not necessarily symmetric convex bodies.
3. (Uniqueness of the smallest enclosing ellipsoid) Let X ⊂ R^n be a bounded set that is not contained in a hyperplane (i.e., it contains n+1 affinely independent points). Let E(X) be the set of all ellipsoids in R^n containing X.
(a) Prove that there exists an E₀ ∈ E(X) with vol(E₀) = inf{vol(E) : E ∈ E(X)}. (Show that the infimum can be taken over a suitable compact subset of E(X).)
(b) Let E₁, E₂ be ellipsoids in R^n; check that after a suitable affine transformation of coordinates, we may assume that E₁ = {x ∈ R^n : Σ_{i=1}^n x_i²/a_i² ≤ 1} and E₂ = {x ∈ R^n : ||x − c|| ≤ 1}. Define E = {x ∈ R^n : ½·Σ_{i=1}^n x_i²/a_i² + ½·Σ_{i=1}^n (x_i − c_i)² ≤ 1}. Verify that E₁ ∩ E₂ ⊆ E, that E is an ellipsoid, and that vol(E) ≥ min(vol(E₁), vol(E₂)), with equality only if E₁ = E₂. Conclude that the smallest-volume enclosing ellipsoid of X is unique.
4. (Uniqueness of the smallest enclosing ball)
(a) In analogy with Exercise 3, prove that for every bounded set X ⊂ R^n there exists a unique minimum-volume ball containing X.
(b) Show that if X ⊂ R^n is finite, then the smallest enclosing ball is determined by at most n+1 points of X; that is, there exists an at most (n+1)-point subset of X whose smallest enclosing ball is the same as that of X.
5. (a) Let P ⊂ R² be a convex polygon with n vertices. Prove that there are three consecutive vertices of P such that the area of their convex hull is at most O(n^{−3}) times the area of P.
(b) Using (a) and the fact that every triangle with vertices at integer points has area at least ½ (check!), prove that every convex n-gon with integral vertices has area Ω(n³).
Remark. Rényi and Sulanke [RS64] proved that the worst case in (a) is the regular convex n-gon.

14 Measure Concentration and Almost Spherical Sections

In the first two sections we are going to discuss measure concentration on a high-dimensional unit sphere. Roughly speaking, measure concentration says that if A ⊆ S^{n-1} is a set occupying at least half of the sphere, then almost all points of S^{n-1} are quite close to A, at distance about O(n^{−1/2}). Measure concentration is an extremely useful technical tool in high-dimensional geometry.
From the point of view of probability theory, it provides tail estimates for random variables defined on S^{n-1}, and in this respect it resembles Chernoff-type tail estimates for sums of independent random variables. But it is of a more general nature, more like the tail estimates for Lipschitz functions on discrete spaces obtained using martingales.

The second main theme of this chapter is almost-spherical sections of convex bodies. Given a convex body K ⊆ R^n, we want to find a k-dimensional subspace L of R^n such that K ∩ L is almost spherical; i.e., it contains a ball of some radius r and is contained in the concentric ball of radius (1+ε)r. A remarkable Ramsey-type result, Dvoretzky's theorem, shows that with k being about ε² log n, such a k-dimensional almost-spherical section exists for every K. We also include an application concerning convex polytopes, showing that a high-dimensional centrally symmetric convex polytope cannot have both a small number of vertices and a small number of facets.

Both measure concentration and the existence of almost-spherical sections are truly high-dimensional phenomena, practically meaningless in the familiar dimensions 2 and 3. The low-dimensional intuition is of little use here, but perhaps by studying many results and examples one can develop intuition on what to expect in high dimensions.

We present only a few selected results from an extensive and well-developed theory of high-dimensional convexity. Most of it was built in the so-called local theory of Banach spaces, which deals with the geometry of finite-dimensional subspaces of various Banach spaces. In the literature, the theorems are usually formulated in the language of Banach spaces, so instead of symmetric convex bodies one speaks about norms, and so on.
Here we introduce some rudimentary terminology concerning normed spaces, but we express most of the notions in geometric language, hoping to make it more accessible to nonspecialists in Banach spaces. So, for example, in the formulation of Dvoretzky's theorem, we do not speak about the Banach-Mazur distance to an inner product norm but rather about almost spherical convex bodies. On the other hand, for a more serious study of this theory, the language of normed spaces seems necessary.

14.1 Measure Concentration on the Sphere

Let P denote the usual surface measure on the unit Euclidean sphere S^{n-1}, scaled so that all of S^{n-1} has measure 1 (a rigorous definition will be mentioned later). This P is a probability measure, and we often think of S^{n-1} as a probability space. For a set A ⊆ S^{n-1}, P[A] is the P-measure of A and also the probability that a random point of S^{n-1} falls into A. The letter P should suggest "probability of," and the notation P[A] is analogous to Prob[A] used elsewhere in the book.

Measure concentration on the sphere can be approached in two steps. The first step is the observation, interesting but rather easy to prove, that for large n, most of S^{n-1} lies quite close to the "equator." For example, the following diagram shows the width of the band around the equator that contains 90% of the measure, for various dimensions n:

[Figure: the band around the equator containing 90% of the measure, for n = 3, n = 11, and n = 101; the band narrows as n grows]

That is, if the width of the gray stripe is 2w, then

P[{x ∈ S^{n-1} : −w ≤ x_n ≤ w}] = 0.9.

As we will see later, w is of order n^{−1/2} for large n. (Of course, one might ask why the measure is concentrated just around the "equator" x_n = 0. But counterintuitive as it may sound, it is concentrated around any equator, i.e., near any hyperplane containing the origin.)
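A quick Monte Carlo experiment reproduces the shrinking band (a sketch, not from the text; the sampling trick, seed, and sample sizes are my own choices). The last coordinate of a uniform point on S^{n-1}, obtained by normalizing a standard Gaussian vector, has 90% of its mass in a band of half-width roughly 1.6/√n.

```python
import math, random

def sample_last_coordinate(n, rng):
    # last coordinate of a uniform point on S^{n-1}:
    # normalize an n-dimensional standard Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return v[-1] / math.sqrt(sum(t * t for t in v))

def band_halfwidth(n, trials=10000, mass=0.9):
    rng = random.Random(1)
    xs = sorted(abs(sample_last_coordinate(n, rng)) for _ in range(trials))
    return xs[int(mass * trials)]   # empirical 90% quantile of |x_n|

w11, w101 = band_halfwidth(11), band_halfwidth(101)
assert w101 < w11                          # the band narrows as n grows
assert 1.0 < w101 * math.sqrt(101) < 2.5   # w is of order n^(-1/2)
```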
The second, considerably deeper, step shows that the measure on S^{n-1} is concentrated not only around the equator, but near the boundary of any (measurable) subset A ⊆ S^{n-1} covering half of the sphere. Here is a precise quantitative formulation.

14.1.1 Theorem (Measure concentration for the sphere). Let A ⊆ S^{n-1} be a measurable set with P[A] ≥ ½, and let A_t denote the t-neighborhood of A, that is, the set of all x ∈ S^{n-1} whose Euclidean distance to A is at most t. Then

P[A_t] ≥ 1 − 2e^{−t²n/2}.

Thus, if A occupies half of the sphere, almost all points of the sphere lie at distance at most O(n^{−1/2}) from A; only extremely small reserves can vegetate undisturbed by the nearness of A. (There is nothing very special about measure ½ here; see Exercise 1 for an analogous result with P[A] = α ∈ (0, ½).) To recover the concentration around the equator, it suffices to choose A first as the northern hemisphere and then as the southern hemisphere.

We present a simple and direct geometric proof of a slightly weaker version of Theorem 14.1.1, with −t²n/4 in the exponent instead of −t²n/2. It deals with both steps mentioned above in one stroke. It is based on the Brunn-Minkowski inequality: vol(A)^{1/n} + vol(B)^{1/n} ≤ vol(A + B)^{1/n} for any nonempty compact sets A, B ⊆ R^n (Theorem 12.2.2). We actually use a slightly different version of the inequality, which resembles the well-known inequality between the arithmetic and geometric means, at least optically:

vol(½(A + B)) ≥ √(vol(A) vol(B)).    (14.1)

This is easily derived from the usual version: We have

vol(½(A + B))^{1/n} ≥ vol(½A)^{1/n} + vol(½B)^{1/n} = ½(vol(A)^{1/n} + vol(B)^{1/n}) ≥ (vol(A) vol(B))^{1/2n}

by the inequality ½(a + b) ≥ √(ab).

Proof of a weaker version of Theorem 14.1.1. For a set A ⊆ S^{n-1}, we define Ã as the union of all the segments connecting the points of A to 0:

Ã = {αx : x ∈ A, α ∈ [0, 1]} ⊆ R^n.
Then we have P[A] = μ(Ã), where μ(Ã) = vol(Ã)/vol(B^n) is the normalized volume of Ã; in fact, this can be taken as the definition of P[A]. Let t ∈ [0, 1], let P[A] ≥ ½, and let B = S^{n-1} \ A_t. Then ||a − b|| ≥ t for all a ∈ A, b ∈ B.

14.1.2 Lemma. For any x̃ ∈ Ã and ỹ ∈ B̃, we have ||(x̃ + ỹ)/2|| ≤ 1 − t²/8.

Proof of the lemma. Let x̃ = αx and ỹ = βy, where x ∈ A, y ∈ B. First we calculate, by the Pythagorean theorem and by elementary calculus,

||(x + y)/2|| = √(1 − ||x − y||²/4) ≤ (1 − t²/4)^{1/2} ≤ 1 − t²/8.

For passing to x̃ and ỹ, we may assume that α ≤ β = 1. Then

||(x̃ + ỹ)/2|| = ||(αx + y)/2|| ≤ α·||(x + y)/2|| + (1 − α)·||y/2|| ≤ α(1 − t²/8) + (1 − α)·½ ≤ 1 − t²/8.

The lemma is proved.

By the lemma, the set ½(Ã + B̃) is contained in the ball of radius 1 − t²/8 around the origin. Applying Brunn-Minkowski in the form (14.1) to Ã and B̃, we have

√(P[A]·P[B]) = √(μ(Ã)·μ(B̃)) ≤ μ(½(Ã + B̃)) ≤ (1 − t²/8)^n.

So, using P[A] ≥ ½ and 1 − u ≤ e^{−u},

1 − P[A_t] = P[B] ≤ 2(1 − t²/8)^{2n} ≤ 2e^{−t²n/4}.    □

Bibliography and remarks. The simple proof of the slightly weaker measure concentration result for the sphere shown in this section is due to Arias-de-Reyna, Ball, and Villa [ABV98]. More about the history of measure concentration and related results will be mentioned in the next section.

Exercises

1. Derive the following from Theorem 14.1.1: If A ⊆ S^{n-1} satisfies P[A] ≥ α, 0 < α < ½, then 1 − P[A_t] ≤ 2e^{−(t−t₀)²n/2}, where t₀ is such that 2e^{−t₀²n/2} ≤ α.
2. Let A, B ⊆ S^{n-1} be measurable sets with distance at least 2t. Prove that min(P[A], P[B]) ≤ 2e^{−t²n/2}.
3. Use Theorem 14.1.1 to show that any 1-dense set in the unit sphere S^{n-1} has at least c·e^{n/8} points, for a suitable constant c > 0.
4. Let K = ∩_{i=1}^N {x ∈ R^n : |⟨u_i, x⟩| ≤ 1} be the intersection of symmetric slabs determined by unit vectors u₁, ..., u_N ∈ R^n. Using Theorem 14.1.1, prove that vol(B^n)/vol(K) ≤ ((C ln N)/n)^{n/2} for a suitable constant C. The relation to Theorem 13.2.1 is explained in the notes to Section 13.2.
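The bound just proved can be compared with a simulation (a sketch, not from the text; the dimension, t, seed, and trial count are arbitrary). For A the upper hemisphere, the Euclidean distance from a point x = (x₁, v) with x₁ < 0 to A is attained at (0, v/||v||), which makes the measure of S^{n-1} \ A_t easy to estimate empirically.

```python
import math, random

def sphere_point(n, rng):
    # uniform point on S^{n-1} via a normalized Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(t * t for t in v))
    return [t / norm for t in v]

def dist_to_upper_hemisphere(x):
    # A = {y in S^{n-1} : y_1 >= 0}; for x_1 < 0 the nearest point of A
    # is (0, v/||v||), where x = (x_1, v)
    if x[0] >= 0.0:
        return 0.0
    rest = math.sqrt(sum(t * t for t in x[1:]))
    return math.sqrt(x[0] ** 2 + (1.0 - rest) ** 2)

n, t, trials = 200, 0.3, 5000
rng = random.Random(3)
outside = sum(1 for _ in range(trials)
              if dist_to_upper_hemisphere(sphere_point(n, rng)) > t)

weak_bound = 2.0 * math.exp(-t * t * n / 4.0)   # the bound proved in the text
assert outside / trials <= weak_bound
```

With these parameters the weaker bound is about 0.022, while the empirical fraction is far below it, consistent with the sharper exponent −t²n/2 of Theorem 14.1.1.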
14.2 Isoperimetric Inequalities and More on Concentration

The usual proof of Theorem 14.1.1 (measure concentration) has two steps. First, P[A_t] is bounded for A a hemisphere (which is elementary calculus), and second, it is shown that among all sets A of measure ½, the hemisphere has the smallest P[A_t]. The latter result is an example of an isoperimetric inequality.

Before we formulate this inequality, let us begin with the mother of all isoperimetric inequalities, the one for planar geometric figures. It states that among all planar geometric figures with a given perimeter, the circular disk has the largest possible area. (This is well known but not so easy to prove rigorously.)

More general isoperimetric inequalities are usually formulated using the volume of a neighborhood instead of "perimeter." They claim that among all sets of a given volume in some metric space under consideration, a ball of that volume has the smallest volume of the t-neighborhood. (In the accompanying picture, assuming that the dark areas are the same, the light gray area is the smallest for the disk.) Letting t → 0, one can get a statement involving the perimeter or surface area. But the formulation with the t-neighborhood makes sense even in spaces where "surface area" is not defined; it suffices to have a metric and a measure on the considered space.

Here is this "neighborhood" form of the isoperimetric inequality for the Euclidean space R^n with the Lebesgue measure.

14.2.1 Proposition. For any compact set A ⊆ R^n and any t ≥ 0, we have vol(A_t) ≥ vol(B_t), where B is a ball of the same volume as A.

Although we do not need this particular result in the further development, let us digress and mention a nice proof using the Brunn-Minkowski inequality (Theorem 12.2.2).

Proof. By rescaling, we may assume that B is a ball of unit radius.
Then A_t = A + tB, and so

vol(A_t) = vol(A + tB) ≥ (vol(A)^{1/n} + t·vol(B)^{1/n})^n = (1 + t)^n vol(B) = vol(B_t).    □

For the sphere S^{n-1} with the usual Euclidean metric inherited from R^n, an r-ball is a spherical cap, i.e., an intersection of S^{n-1} with a half-space. The isoperimetric inequality states that for all measurable sets A ⊆ S^{n-1} and all t ≥ 0, we have P[A_t] ≥ P[C_t], where C is a spherical cap with P[C] = P[A]. We are not going to prove this; no really simple proof seems to be known.

The measure concentration on the sphere (Theorem 14.1.1) is a rather direct consequence of this isoperimetric inequality, by the argument already indicated above. If P[A] ≥ ½, then P[A_t] ≥ P[C_t], where C is a cap with P[C] = ½, i.e., a hemisphere. Thus, it suffices to estimate the measure of the complementary cap S^{n-1} \ C_t.¹

Gaussian concentration. There are many other metric probability spaces with measure concentration phenomena analogous to Theorem 14.1.1. Perhaps the most important one is R^n with the Euclidean metric and with the n-dimensional Gaussian measure γ given by

γ(A) = (2π)^{−n/2} ∫_A e^{−||x||²/2} dx.

This is a probability measure on R^n corresponding to the n-dimensional normal distribution. Let Z₁, Z₂, ..., Z_n be independent real random variables, each of them with the standard normal distribution N(0, 1), i.e., such that

Prob[Z_i ≤ z] = (1/√(2π)) ∫_{−∞}^z e^{−t²/2} dt

for all z ∈ R. Then the vector (Z₁, Z₂, ..., Z_n) ∈ R^n is distributed according to the measure γ. This γ is spherically symmetric; the density function (2π)^{−n/2}e^{−||x||²/2} depends only on the distance of x from the origin. The distance from the origin of a point chosen at random according to this distribution is sharply concentrated around √n, and in many respects, choosing a random point according to γ is similar to choosing a random point from the uniform distribution on the sphere √n·S^{n-1}.
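The concentration of the norm around √n is easy to observe empirically (a sketch, not from the text; the dimension, seed, and sample size are arbitrary choices).

```python
import math, random

def gaussian_norm(n, rng):
    # norm of a point drawn from the n-dimensional standard Gaussian measure
    return math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)))

n, trials = 400, 2000
rng = random.Random(4)
norms = [gaussian_norm(n, rng) for _ in range(trials)]

mean = sum(norms) / trials
assert abs(mean - math.sqrt(n)) < 0.5      # mean close to sqrt(400) = 20

spread = max(norms) - min(norms)
assert spread < 0.3 * math.sqrt(n)         # fluctuations are O(1), tiny vs sqrt(n)
```

The fluctuations of the norm stay of constant order as n grows, so relative to √n the distribution looks ever more like the uniform measure on the sphere √n·S^{n-1}.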
The isoperimetric inequality for the Gaussian measure claims that among all sets A with a given γ(A), a half-space has the smallest possible measure of the t-neighborhood. By a simple calculation, this yields the corresponding theorem about measure concentration for the Gaussian measure:

14.2.2 Theorem (Gaussian measure concentration). Let a measurable set A ⊆ R^n satisfy γ(A) ≥ ½. Then γ(A_t) ≥ 1 − e^{−t²/2}.

¹ Theorem 14.1.1 provides a good upper bound for the measure of a spherical cap, but sometimes a lower bound is useful, too. Here are fairly precise estimates; for convenience they are expressed with a different parameterization. Let C(τ) = {x ∈ S^{n-1} : x₁ ≥ τ} denote the spherical cap of height 1 − τ. Then for 0 ≤ τ ≤ √(2/n) we have 1/12 ≤ P[C(τ)] ≤ ½, and for √(2/n) ≤ τ < 1 we have

(1/(6τ√n))·(1 − τ²)^{(n−1)/2} ≤ P[C(τ)] ≤ (1/(2τ√n))·(1 − τ²)^{(n−1)/2}.

These formulas are taken from Brieden et al. [BGK+99].

Note that the dimension does not appear in this inequality, and indeed the Gaussian concentration has infinite-dimensional versions as well. Measure concentration on S^{n-1}, with slightly suboptimal constants, can be proved as an easy consequence of the Gaussian concentration; see, for example, Milman and Schechtman [MS86] (Appendix V) or Pisier [Pis89]. Most of the results in the sequel obtained using measure concentration on the sphere can be derived from the Gaussian concentration as well. In more advanced applications the Gaussian concentration is often technically preferable, but here we stick to the perhaps more intuitive measure concentration on the sphere.

Other important "continuous" spaces with concentration results similar to Theorem 14.1.1 include the n-dimensional torus (the n-fold Cartesian product S¹ × ··· × S¹ ⊂ R^{2n}) and the group SO(n) of all rotations around the origin in R^n (see Section 14.4 for a little more about SO(n)).

Discrete metric spaces.
Similar concentration inequalities also hold in many discrete metric spaces encountered in combinatorics. One of the simplest examples is the n-dimensional Hamming cube C_n = {0, 1}^n. The points are n-component vectors of 0's and 1's, and their Hamming distance is the number of positions where they differ. The "volume" of a set A ⊆ {0, 1}^n is defined as P[A] = 2^{−n}|A|. An r-ball B is the set of all 0/1 vectors that differ from a given vector in at most r coordinates, and so its volume is

P[B] = 2^{−n}·(1 + (n choose 1) + (n choose 2) + ··· + (n choose r)).

The isoperimetric inequality for the Hamming cube, due to Harper, is exactly of the form announced above: If A ⊆ C_n is any set with P[A] ≥ P[B], then P[A_t] ≥ P[B_t]. Of course, if A is an r-ball, then A_t is an (r+t)-ball and we have equality. Suitable estimates (tail estimates for the binomial distribution in probability theory) then give an analogue of Theorem 14.1.1:

14.2.3 Theorem (Measure concentration for the cube). Let A ⊆ C_n satisfy P[A] ≥ ½. Then 1 − P[A_t] ≤ e^{−t²/(2n)}.

This is very similar to the situation for S^{n-1}; only the scaling is different: While the Hamming cube C_n has diameter n and the interesting range of t is from about √n to n, the sphere S^{n-1} has diameter 2, and the interesting t are in the range from about 1/√n to 2.

Another significant discrete metric space with similar measure concentration is the space S_n of all permutations of {1, 2, ..., n} (i.e., bijective mappings {1, 2, ..., n} → {1, 2, ..., n}). The distance of two permutations p₁ and p₂ is |{i : p₁(i) ≠ p₂(i)}|, and the measure is the usual uniform probability measure on S_n, in which every single permutation has measure 1/n!. Here a measure concentration inequality reads 1 − P[A_t] ≤ e^{−ct²/n}, for a suitable constant c > 0. The expander graphs, to be discussed in Section 15.5, also offer an example of spaces with measure concentration; see Exercise 15.5.7.

Bibliography and remarks.
A modern treatment of measure concentration is the book by Ledoux [Led01], to which we refer for more material and references. A concise introduction to concentration of Lipschitz functions and discrete isoperimetric inequalities, including some very recent material and combinatorial applications, is contained in the second edition of the book by Alon and Spencer [AS00]. Older material on measure concentration in discrete metric spaces, with martingale proofs and several combinatorial examples, can be found in Bollobás's survey [Bol87]. For isoperimetric inequalities and measure concentration on manifolds see also Gromov [Gro98] (or Gromov's appendix in [MS86]).

The Euclidean isoperimetric inequality (the ball has the smallest surface for a given volume) has a long and involved history. It has been "known" since antiquity, but full and rigorous proofs were obtained only in the nineteenth century; see, e.g., Talenti [Tal93] for references. The quick proof via Brunn-Minkowski is taken from Pisier [Pis89]. The exact isoperimetric inequality for the sphere was first proved (according to [FLM77]) by Schmidt [Sch48]. Figiel, Lindenstrauss, and Milman [FLM77] have a 3-page proof based on symmetrization.

Measure concentration on the sphere and on other spaces was first recognized as an important general tool in the local theory of Banach spaces, and its use was mainly pioneered by Milman. Several nice surveys with numerous applications, mainly in Banach spaces but also elsewhere, are available, such as Lindenstrauss [Lin92], Lindenstrauss and Milman [LM93], Milman [Mil98], and some chapters of the book by Benyamini and Lindenstrauss [BL99].

The Gaussian isoperimetric inequality was obtained by Borell [Bor75] and independently by Sudakov and Tsirel'son [ST74]. A proof can also be found in Pisier [Pis89]. Ball [Bal97] derives a slightly weaker version of the Gaussian concentration directly using the Prékopa-Leindler inequality mentioned in the notes to Section 12.2.
The exact isoperimetric inequality for the Hamming cube is due to Harper [Har66]. We will indicate a short proof of measure concentration for product spaces, including the Hamming cube, in the notes to the next section. More recently, very significant progress was made in the area of measure concentration and similar inequalities, especially on product spaces, mainly associated with the name of Talagrand; see, for instance, [Tal95] or the already mentioned book [Led01]. Talagrand's proof method, which works by establishing suitable one-dimensional inequalities and extending them to product spaces by a clever induction, also gives most of the concentration results previously obtained with the help of martingales.

Many new isoperimetric and concentration inequalities, as well as new proofs of known results, have been obtained by a function-theoretic (as opposed to geometric) approach. Here concentration inequalities are usually derived from other types of inequalities, such as logarithmic Sobolev inequalities (estimating the entropy of a random variable). One advantage of this is that while concentration inequalities usually do not behave well under products, entropy estimates extend to products automatically, and so it suffices to prove one-dimensional versions.

Reverse isoperimetric inequality. The smallest possible surface area of a set with given volume is determined by the isoperimetric inequality. In the other direction, the surface area can be arbitrarily large for a given volume, but a meaningful question is obtained if one considers affine-equivalence classes of convex bodies. The following reverse isoperimetric inequality was proved by Ball (see [Bal97] or [Bal]): For every n-dimensional convex body C there exists an affine image C' of unit volume whose surface area is no larger than the surface area of the n-dimensional unit-volume regular simplex.
Among symmetric convex bodies, the extremal body is the cube.

14.3 Concentration of Lipschitz Functions

Here we derive a form of measure concentration that is very suitable for applications. It says that any Lipschitz function on a high-dimensional sphere is tightly concentrated around its expectation. (Any measurable real function f: S^{n-1} → R can be regarded as a random variable, and its expectation is given by E[f] = ∫_{S^{n-1}} f(x) dP(x).) We recall that a mapping f between metric spaces is C-Lipschitz, where C ≥ 0 is a real number, if the distance of f(x) and f(y) is never larger than C times the distance of x and y.

We first show that a 1-Lipschitz function f: S^{n-1} → R is concentrated around its median. The median of a real-valued function f is defined as

med(f) = sup{t ∈ R : P[f < t] ≤ ½}.

Here P is the considered probability measure on the domain of f; in our case, it is the normalized surface measure on S^{n-1}. The notation P[f < t] is the usual probability-theory shorthand for P[{x ∈ S^{n-1} : f(x) < t}].

The following lemma looks obvious, but an actual proof is perhaps not completely obvious:

14.3.1 Lemma. Let f: Ω → R be a measurable function on a space Ω with a probability measure P. Then P[f < med(f)] ≤ ½ and P[f > med(f)] ≤ ½.

Proof. The first inequality can be derived from the σ-additivity of the measure P: The event {f < med(f)} is the increasing union of the events {f ≤ med(f) − 1/k}, k = 1, 2, ..., and so

P[f < med(f)] = sup_{k≥1} P[f ≤ med(f) − 1/k] ≤ ½,

since, by the definition of the median, for every k there is a t > med(f) − 1/k with P[f < t] ≤ ½. The second inequality follows similarly. □

We are ready to prove that any 1-Lipschitz function S^{n-1} → R is concentrated around its median:

14.3.2 Theorem (Lévy's lemma). Let f: S^{n-1} → R be 1-Lipschitz. Then for all t ∈ [0, 1],

P[f > med(f) + t] ≤ 2e^{−t²n/2}  and  P[f < med(f) − t] ≤ 2e^{−t²n/2}.

For example, on 99% of S^{n-1}, the function f attains values deviating from med(f) by at most 3.5·n^{−1/2}.

Proof. We prove only the first inequality.
Let A = {x in S^{n-1}: f(x) <= med(f)}. By Lemma 14.3.1, P[A] >= 1/2. Since f is 1-Lipschitz, we have f(x) <= med(f) + t for all x in the t-neighborhood A_t. Therefore, by Theorem 14.1.1, we get

P[f > med(f) + t] <= 1 - P[A_t] <= 2 e^{-t^2 n/2}. □

The median is generally difficult to compute. But for a 1-Lipschitz function, it cannot be too far from the expectation, which is usually easier to estimate:

14.3.3 Proposition. Let f: S^{n-1} -> R be 1-Lipschitz. Then |med(f) - E[f]| <= 12 n^{-1/2}.

Proof.

|med(f) - E[f]| <= E[|f - med(f)|] <= sum_{k=0}^infty ((k+1)/sqrt(n)) * P[|f - med(f)| >= k/sqrt(n)]
  <= n^{-1/2} sum_{k=0}^infty (k+1) * 4 e^{-k^2/2} <= 12 n^{-1/2}

(the numerical estimate of the last sum is not important; it is important that it converges to some constant, which is obvious). □

We derive a consequence of Levy's lemma on finding k-dimensional subspaces where a given Lipschitz function is almost constant. But first we need some notions and results.

Random rotations and random subspaces. We want to speak about a random k-dimensional (linear) subspace of R^n. We thus need to specify a probability measure on the set of all k-dimensional linear subspaces of R^n (the so-called Grassmann manifold, or Grassmannian). An elegant way of doing this is via random rotations.

A rotation rho is an isometry of R^n fixing the origin and preserving the orientation. In algebraic terms, rho is a linear mapping x -> Ax given by an orthogonal matrix A with determinant 1. The result of performing the rotation rho on the standard orthonormal basis (e_1, ..., e_n) of R^n is an n-tuple of orthonormal vectors, and these vectors are the columns of A. The group of all rotations of R^n around the origin with the operation of composition (corresponding to multiplication of the matrices) is denoted by SO(n), which stands for the special orthogonal group. With the natural topology (obtained by regarding the corresponding matrices as points of R^{n^2}), it is a compact group.
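In computations, a rotation distributed according to the natural invariant measure on SO(n) (described next in the text) can be sampled numerically. The following sketch is an illustration of ours, not from the text; it uses a standard recipe based on the QR decomposition of a Gaussian matrix, with the signs of R's diagonal fixed so that the distribution is invariant, and one column flipped if needed so that the determinant is +1.

```python
import numpy as np

def random_rotation(n, rng):
    """Sample a matrix from (approximately) the invariant measure on SO(n).

    Standard recipe: QR-decompose a Gaussian matrix, normalize the signs
    of R's diagonal (so Q is invariantly distributed on the orthogonal
    group), then flip one column if necessary to force det = +1.
    """
    G = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(G)
    Q = Q * np.sign(np.diag(R))    # fix signs: invariant distribution on O(n)
    if np.linalg.det(Q) < 0:       # restrict to SO(n)
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
A = random_rotation(5, rng)
# A is orthogonal with determinant +1, so x -> Ax is a rotation,
# and A @ u is uniformly distributed on S^{n-1} for any fixed unit vector u.
```

The sign fix matters: without it, the distribution produced by a plain QR decomposition is not invariant under the group action.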
By a general theorem in the theory of topological groups, there is a unique Borel probability measure on SO(n) (the Haar measure) that is invariant under the action of the elements of SO(n). Here is a more concrete description of this probability measure. To obtain a random rotation rho, we first choose a vector a_1 in S^{n-1} uniformly at random. Then we pick a_2 orthogonal to a_1; this a_2 is drawn from the uniform distribution on the (n-2)-dimensional sphere that is the intersection of S^{n-1} with the hyperplane perpendicular to a_1 and passing through 0. Then a_3 is chosen from the unit sphere within the (n-2)-dimensional subspace perpendicular to a_1 and a_2, and so on.

In the sequel we need only the following intuitively obvious fact about a random rotation rho in SO(n): For every fixed u in S^{n-1}, rho(u) is a uniformly distributed random point of S^{n-1}. Therefore, if u in S^{n-1} is fixed, A ⊆ S^{n-1} is measurable, and rho in SO(n) is random, then the probability of rho(u) in A equals P[A].

Let L_0 be the k-dimensional subspace spanned by the first k coordinate vectors e_1, e_2, ..., e_k. A random k-dimensional linear subspace L ⊆ R^n can be defined as rho(L_0), where rho in SO(n) is a random rotation.

By Levy's lemma, a 1-Lipschitz function on S^{n-1} is "almost constant" on a subset A occupying almost all of S^{n-1}. Generally we do not know anything about the shape of such an A. But the next proposition shows that the almost-constant behavior can be guaranteed on the intersection of S^{n-1} with a linear subspace of R^n of relatively large dimension.

14.3.4 Proposition (Subspace where a Lipschitz function is almost constant). Let f: S^{n-1} -> R be a 1-Lipschitz function and let delta in (0,1]. Then there is a linear subspace L ⊆ R^n such that all values of f restricted to S^{n-1} ∩ L lie in the interval [med(f) - delta, med(f) + delta] and

dim L >= (delta^2 / (8 log(8/delta))) * n - 1.

Proof. Let L_0 be the subspace spanned by the first k = ceil(n delta^2 / (8 log(8/delta))) - 1 coordinate vectors.
Fix a (delta/2)-net N (as defined above Lemma 13.1.1) in S^{n-1} ∩ L_0. Let rho in SO(n) be a random rotation. For x in N, rho(x) is a uniformly distributed random point, and so by Levy's lemma, the probability that |f(rho(x)) - med(f)| > delta/2 for at least one point x in N is no more than |N| * 4 e^{-delta^2 n/8}. Using the bound |N| <= (8/delta)^k from Lemma 13.1.1, we calculate that with positive probability, |f(y) - med(f)| <= delta/2 for all y in rho(N). We choose a rho with this property and let L = rho(L_0). For each x in S^{n-1} ∩ L, there is some y in rho(N) with ||x - y|| <= delta/2, and since f is 1-Lipschitz, we obtain

|f(x) - med(f)| <= |f(x) - f(y)| + |f(y) - med(f)| <= delta. □

Bibliography and remarks. Levy's lemma and a measure concentration result similar to Theorem 14.1.1 were found by Levy [Lev51]. Analogues of Levy's lemma for other spaces with measure concentration follow by the same argument. On the other hand, a measure concentration inequality for sets follows from concentration of Lipschitz functions (a Levy's lemma) on the considered space (Exercise 1). For some spaces, concentration of Lipschitz functions can be proved directly. Often this is done using martingales (see [Led01], [AS00d], [MS86], [Bol87]). Here we outline a proof without martingales (following [Led01]) for product spaces.

Let Omega be a space with a probability measure P and a metric rho. The Laplace functional E = E_{Omega,P,rho} is a function (0, infinity) -> R defined by

E(lambda) = sup{E[e^{lambda f}]: f: Omega -> R is 1-Lipschitz and E[f] = 0}.

First we show that a bound on E(lambda) implies concentration of Lipschitz functions. Assume that E(lambda) <= e^{a lambda^2/2} for some a > 0 and all lambda > 0, and let f: Omega -> R be 1-Lipschitz. We may suppose that E[f] = 0. Using Markov's inequality for the random variable Y = e^{lambda f}, we have

P[f >= t] = P[Y >= e^{t lambda}] <= E[Y]/e^{t lambda} <= E(lambda)/e^{t lambda} <= e^{a lambda^2/2 - lambda t},

and setting lambda = t/a yields P[f >= t] <= e^{-t^2/(2a)}. Next, for some spaces, E(lambda) can be bounded directly.
Here we show that if (Omega, rho) has diameter at most 1, then E(lambda) <= e^{lambda^2/2}. This can be proved by the following elegant trick. First we note that e^{E[f]} <= E[e^f] for any f, by Jensen's inequality in integral form, and so if E[f] = 0, then E[e^{-f}] >= 1. Then, for a 1-Lipschitz f with E[f] = 0, we calculate

E[e^{lambda f}] = int_Omega e^{lambda f(x)} dP(x)
  <= (int e^{-lambda f(y)} dP(y)) (int e^{lambda f(x)} dP(x))
  = int int e^{lambda (f(x) - f(y))} dP(x) dP(y)
  = sum_{i=0}^infty int int (lambda (f(x) - f(y)))^i / i! dP(x) dP(y).

For i even, we can bound the integrand by lambda^i/i!, since |f(x) - f(y)| <= 1. For odd i, the integral vanishes by symmetry. The resulting bound is sum_{k=0}^infty lambda^{2k}/(2k)! <= e^{lambda^2/2}. (If the diameter is D, then we obtain E(lambda) <= e^{D^2 lambda^2/2}.)

Finally, we prove that the Laplace functional is submultiplicative. Let (Omega_1, P_1, rho_1) and (Omega_2, P_2, rho_2) be spaces, let Omega = Omega_1 x Omega_2, P = P_1 x P_2, and rho = rho_1 + rho_2 (that is, rho((x,y),(x',y')) = rho_1(x,x') + rho_2(y,y')). We claim that E_{Omega,P,rho}(lambda) <= E_{Omega_1,P_1,rho_1}(lambda) * E_{Omega_2,P_2,rho_2}(lambda). To verify this, let f: Omega -> R be 1-Lipschitz with E[f] = 0, and set g(y) = E_x[f(x,y)] = int_{Omega_1} f(x,y) dP_1(x). We observe that g, being a weighted average of 1-Lipschitz functions, is 1-Lipschitz. We have

int_Omega e^{lambda f} dP = int_{Omega_2} e^{lambda g(y)} (int_{Omega_1} e^{lambda (f(x,y) - g(y))} dP_1(x)) dP_2(y).

The function x -> f(x,y) - g(y) is 1-Lipschitz and has zero expectation for each y, and so the inner integral is at most E_{Omega_1,P_1,rho_1}(lambda). Since g is 1-Lipschitz and E[g] = 0, we have int_{Omega_2} e^{lambda g(y)} dP_2(y) <= E_{Omega_2,P_2,rho_2}(lambda), and we are done.

By combining the above, we obtain, among other things, that if each of n spaces (Omega_i, P_i, rho_i) has diameter at most 1 and (Omega, P, rho) is their product, then P[f >= E[f] + t] <= e^{-t^2/2n} for all 1-Lipschitz f: Omega -> R. In particular, this applies to the Hamming cube. Proposition 14.3.4 is due to Milman [Mil69], [Mil71].

Exercises

1. Derive the measure concentration on the sphere (Theorem 14.1.1) from Levy's lemma.
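Levy's lemma is easy to probe numerically. The sketch below is an illustration of ours (the test function f(x) = x_1 and all constants are our choices, not the text's); it samples uniform points of S^{n-1} as normalized Gaussian vectors and compares the empirical tail with the bound 2 e^{-t^2 n/2}.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, samples, t = 200, 2000, 0.3

# Uniform points on S^{n-1}: normalize standard Gaussian vectors.
X = rng.standard_normal((samples, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# f(x) = x_1 is 1-Lipschitz on S^{n-1}, with med(f) = 0 by symmetry.
f = X[:, 0]
empirical_tail = np.mean(f > t)
levy_bound = 2 * math.exp(-t * t * n / 2)

# The empirical tail stays below Levy's bound 2 e^{-t^2 n/2}.
print(empirical_tail, levy_bound)
```

With n = 200 and t = 0.3 the bound is already about 2.5e-4, so essentially no sampled point deviates that far from the median.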
14.4 Almost Spherical Sections: The First Steps

For a real number t >= 1, we call a convex body K t-almost spherical if it contains a (Euclidean) ball B of some radius r and it is contained in the concentric ball of radius tr.

Given a centrally symmetric convex body K ⊆ R^n and eps > 0, we are interested in finding a k-dimensional (linear) subspace L, with k as large as possible, such that the "section" K ∩ L is (1+eps)-almost spherical.

Ellipsoids. First we deal with ellipsoids, where the existence of large spherical sections is not very surprising. But in the sequel it gives us additional freedom: Instead of looking for a (1+eps)-spherical section of a given convex body, we can as well look for a (1+eps)-ellipsoidal section, while losing only a factor of at most 2 in the dimension. This means that we are free to transform a given body by any (nonsingular) affine map, which is often convenient. Let us remark that in the local theory of Banach spaces, almost-ellipsoidal sections are usually as good as almost-spherical ones, and so the following lemma is often not even mentioned.

14.4.1 Lemma (Ellipsoids have large spherical sections). For any (2k-1)-dimensional ellipsoid E, there is a k-flat L passing through the center of E such that E ∩ L is a Euclidean ball.

Proof. Let E = {x in R^{2k-1}: sum_{i=1}^{2k-1} x_i^2/a_i^2 <= 1} with 0 < a_1 <= a_2 <= ... <= a_{2k-1}. We define the k-dimensional linear subspace L by a system of k-1 linear equations. The ith equation is x_{2k-i} = beta_i x_i, i = 1, 2, ..., k-1, where beta_i >= 0 is chosen so that

x_i^2/a_i^2 + x_{2k-i}^2/a_{2k-i}^2 = (x_i^2 + x_{2k-i}^2)/a_k^2

for x in L; explicitly, beta_i^2 = (a_i^{-2} - a_k^{-2}) / (a_k^{-2} - a_{2k-i}^{-2}) (if a_k = a_{2k-i} > a_i, one takes the equation x_i = 0 instead). It follows that sum_i x_i^2/a_i^2 = ||x||^2/a_k^2 on L, so for x in L we have x in E if and only if ||x|| <= a_k, and hence E ∩ L is a ball of radius a_k. The reader is invited to find a geometric meaning of this proof and/or express it in the language of eigenvalues. □

To make the formulas simpler, we consider only the case eps = 1 (2-almost spherical sections) in the rest of this section.
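The algebra in the proof of Lemma 14.4.1 can be sanity-checked numerically. Here is a minimal sketch of ours for k = 2, that is, a 3-dimensional ellipsoid; the semiaxes 1, 2, 3 are an arbitrary choice, not from the text.

```python
import math

# Semiaxes of an ellipsoid in R^3 (2k-1 = 3, so k = 2): a1 <= a2 <= a3.
a1, a2, a3 = 1.0, 2.0, 3.0

# One linear equation defines the 2-flat L: x3 = beta * x1, with beta
# chosen so that x1^2/a1^2 + x3^2/a3^2 = (x1^2 + x3^2)/a2^2 on L.
beta = math.sqrt((a1**-2 - a2**-2) / (a2**-2 - a3**-2))

def quadratic_form(x1, x2):
    """sum of x_i^2 / a_i^2 at the point (x1, x2, beta*x1) of L."""
    x3 = beta * x1
    return x1**2 / a1**2 + x2**2 / a2**2 + x3**2 / a3**2

def norm_sq(x1, x2):
    x3 = beta * x1
    return x1**2 + x2**2 + x3**2

# On L the form equals ||x||^2 / a2^2, so E cut with L is a ball of radius a2.
for (x1, x2) in [(1.0, 0.0), (0.3, -0.7), (2.0, 5.0)]:
    assert abs(quadratic_form(x1, x2) - norm_sq(x1, x2) / a2**2) < 1e-12
```

The assertions confirm that on L the defining quadratic form of E depends only on the Euclidean norm, which is exactly why the section is a ball.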
An arbitrary eps > 0 can always be handled very similarly.

The cube. The cube [-1,1]^n is a good test case for finding almost-spherical sections; it seems hard to imagine how a cube could have very round slices. In some sense this intuition is not totally wrong, since the almost-spherical sections of a cube can have only logarithmic dimension, as we verify next. (But the n-dimensional crosspolytope has (1+eps)-spherical sections of dimension as high as c(eps)n, and yet it does not look any rounder than the cube; so much for the intuition.)

The intersection of the cube with a k-dimensional linear subspace of R^n is a k-dimensional convex polytope with at most 2n facets.

14.4.2 Lemma. Let P be a k-dimensional 2-almost spherical convex polytope. Then P has at least (1/2) e^{k/8} facets.

Therefore, any 2-almost spherical section of the cube has dimension at most O(log n).

Proof of Lemma 14.4.2. After a suitable affine transform, we may assume (1/2)B^k ⊆ P ⊆ B^k. Each point x in S^{k-1} is separated from P by one of the facet hyperplanes. For each facet F of P, the facet hyperplane h_F cuts off a cap C_F of S^{k-1}, and these caps together cover all of S^{k-1}. The cap C_F is at distance at least 1/2 from the hemisphere defined by the hyperplane h'_F parallel to h_F and passing through 0.

[Figure: the cap C_F cut off by h_F, and the parallel hyperplane h'_F through 0.]

By Theorem 14.1.1 (measure concentration), we have P[C_F] <= 2 e^{-k/8}, and so at least (1/2) e^{k/8} caps are needed to cover S^{k-1}. □

Next, we show that the n-dimensional cube actually does have 2-almost spherical sections of dimension Omega(log n). First we need a k-dimensional 2-almost spherical polytope with 4^k facets. We note that if P is a convex polytope with B^k ⊆ P ⊆ tB^k, then the dual polytope P* satisfies (1/t)B^k ⊆ P* ⊆ B^k (Exercise 1). So it suffices to construct a k-dimensional 2-almost spherical polytope with 4^k vertices, and this was done in Section 13.3: We can take any 1-net in S^{k-1} as the vertex set.
(Let us remark that an exponential lower bound for the number of vertices also follows from Theorem 13.2.1.) By at most doubling the number of facets, we may assume that our k-dimensional 2-almost spherical polytope is centrally symmetric.

It remains to observe that every k-dimensional centrally symmetric convex polytope P with 2n facets is an affine image of the section [-1,1]^n ∩ L for a suitable k-dimensional linear subspace L ⊆ R^n. Indeed, such a P can be expressed as the intersection ∩_{i=1}^n {x in R^k: |<a_i, x>| <= 1}, where ±a_1, ..., ±a_n are suitably normalized normal vectors of the facets of P. Let f: R^k -> R^n be the linear map given by f(x) = (<a_1,x>, <a_2,x>, ..., <a_n,x>). Since P is bounded, the a_i span all of R^k, and so f has rank k. Consequently, its image L = f(R^k) is a k-dimensional subspace of R^n. We have P = f^{-1}([-1,1]^n), and so the intersection [-1,1]^n ∩ L is the affine image of P.

We see that the n-dimensional cube has 2-almost ellipsoidal sections of dimension Omega(log n) (as well as 2-almost spherical sections, by Lemma 14.4.1). Next, we make preparatory steps for finding almost-spherical sections of arbitrary centrally symmetric convex bodies. These considerations are most conveniently formulated in the language of norms.

Reminder on norms. We recall that a norm on a real vector space Z is a mapping that assigns a nonnegative real number ||x||_Z to each x in Z such that ||x||_Z = 0 implies x = 0, ||ax||_Z = |a| * ||x||_Z for all a in R, and the triangle inequality holds: ||x + y||_Z <= ||x||_Z + ||y||_Z. (Since we have reserved ||.|| for the Euclidean norm, we write other norms with various subscripts, or occasionally we use the symbol |.|.) Norms are in one-to-one correspondence with closed bounded convex bodies symmetric about 0 and containing 0 in their interior.
Here we need only one direction of this correspondence: Given a convex body K with the listed properties, we assign to it the norm ||.||_K given by

||x||_K = min{t > 0: x/t in K}   (x != 0),

and ||0||_K = 0. Here is an illustration:

[Figure: a planar convex body K, a point x on its boundary with ||x||_K = 1, and a point y outside with ||y||_K = 3.]

It is easy to verify the axioms of a norm (the convexity of K is needed for the triangle inequality). The body K is the unit ball of the norm ||.||_K. The norm of a point decreases by blowing up the body K.

General body: the first attempt. Let K ⊆ R^n be a convex body defining a norm (i.e., closed, bounded, symmetric, with 0 in the interior). Let us define the function f_K: S^{n-1} -> R as the restriction of the norm ||.||_K to S^{n-1}; that is, f_K(x) = ||x||_K. We note that K is t-almost spherical if (and only if) there is a number a > 0 such that a <= f_K(x) <= ta for all x in S^{n-1}. So for finding a large almost-spherical section of K, we need a linear subspace L such that f_K does not vary too much on S^{n-1} ∩ L, and this is where Proposition 14.3.4, about subspaces where a Lipschitz function is almost constant, comes in. Of course, that proposition has its assumptions, and one of them is that f_K is 1-Lipschitz. A sufficient condition for that is that K contain the unit ball:

14.4.3 Observation. Suppose that the convex body K contains the ball B(0,R). Then ||x||_K <= (1/R)||x|| for all x, and the function x -> ||x||_K is (1/R)-Lipschitz with respect to the Euclidean metric. □

Then we can easily prove the following result.

14.4.4 Proposition. Let K ⊆ R^n be a convex body defining a norm and such that B^n ⊆ K, and let m = med(f_K), where f_K is as above. Then there exists a 2-almost spherical section of K of dimension at least Omega(n m^2 / log(24/m)).

Proof. By Observation 14.4.3, f_K is 1-Lipschitz. Let us set delta = m/3 (note that B^n ⊆ K also implies m <= 1).
Proposition 14.3.4 shows that there is a subspace L such that f_K lies in [(2/3)m, (4/3)m] on S^{n-1} ∩ L, where

dim L >= (n delta^2)/(8 log(8/delta)) - 1 = Omega(n m^2 / log(24/m)).   (14.2)

The section K ∩ L is 2-almost spherical. □

A slight improvement. It turns out that the factor log(24/m) in the result just proved can be eliminated by a refined argument, which uses the fact that f_K comes from a norm.

14.4.5 Theorem. With the assumptions as in Proposition 14.4.4, a 2-almost spherical section exists of dimension at least beta n m^2, where beta > 0 is an absolute constant.

Proof. The main new observation is that for our f_K, we can afford a much less dense net N in the proof of Proposition 14.3.4. Namely, it suffices to let N be a (1/8)-net in S^{k-1}, where k = ceil(beta m^2 n). If beta > 0 is sufficiently small, Levy's lemma gives the existence of a rotation rho such that (5/6)m <= f_K(y) <= (7/6)m for all y in rho(N); this is exactly as in the proof of Proposition 14.3.4. It remains to verify (2/3)m <= f_K(x) <= (4/3)m for all x in S^{n-1} ∩ L, where L = rho(L_0). This is implied by the following claim with a = (7/6)m and |.| = ||.||_K:

Claim. Let N be a (1/8)-net in S^{k-1} with respect to the Euclidean metric, and let |.| be a norm on R^k satisfying (5/7)a <= |y| <= a for all y in N and for some number a > 0. Then (4/7)a <= |x| <= (8/7)a for all x in S^{k-1}.

To prove the claim, we begin with the upper bound (this is where the new trick lies). Let M = max{|x|: x in S^{k-1}} and let x_0 in S^{k-1} be a point where M is attained. Choose a y_0 in N at distance at most 1/8 from x_0, and let z = (x_0 - y_0)/||x_0 - y_0|| be the unit vector in the direction of x_0 - y_0. Then

M = |x_0| <= |y_0| + |x_0 - y_0| <= a + ||x_0 - y_0|| * |z| <= a + (1/8)M.

The resulting inequality M <= a + (1/8)M yields M <= (8/7)a. The lower bound is now routine: If x in S^{k-1} and y in N is at distance at most 1/8 from it, then

|x| >= |y| - |x - y| >= (5/7)a - (1/8)*(8/7)a = (4/7)a.

The claim, as well as Theorem 14.4.5, is proved. □
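The upper-bound step in the claim above, namely that the maximum M of a norm over the sphere satisfies M <= a + gamma*M when the norm is at most a on a gamma-net, holds for any norm. A small numerical illustration of ours (the l_1-norm on R^2 and a 64-point net of the circle, both arbitrary choices):

```python
import math

# A (1/8)-net of S^1: 64 equally spaced points; the chord length is
# 2*sin(pi/64), about 0.098, which is at most 1/8, so the net property holds.
net = [(math.cos(2 * math.pi * j / 64), math.sin(2 * math.pi * j / 64))
       for j in range(64)]

def l1(p):
    """The norm |.| in the claim; here we use the l_1-norm."""
    return abs(p[0]) + abs(p[1])

a = max(l1(p) for p in net)   # maximum of the norm over the net
M = math.sqrt(2)              # true maximum of l1 over S^1 (at 45 degrees)

# The claim's upper bound: M <= a / (1 - 1/8) = (8/7) a.
assert M <= (8 / 7) * a + 1e-12
```

The point of the trick is that the net needs no knowledge of where the maximum is attained; the self-referential inequality M <= a + (1/8)M already pins M down.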
Theorem 14.4.5 yields almost-spherical sections of K, provided that we can estimate med(f_K) (after rescaling K so that B^n ⊆ K). We must warn that this in itself does not yet give almost spherical sections for every K (Dvoretzky's theorem), and another twist is needed, shown in Section 14.6. But in order to reap some benefits from the hard work done up until now, we first explain an application to convex polytopes.

Bibliography and remarks. As was remarked in the text, almost-spherical and almost-ellipsoidal sections are seldom distinguished in the local theory of Banach spaces, where symmetric convex bodies are considered up to isomorphism, i.e., up to a nonsingular linear transform. If K_1 and K_2 are symmetric convex bodies in R^n, their Banach-Mazur distance d(K_1, K_2) is defined as the smallest positive t for which there is a linear transform T such that T(K_1) ⊆ K_2 ⊆ t * T(K_1). So a symmetric convex body K is t-almost ellipsoidal if and only if d(K, B^n) <= t. It turns out that every two symmetric compact convex bodies K_1, K_2 ⊆ R^n satisfy d(K_1, K_2) <= sqrt(n). The logarithm of the Banach-Mazur distance is a metric on the space of compact symmetric convex bodies in R^n.

Lemma 14.4.1 appears in Dvoretzky [Dvo61]. Theorem 14.4.5 is from Figiel, Lindenstrauss, and Milman [FLM77]. There are several ways of proving that the n-dimensional crosspolytope has almost spherical sections of dimension Omega(n) (but, perhaps surprisingly, no explicitly given construction of such a section seems to be known). A method based on Theorem 14.4.5 is indicated in Exercise 14.6.2. A somewhat more direct way, found by Schechtman, is to let the section L be the image of the linear map f: R^{cn} -> R^n whose matrix has entries ±1 chosen uniformly and independently at random (c > 0 is a suitable small constant). The proof uses martingales (Azuma's inequality); see, e.g., Milman and Schechtman [MS86].
The existence of a C-almost spherical section of dimension n/2, with a suitable constant C, is a consequence of a theorem of Kashin: If B_1^n denotes the crosspolytope and rho is a random rotation, then B_1^n ∩ rho(B_1^n) is 32-almost spherical with a positive probability; see Ball [Bal97] for an insightful exposition. The previously mentioned methods do not provide a dimension this large, but Kashin's result does not give (1+eps)-almost spherical sections for small eps.

Exercises

1. Let K be a convex body containing 0 in its interior. Check that K ⊆ B^n if and only if B^n ⊆ K* (recall that K* = {x in R^n: <x,y> <= 1 for all y in K}). Derive that if B^k ⊆ K ⊆ tB^k, then (1/t)B^k ⊆ K* ⊆ B^k.

14.5 Many Faces of Symmetric Polytopes

Can an n-dimensional convex polytope have both few vertices and few facets? Yes, an n-simplex has n+1 vertices and n+1 facets. What about a centrally symmetric polytope? The n-dimensional cube has only 2n facets but 2^n vertices. Its dual, the crosspolytope (regular octahedron for n = 3), has few vertices but many facets. It turns out that every centrally symmetric polytope has many facets or many vertices.

14.5.1 Theorem. There is a constant a > 0 such that for any centrally symmetric n-dimensional convex polytope P, we have log f_0(P) * log f_{n-1}(P) >= an (recall that f_0(P) denotes the number of vertices and f_{n-1}(P) the number of facets).

For the cube, the expression log f_0(P) * log f_{n-1}(P) is about n log n, which is even slightly larger than the lower bound in the theorem. However, polytopes can be constructed with both log f_0(P) and log f_{n-1}(P) bounded by O(sqrt(n)) (Exercise 1).

Proof of Theorem 14.5.1. We use the dual polytope P* with f_0(P*) = f_{n-1}(P), and we prove the theorem in the equivalent form log f_{n-1}(P) * log f_{n-1}(P*) >= an. John's lemma (Theorem 13.4.1) claims that for any symmetric convex body K, there exists a (nonsingular) linear map that transforms K into a sqrt(n)-almost spherical body.
We can thus assume that the considered n-dimensional polytope P is sqrt(n)-almost spherical (this is crucial for the proof). After rescaling, we may suppose B^n ⊆ P ⊆ sqrt(n) B^n. Letting m = med(f_P), where f_P is the restriction of ||.||_P to S^{n-1} as usual, Theorem 14.4.5 tells us that there is a linear subspace L of R^n with P ∩ L being 2-almost spherical and with dim(L) = Omega(n m^2). Thus, since any k-dimensional 2-almost spherical polytope has e^{Omega(k)} facets, we have log f_{n-1}(P) = Omega(n m^2).

Now we look at P*. Since B^n ⊆ P ⊆ sqrt(n) B^n, by Exercise 14.4.1 we have n^{-1/2} B^n ⊆ P* ⊆ B^n. In order to apply Theorem 14.4.5, we set P~ = sqrt(n) P*, and obtain a 2-almost spherical section of P~ of dimension Omega(n m~^2), where m~ = med(f_{P~}). This implies log f_{n-1}(P*) = Omega(n m~^2). It remains to observe the following inequality:

14.5.2 Lemma. Let P be a polytope in R^n defining a norm and let P* be the dual polytope. Then we have med(f_P) * med(f_{P*}) >= 1.

We leave the easy proof as Exercise 2. Since m~ = med(f_{P*})/sqrt(n), we finally obtain

log f_{n-1}(P) * log f_{n-1}(P*) = Omega(n^2 m^2 m~^2) = Omega(n * med(f_P)^2 * med(f_{P*})^2) = Omega(n).

This concludes the proof of Theorem 14.5.1. □

Bibliography and remarks. Theorem 14.5.1, as well as the example in Exercise 1, is due to Figiel, Lindenstrauss, and Milman [FLM77]. Most of the tools in the proof come from earlier papers of Milman [Mil69], [Mil71].

Exercises

1. Construct an n-dimensional convex polytope P with log f_0(P) = O(sqrt(n)) and log f_{n-1}(P) = O(sqrt(n)), thereby demonstrating that Theorem 14.5.1 is asymptotically optimal. Start with the interval [0,1] ⊆ R^1, and alternate the operations (.)* (passing to the dual polytope) and x (Cartesian product) suitably; see Exercise 5.5.1 for some properties of the Cartesian product of polytopes. The polytopes obtained from [0,1] by a sequence of these operations are called Hanner polytopes, and they form an important class of examples.

2.
Let K be a bounded centrally symmetric convex body in R^n containing 0 in its interior, and let K* be the dual body.
(a) Show that ||x||_K * ||x||_{K*} >= 1 for all x in S^{n-1}.
(b) Let f, g: S^{n-1} -> R be (measurable) functions with f(x)g(x) >= 1 for all x in S^{n-1}. Show that med(f) * med(g) >= 1.

14.6 Dvoretzky's Theorem

Here is the remarkable Ramsey-type result in high-dimensional convexity promised at the beginning of this chapter.

14.6.1 Theorem (Dvoretzky's theorem). For any natural number k and any real eps > 0, there exists an integer n = n(k, eps) with the following property. For any n-dimensional centrally symmetric convex body K ⊆ R^n, there exists a k-dimensional linear subspace L ⊆ R^n such that the section K ∩ L is (1+eps)-almost spherical. The best known estimates give n(k, eps) = e^{O(k/eps^2)}.

Thus, no matter how "edgy" a high-dimensional K may be, there is always a slice of not too small dimension that is almost a Euclidean ball. Another way of expressing the statement is that any normed space of a sufficiently large dimension contains a large subspace on which the norm is very close to the Euclidean norm (with a suitable choice of a coordinate system in the subspace). Note that the Euclidean norm is the only norm with this universal property, since all sections of the Euclidean ball are again Euclidean balls.

As we saw in Section 14.4, the n-dimensional cube shows that the largest dimension of a 2-almost spherical section is only O(log n) in the worst case. The assumption that K is symmetric can in fact be omitted; it suffices to require that 0 be an interior point of K. The proof of this more general version is not much more difficult than the one shown below. We prove Dvoretzky's theorem only for eps = 1, since in Section 14.4 we prepared the tools for this particular setting. But the general case is not very different.

Preliminary considerations.
Since affine transforms of K are practically for free in view of Lemma 14.4.1, we may assume that B^n ⊆ K ⊆ sqrt(n) B^n by John's lemma (Theorem 13.4.1). So the norm induced by K satisfies n^{-1/2}||x|| <= ||x||_K <= ||x|| for all x. If f_K is the restriction of ||.||_K to S^{n-1}, we have the obvious bound med(f_K) >= n^{-1/2}. Immediate application of Theorem 14.4.5 shows the existence of a 2-almost spherical section of K of dimension Omega(n med(f_K)^2) = Omega(1), so this approach gives nothing at all! On the other hand, it only just fails, and a small improvement in the order of magnitude of the lower bound for med(f_K) already yields Dvoretzky's theorem.

We will not try to improve the estimate for med(f_K) directly. Instead, we find a relatively large subspace Z ⊆ R^n such that the section K ∩ Z can be enclosed in a not too large parallelotope P. Then we estimate, by direct computation, med(f_P) (over the unit sphere in Z). The selection of the subspace Z is known as the Dvoretzky-Rogers lemma. We present a version with a particularly simple proof, where dim Z is about n/log n. (For our purposes, we would be satisfied with even much weaker estimates, say dim Z >= n^delta for some fixed delta > 0, but on the other hand, another proof gives even dim Z = n/2.)

14.6.2 Lemma (A version of the Dvoretzky-Rogers lemma). Let K ⊆ R^n be a centrally symmetric convex body. Then there exist a linear subspace Z ⊆ R^n of dimension k = floor(n/(2 log_2 n)), an orthonormal basis u_1, u_2, ..., u_k of Z, and a nonsingular linear transform T of R^n such that if we let K~ = T(K) ∩ Z, then ||x||_{K~} <= ||x|| for all x in Z and ||u_i||_{K~} >= 1/2 for all i = 1, 2, ..., k.

Geometrically, the lemma asserts that K~ is sandwiched between the unit ball B^k and a parallelotope P:

[Figure: the body K~ between the unit ball B^k and the parallelotope P.]

(The lemma claims that the points 2u_i are outside of K~ or on its boundary, and P is obtained by separating these points from K~ by hyperplanes.)

Proof.
By John's lemma, we may assume B^n ⊆ K ⊆ tB^n, where t = sqrt(n). Interestingly, the full power of John's lemma is not needed here; the same proof works with, say, t = n or t = n^{10}, only the bound for k would become worse by a constant factor. Let X_0 = R^n and K_0 = K.

Here is the main idea of the proof. The current body K_i is enclosed between an inner ball and an outer ball. Either K_i approaches the inner ball sufficiently closely at "many" places, and in this case we can construct the desired u_1, ..., u_k, or it stays away from the inner ball on a "large" subspace. In the latter case, we can restrict to that subspace and inflate the inner ball. But since the outer ball remains the same, the inflation of the inner ball cannot continue indefinitely.

A precise argument follows; for notational reasons, instead of inflating the inner ball, we will shrink the body and the outer ball. We consider the following condition:

(*) Each linear subspace Y ⊆ X_0 with dim(X_0) - dim(Y) < k contains a vector u with ||u|| = 1 and ||u||_{K_0} >= 1/2.

This condition may or may not be satisfied. If it holds, we construct the orthonormal basis u_1, u_2, ..., u_k by an obvious induction. If it is not satisfied, we obtain a subspace X_1 of dimension greater than n - k such that ||x||_{K_0} < (1/2)||x|| for all x in X_1. Thus K_0 ∩ X_1 is twice "more spherical" than K_0. Setting K_1 = (1/2)(K_0 ∩ X_1), we have (2/t)||.|| <= ||.||_{K_1} <= ||.||.

We again check the condition (*) with X_1 and K_1 instead of X_0 and K_0. If it holds, we find the u_i within X_1, and if it does not, we obtain a subspace X_2 of dimension greater than n - 2k, etc. After the ith step, we have (2^i/t)||.|| <= ||.||_{K_i} <= ||.||. This construction cannot proceed all the way to step i = i_0 = floor(log_2 n), since 2^{i_0} > t = sqrt(n) would make the lower bound exceed the upper one. Thus, the condition (*) must hold for X_{i_0 - 1} at the latest. We have dim X_{i_0 - 1} > n - (i_0 - 1)k >= k, and so the required basis u_1, ..., u_k can be constructed.
□

The parallelotope is no worse than the cube. From now on, we work within the subspace Z as in Lemma 14.6.2. For convenient notation, we assume that Z is all of R^n and that K is as K~ in the above lemma; i.e., B^n ⊆ K and ||u_i||_K >= 1/2, i = 1, 2, ..., n, where u_1, ..., u_n is an orthonormal basis of R^n. (Note that the reduction of the dimension from n to n/log n is nearly insignificant for the estimate of n(k, eps) in Dvoretzky's theorem.) The goal is to show that med(f_K) = Omega(sqrt((log n)/n)), where f_K is ||.||_K restricted to S^{n-1}.

Instead of estimating med(f_K), we bound the expectation E[f_K]. Since f_K is 1-Lipschitz (we have B^n ⊆ K), the difference |med(f_K) - E[f_K]| is O(n^{-1/2}) by Proposition 14.3.3, which is negligible compared to the lower bound we are heading for. We have ||.||_K >= ||.||_P, where P is the parallelotope as in the illustration to Lemma 14.6.2. So we actually bound E[f_P] from below.

First we show, by an averaging trick, that E[f_P] >= E[f_C], where f_C(x) = (1/2)||x||_infty = (1/2) max_i |x_i| is the norm induced by the cube C of side 4. The idea of the averaging is to consider, together with a point x = sum_{i=1}^n alpha_i u_i in S^{n-1}, the 2^n points of the form sum_{i=1}^n sigma_i alpha_i u_i, where sigma in {-1,1}^n is a vector of signs. Each sign flip is an orthogonal map, so each of these points is again uniformly distributed in S^{n-1} when x is, and for any measurable function f_P: S^{n-1} -> R we have

int_{S^{n-1}} sum_{sigma in {-1,1}^n} f_P(sum_i sigma_i alpha_i u_i) dP(x) = 2^n int_{S^{n-1}} f_P(x) dP(x) = 2^n E[f_P].

The following lemma with v_i = alpha_i u_i and |.| = ||.||_P implies that the integrand on the left-hand side is always at least 2^n max_i ||alpha_i u_i||_P >= 2^n * (1/2) max_i |alpha_i|, and so indeed E[f_P] >= E[f_C].

14.6.3 Lemma. Let v_1, v_2, ..., v_n be arbitrary vectors in a normed space with norm |.|. Then

sum_{sigma in {-1,1}^n} |sum_{i=1}^n sigma_i v_i| >= 2^n max_i |v_i|.

The proof is left as Exercise 1. It remains to estimate E[f_C] from below.

14.6.4 Lemma. For a suitable positive constant c and for all n we have

E[f_C] = (1/2) int_{S^{n-1}} ||x||_infty dP(x) >= c sqrt((log n)/n),

where ||x||_infty = max_i |x_i| is the l_infty (or maximum) norm.
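Lemma 14.6.4 can be probed by simulation. In the sketch below (our illustration; the dimension and sample size are arbitrary choices), the expectation of ||x||_infty over the sphere is estimated from normalized Gaussian samples and compared with sqrt((log n)/n):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, samples = 400, 4000

# Uniform points on S^{n-1} via normalized Gaussian vectors.
X = rng.standard_normal((samples, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

mean_sup_norm = np.mean(np.max(np.abs(X), axis=1))  # estimate of E[||x||_inf]
benchmark = math.sqrt(math.log(n) / n)

# E[||x||_inf] is of order sqrt((log n)/n), as Lemma 14.6.4 predicts.
print(mean_sup_norm / benchmark)
```

The ratio printed stays bounded between absolute constants as n grows, which is exactly the content of the lemma (together with the matching upper bound).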
Note that once this lemma is proved, Dvoretzky's theorem (with eps = 1) follows from what we have already done and from Theorem 14.4.5.

Proof of Lemma 14.6.4. There are various proofs; a neat way is based on the generally useful fact that the n-dimensional normal distribution is spherically symmetric around the origin. We use probabilistic terminology. Let Z_1, Z_2, ..., Z_n be independent random variables, each of them with the standard normal distribution N(0,1). As was mentioned in Section 14.1, the random vector Z = (Z_1, Z_2, ..., Z_n) has a spherically symmetric (Gaussian) distribution, and consequently the random vector Z/||Z|| is uniformly distributed in S^{n-1}. Thus E[f_C] = (1/2) E[||Z||_infty / ||Z||].

We show first that ||Z|| <= sqrt(3n) holds with probability at least 2/3, and second, that for a suitable constant c_1 > 0, ||Z||_infty >= c_1 sqrt(log n) holds with probability at least 2/3. It follows that both these events occur simultaneously with probability at least 1/3, and so E[f_C] >= c sqrt((log n)/n) as claimed.

As for the Euclidean norm ||Z||, we obtain E[||Z||^2] = n E[Z_1^2] = n, since an N(0,1) random variable has variance 1. By Markov's inequality,

Prob[||Z|| >= sqrt(3n)] = Prob[||Z||^2 >= 3 E[||Z||^2]] <= 1/3.

Further, by the independence of the Z_i we have

Prob[||Z||_infty < z] = Prob[|Z_i| < z for all i = 1, 2, ..., n] = Prob[|Z_1| < z]^n = (1 - sqrt(2/pi) int_z^infty e^{-t^2/2} dt)^n.

We can estimate int_z^infty e^{-t^2/2} dt >= int_z^{z+1} e^{-t^2/2} dt >= e^{-(z+1)^2/2}. Thus, setting z = sqrt(ln n) - 1, we have Prob[||Z||_infty < z] <= (1 - sqrt(2/pi) n^{-1/2})^n, which is below 1/3 for sufficiently large n. Lemma 14.6.4 is proved. □

Bibliography and remarks. Dvoretzky and Rogers [DR50] investigated so-called unconditional convergence in infinite-dimensional Banach spaces, and as an auxiliary result they proved a statement similar to Lemma 14.6.2, with the dimension of the subspace about sqrt(n). They used the largest inscribed ellipsoid and a variational argument (somewhat similar to the proof of John's lemma).
The lemma actually holds with a subspace of dimension about n/2; for a proof due to Johnson, again using the largest inscribed ellipsoid, see Benyamini and Lindenstrauss [BL99]. The proof of Lemma 14.6.2 presented in this section is from Figiel, Lindenstrauss, and Milman [FLM77].

Dvoretzky's theorem was conjectured by Grothendieck [Gro56] and first proved by Dvoretzky [Dvo59], [Dvo61]. His proof was quite complicated, and the estimate for the dimension of the almost spherical section was somewhat worse than that in Theorem 14.6.1. Since then, several other proofs have appeared; see Lindenstrauss [Lin92] for an insightful summary. The proof shown above essentially follows Figiel et al. [FLM77], who improved and streamlined Milman's proof [Mil71] based on measure concentration. A modern proof using measure concentration for the Gaussian measure instead of that for the sphere can be found in Pisier [Pis89]. Gordon [Gor88] has a proof with a more probability-theoretic flavor, using certain inequalities for Gaussian random variables (an extension of the so-called Slepian's lemma).

The dependence of the dimension of the almost spherical section on n is of order log n, which is tight, as we have seen. In terms of ε, the proof presented gives a bound proportional to ε²/log(1/ε), and the best known general bound is proportional to ε² (Gordon [Gor88]).

A version of Dvoretzky's theorem for not necessarily symmetric convex bodies was established by Larman and Mani [LM75], and Gordon's proof [Gor88] is also formulated in this setting.

For x ∈ R^n, let ||x||_p = (|x_1|^p + ··· + |x_n|^p)^{1/p} denote the ℓ_p-norm of x. Here p ∈ [1, ∞), and for the limit case p = ∞ we have ||x||_∞ = max_i |x_i|. For not too large p, the unit balls of ℓ_p-norms have much larger almost spherical sections than is guaranteed by Dvoretzky's theorem. For p ∈ [1, 2), the dimension of a (1+ε)-almost spherical section is c_ε n, and for p > 2, it is c_ε n^{2/p}.
These results are obtained by the probabilistic method, and no explicitly given sections with comparable dimensions seem to be known; see, e.g., [MS86]. There are many other estimates on the dimension of almost spherical sections, for example in terms of the so-called type and cotype of a Banach space, as well as bounds for the dimension of almost spherical projections. For example, by a result of Milman, for any centrally symmetric n-dimensional convex body K there is a section of an orthogonal projection of K that is (1+ε)-almost spherical and has dimension at least c(ε)n (which is surprising, since both for sections alone and for projections alone the dimension of an almost spherical section can be only logarithmic). Such things and much more information can be found in the books Milman and Schechtman [MS86], Pisier [Pis89], and Tomczak-Jaegermann [TJ89].

Exercises

1. Prove Lemma 14.6.3.

2. (Large almost spherical sections of the crosspolytope) Use Theorem 14.4.5 and the method of the proof of Lemma 14.6.4 for proving that the n-dimensional unit ball of the ℓ_1-norm has a 2-almost spherical section of dimension at least cn, for a suitable constant c > 0.

15 Embedding Finite Metric Spaces into Normed Spaces

15.1 Introduction: Approximate Embeddings

We recall that a metric space is a pair (X, ρ), where X is a set and ρ: X × X → [0, ∞) is a metric, satisfying the following axioms: ρ(x, y) = 0 if and only if x = y, ρ(x, y) = ρ(y, x), and ρ(x, y) + ρ(y, z) ≥ ρ(x, z).

A metric ρ on an n-point set X can be specified by an n×n matrix of real numbers (actually (n choose 2) numbers suffice because of the symmetry). Such tables really arise, for example, in microbiology: X is a collection of bacterial strains, and for every two strains, one can obtain their dissimilarity, which is some measure of how much they differ.
Dissimilarity can be computed by assessing the reaction of the considered strains to various tests, or by comparing their DNA, and so on.¹ It is difficult to see any structure in a large table of numbers, and so we would like to represent a given metric space in a more comprehensible way. For example, it would be very nice if we could assign to each x ∈ X a point f(x) in the plane in such a way that ρ(x, y) equals the Euclidean distance of f(x) and f(y). Such a representation would allow us to see the structure of the metric space: tight clusters, isolated points, and so on. Another advantage would be that the metric would now be represented by only 2n real numbers, the coordinates of the n points in the plane, instead of (n choose 2) numbers as before. Moreover, many quantities concerning a point set in the plane can be computed by efficient geometric algorithms, which are not available for an arbitrary metric space.

¹ There are various measures of dissimilarity, and not all of them yield a metric, but many do.

This sounds very good, and indeed it is too good to be generally true: It is easy to find examples of small metric spaces that cannot be represented in this way by a planar point set. One example is 4 points, each two of them at distance 1; such points cannot be found in the plane. On the other hand, they exist in 3-dimensional Euclidean space. Perhaps less obviously, there are 4-point metric spaces that cannot be represented (exactly) in any Euclidean space. Here are two examples: the 4-cycle and the star with three leaves. The metrics on these 4-point sets are given by the indicated graphs; that is, the distance of two points is the number of edges of a shortest path connecting them in the graph. For example, in the star, the center has distance 1 from the leaves, and the mutual distances of the leaves are 2. So far we have considered isometric embeddings.
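That the star metric embeds in no Euclidean space can also be certified by a classical determinant computation, not used in the text itself: four points with prescribed distances are realizable in R^3 exactly if the Cayley-Menger determinant, which equals 288 V² for the tetrahedron they would span, is nonnegative (together with the realizability of the triangles, which clearly holds here). A small exact computation along these lines:

```python
from fractions import Fraction

def det(m):
    # Laplace expansion; fine for a 5x5 matrix, exact with Fractions
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Squared distances of the star: vertex 0 is the center (distance 1 to each
# leaf), and the three leaves are pairwise at distance 2.
d2 = [[0, 1, 1, 1],
      [1, 0, 4, 4],
      [1, 4, 0, 4],
      [1, 4, 4, 0]]
# Cayley-Menger matrix: border of ones, then the squared-distance matrix
cm = [[Fraction(0)] + [Fraction(1)] * 4] + \
     [[Fraction(1)] + [Fraction(x) for x in row] for row in d2]
cm_det = det(cm)
```

The determinant comes out negative (−32), so no tetrahedron, degenerate or not, realizes these distances, confirming the claim in the text.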
A mapping f: X → Y, where X is a metric space with a metric ρ and Y is a metric space with a metric σ, is called an isometric embedding if it preserves distances, i.e., if σ(f(x), f(y)) = ρ(x, y) for all x, y ∈ X. But in many applications we need not insist on preserving the distances exactly; rather, we can allow some distortion, say by 10%. A notion of an approximate embedding is captured by the following definition.

15.1.1 Definition (D-embedding of metric spaces). A mapping f: X → Y, where X is a metric space with a metric ρ and Y is a metric space with a metric σ, is called a D-embedding, where D ≥ 1 is a real number, if there exists a number r > 0 such that for all x, y ∈ X,

    r · ρ(x, y) ≤ σ(f(x), f(y)) ≤ D · r · ρ(x, y).

The infimum of the numbers D such that f is a D-embedding is called the distortion of f.

Note that this definition permits scaling of all distances in the same ratio r, in addition to the distortion of the individual distances by factors between 1 and D. If Y is a Euclidean space (or a normed space), we can rescale the image at will, and so we can choose the scaling factor r at our convenience. Mappings with a bounded distortion are sometimes called bi-Lipschitz mappings. This is because the distortion of f can be equivalently defined using the Lipschitz constants of f and of the inverse mapping f^{-1}. Namely, if we define the Lipschitz norm of f by ||f||_Lip = sup{σ(f(x), f(y))/ρ(x, y) : x, y ∈ X, x ≠ y}, then the distortion of f equals ||f||_Lip · ||f^{-1}||_Lip.

We are going to study the possibility of D-embedding of n-point metric spaces into Euclidean spaces and into various normed spaces. As usual, we cover only a small sample of results. Many of them are negative, showing that certain metric spaces cannot be embedded too well. But in Section 15.2 we start on an optimistic note: We present a surprising positive result of considerable theoretical and practical importance.
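The equivalent formula distortion = ||f||_Lip · ||f^{-1}||_Lip translates directly into code for finite spaces. A sketch (the function names and the example are mine): mapping the 4-cycle metric to the corners of a geometric unit square preserves the side distances and shrinks only the diagonals, from 2 to √2, so the distortion is exactly √2.

```python
import math
from itertools import combinations

def distortion(X, Y, rho, sigma):
    """Distortion of the correspondence X[i] -> Y[i], computed as
    ||f||_Lip * ||f^{-1}||_Lip (the equivalent definition in the text)."""
    pairs = list(combinations(range(len(X)), 2))
    lip = max(sigma(Y[i], Y[j]) / rho(X[i], X[j]) for i, j in pairs)
    lip_inv = max(rho(X[i], X[j]) / sigma(Y[i], Y[j]) for i, j in pairs)
    return lip * lip_inv

# The 4-cycle: shortest-path distance on the cycle 0-1-2-3-0
cycle = [0, 1, 2, 3]
rho = lambda i, j: min(abs(i - j), 4 - abs(i - j))
# Its image: the corners of a unit square, in cyclic order
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
sigma = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
D = distortion(cycle, square, rho, sigma)
```

Note that the formula is insensitive to a global rescaling of the image, exactly as the definition with the factor r requires.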
Before that, we review a few definitions concerning ℓ_p-spaces.

The spaces ℓ_p and ℓ_p^d. For a point x ∈ R^d and p ∈ [1, ∞), let

    ||x||_p = (Σ_{i=1}^d |x_i|^p)^{1/p}

denote the ℓ_p-norm of x. Most of the time, we will consider the case p = 2, i.e., the usual Euclidean norm ||x||_2 = ||x||. Another particularly important case is p = 1, the ℓ_1-norm (sometimes called the Manhattan distance). The ℓ_∞-norm, or maximum norm, is given by ||x||_∞ = max_i |x_i|. It is the limit of the ℓ_p-norms as p → ∞.

Let ℓ_p^d denote the space R^d equipped with the ℓ_p-norm. In particular, we write ℓ_2^d in order to stress that we mean R^d with the usual Euclidean norm. Sometimes we are interested in embeddings into some space ℓ_p^d, with p given but without restrictions on the dimension d; for example, we can ask whether there exists some Euclidean space into which a given metric space embeds isometrically. Then it is convenient to speak about ℓ_p, which is the space of all infinite sequences x = (x_1, x_2, ...) of real numbers with ||x||_p < ∞, where ||x||_p = (Σ_{i=1}^∞ |x_i|^p)^{1/p}. In particular, ℓ_2 is the (separable) Hilbert space. The space ℓ_p contains each ℓ_p^d isometrically, and it can be shown that any finite metric space isometrically embeddable into ℓ_p can be isometrically embedded into ℓ_p^d for some d. (In fact, every n-point subspace of ℓ_p can be isometrically embedded into ℓ_p^d with d ≤ (n choose 2); see Exercise 15.5.2.) Although the spaces ℓ_p are interesting mathematical objects, we will not really study them; we only use embeddability into ℓ_p as a convenient shorthand for embeddability into ℓ_p^d for some d.

Bibliography and remarks. This chapter aims at providing an overview of important results concerning low-distortion embeddings of finite metric spaces. The scope is relatively narrow, and we almost do not discuss even closely related areas, such as isometric embeddings. Another recent survey, with fewer proofs and mainly focused on algorithmic aspects, is Indyk [Ind01].
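The definition of the ℓ_p-norms, and the convergence ||x||_p → ||x||_∞ as p → ∞, in a few lines of Python (the example vector is an arbitrary illustration of mine):

```python
def lp_norm(x, p):
    """The l_p-norm (sum_i |x_i|^p)^(1/p), for p in [1, infinity)."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -4.0, 1.0]
linf = max(abs(t) for t in x)            # ||x||_inf = 4
# ||x||_p decreases in p and tends to ||x||_inf as p grows:
vals = [lp_norm(x, p) for p in (1, 2, 8, 64)]
```

Here `vals` starts at the ℓ_1-norm 8, passes through the Euclidean norm √26, and is already within about 10⁻⁹ of the maximum norm 4 at p = 64.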
For studying approximate embeddings, it may certainly be helpful to understand isometric embeddings, and here extensive theory is available. For example, several ingenious characterizations of isometric embeddability into ℓ_2 can be found in old papers of Schoenberg (e.g., [Sch38], building on the work of mathematicians like Menger and von Neumann). A recent book concerning isometric embeddings, and embeddings into ℓ_1 in particular, is Deza and Laurent [DL97].

Another closely related area is the investigation of bi-Lipschitz maps, usually (1+ε)-embeddings with ε > 0 small, defined on an open subset of a Euclidean space (or a Banach space) and being local homeomorphisms. These mappings are called quasi-isometries (the definition of a quasi-isometry is slightly more general, though), and the main question is how close to an isometry such a mapping has to be, in terms of the dimension and ε; see Benyamini and Lindenstrauss [BL99], Chapters 14 and 15, for an introduction.

Exercises

1. Consider the two 4-point examples presented above (the square and the star); prove that they cannot be isometrically embedded into ℓ_2. Can you determine the minimum necessary distortion for embedding into ℓ_2?

2. (a) Prove that a bijective mapping f between metric spaces is a D-embedding if and only if ||f||_Lip · ||f^{-1}||_Lip ≤ D.
(b) Let (X, ρ) be a metric space, |X| ≥ 3. Prove that the distortion of an embedding f: X → Y, where (Y, σ) is a metric space, equals the supremum of the factors by which f "spoils" the ratios of distances; that is,

    sup{ (σ(f(x), f(y))/σ(f(z), f(t))) / (ρ(x, y)/ρ(z, t)) : x, y, z, t ∈ X, x ≠ y, z ≠ t }.

15.2 The Johnson-Lindenstrauss Flattening Lemma

It is easy to show that there is no isometric embedding of the vertex set V of an n-dimensional regular simplex into a Euclidean space of dimension k < n. In this sense, the (n+1)-point set V ⊂ ℓ_2 is truly n-dimensional.
The situation changes drastically if we do not insist on exact isometry: As we will see, the set V, and any other (n+1)-point set in ℓ_2, can be almost isometrically embedded into ℓ_2^k with k = O(log n) only!

15.2.1 Theorem (Johnson-Lindenstrauss flattening lemma). Let X be an n-point set in a Euclidean space (i.e., X ⊂ ℓ_2), and let ε ∈ (0, 1] be given. Then there exists a (1+ε)-embedding of X into ℓ_2^k, where k = O(ε^{-2} log n).

This result shows that any metric question about n points in ℓ_2 can be considered for points in ℓ_2^{O(log n)}, if we do not mind a distortion of the distances by at most 10%, say. For example, to represent n points of ℓ_2 in a computer, we need to store n² numbers. To store all of their distances, we need about n² numbers as well. But by the flattening lemma, we can store only O(n log n) numbers and still reconstruct any of the n² distances with error at most 10%.

Various proofs of the flattening lemma, including the one below, provide efficient randomized algorithms that find the almost isometric embedding into ℓ_2^k quickly. Numerous algorithmic applications have recently been found: in fast clustering of high-dimensional point sets, in approximate searching for nearest neighbors, in approximate multiplication of matrices, and also in purely graph-theoretic problems, such as approximating the bandwidth of a graph or multicommodity flows.

The proof of Theorem 15.2.1 is based on the following lemma, of independent interest.

15.2.2 Lemma (Concentration of the length of the projection). For a unit vector x ∈ S^{n-1}, let

    f(x) = √(x_1² + x_2² + ··· + x_k²)

be the length of the projection of x on the subspace L_0 spanned by the first k coordinates. Consider x ∈ S^{n-1} chosen at random. Then f(x) is sharply concentrated around a suitable number m = m(n, k):

    P[f(x) ≥ m + t] ≤ 2e^{-t²n/2}  and  P[f(x) ≤ m − t] ≤ 2e^{-t²n/2},

where P is the uniform probability measure on S^{n-1}.
For n larger than a suitable constant and k ≥ 10 ln n, we have m ≥ (1/2)√(k/n).

In the lemma, the k-dimensional subspace is fixed and x is random. Equivalently, if x is a fixed unit vector and L is a random k-dimensional subspace of ℓ_2^n (as introduced in Section 14.3), the length of the projection of x on L obeys the bounds in the lemma.

Proof of Lemma 15.2.2. The orthogonal projection p: ℓ_2^n → ℓ_2^k given by (x_1, ..., x_n) ↦ (x_1, ..., x_k) is 1-Lipschitz, and so f is 1-Lipschitz as well. Levy's lemma (Theorem 14.3.2) gives the tail estimates as in the lemma with m = med(f). It remains to establish the lower bound for m. It is not impossibly difficult to do it by elementary calculation (we need to find the measure of a simple region on S^{n-1}). But we can also avoid the calculation by a trick combined with a general measure concentration result.

For random x ∈ S^{n-1}, we have 1 = E[||x||²] = Σ_{i=1}^n E[x_i²]. By symmetry, E[x_i²] = 1/n, and so E[f²] = k/n. We now show that, since f is tightly concentrated, E[f²] cannot be much larger than m², and so m is not too small. For any t > 0, we can estimate

    k/n = E[f²] ≤ P[f ≤ m + t] · (m + t)² + P[f > m + t] · max_x f(x)²
        ≤ (m + t)² + 2e^{-t²n/2}.

Let us set t = √(k/(5n)). Since k ≥ 10 ln n, we have 2e^{-t²n/2} ≤ 2/n, and from the above inequality we calculate m ≥ √((k−2)/n) − t ≥ (1/2)√(k/n). Let us remark that a more careful calculation shows that m = √(k/n) + O(n^{-1/2}) for all k. □

Proof of the flattening lemma (Theorem 15.2.1). We may assume that n is sufficiently large. Let X ⊂ ℓ_2^n be a given n-point set. We set k = 200ε^{-2} ln n (the constant can be improved). If k ≥ n, there is nothing to prove, so we assume k < n. Let L be a random k-dimensional linear subspace of ℓ_2^n (obtained by a random rotation of L_0). The chosen L is a copy of ℓ_2^k. We let p: ℓ_2^n → L be the orthogonal projection onto L. Let m be the number around which ||p(x)|| is concentrated, as in Lemma 15.2.2.
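Lemma 15.2.2 is easy to observe experimentally: sample x uniformly from S^{n-1} (by normalizing a Gaussian vector, as in the proof of Lemma 14.6.4) and measure the length of the first-k projection. A hedged sketch (n, k, and the number of trials are arbitrary choices of mine):

```python
import math
import random

random.seed(0)

def random_unit_vector(n):
    # Normalizing a vector of independent N(0,1) coordinates gives a
    # uniformly distributed point of S^{n-1}
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(t * t for t in z))
    return [t / s for t in z]

# f(x) = length of the projection of x on the first k coordinates;
# by Lemma 15.2.2 it concentrates around m, which is close to sqrt(k/n)
n, k, trials = 200, 50, 300
samples = []
for _ in range(trials):
    x = random_unit_vector(n)
    samples.append(math.sqrt(sum(t * t for t in x[:k])))
mean_f = sum(samples) / trials
spread = max(samples) - min(samples)
```

With these parameters √(k/n) = 0.5, and the samples cluster tightly around that value, with fluctuations on the order of n^{-1/2}.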
We prove that for any two distinct points x, y ∈ ℓ_2^n, the condition

    (1 − ε/3) m ||x − y|| ≤ ||p(x) − p(y)|| ≤ (1 + ε/3) m ||x − y||        (15.1)

is violated with probability at most n^{-2}. Since there are fewer than n² pairs of distinct x, y ∈ X, there exists some L such that (15.1) holds for all x, y ∈ X. In such a case, the mapping p is a D-embedding of X into ℓ_2^k with D ≤ (1 + ε/3)/(1 − ε/3) ≤ 1 + ε (for ε ≤ 1).

Let x and y be fixed. First we reformulate the condition (15.1). Let u = x − y; since p is a linear mapping, we have p(x) − p(y) = p(u), and (15.1) can be rewritten as (1 − ε/3) m ||u|| ≤ ||p(u)|| ≤ (1 + ε/3) m ||u||. This is invariant under scaling, and so we may suppose that ||u|| = 1. The condition thus becomes

    | ||p(u)|| − m | ≤ (ε/3) m.        (15.2)

By Lemma 15.2.2 and the remark following it, the probability of violating (15.2), for u fixed and L random, is at most

    4e^{-(εm/3)²n/2} ≤ 4e^{-ε²k/72} = 4e^{-(200/72) ln n} < n^{-2}

for sufficiently large n (here we used m² ≥ k/(4n) and k = 200ε^{-2} ln n). This proves the Johnson-Lindenstrauss flattening lemma. □

Alternative proofs. There are several variations of the proof, which are more suitable from the computational point of view (if we really want to produce the embedding into ℓ_2^{O(log n)}).

In the above proof we project the set X on a random k-dimensional subspace L. Such an L can be chosen by selecting an orthonormal basis (b_1, b_2, ..., b_k), where b_1, ..., b_k is a random k-tuple of unit orthogonal vectors. The coordinates of the projection of x to L are the scalar products ⟨b_1, x⟩, ..., ⟨b_k, x⟩. It turns out that the condition of orthogonality of the b_i can be dropped. That is, we can pick unit vectors b_1, ..., b_k ∈ S^{n-1} independently at random and define a mapping p: X → R^k by x ↦
Another way is to choose each component of each bi from the normal distribution N(O, 1), all the nk choices of the components being independent. The distribution of each bi in Rn is rotationally symmetric (as was mentioned in Section 14.1). Therefore, for every fixed u E sn-1, the scalar product (bi, u) also has the normal distribution N(O, 1) and llp(u)ll2, the squared length of the image, has the distribution of L:7 1 z;, where the zi are independent N(O, 1). This is the well known Chi-Square distribution with k degrees of freedom, and a strong concentration result analogous to Lemma 15.2.2 can be found in books on probability theory (or derived from general measure­ concentration results for the Gaussian measure or from Chernoff-type tail estimates). A still different method, particularly easy to implement but with a more difficult proof, uses independent random vectors bi E {-1, 1 }n. Bibliography and remarks. The flattening lemma is from John­ son and Lindenstrauss [JL84]. They were interested in the following question: Given a metric space Y, an n-point subspace X C Y, and a !-Lipschitz mapping /: X Dž £2, what is the smallest C = C(n) such that there is always a C-Lipschitz rnapping /: Y £2 extending f? They obtained the upper bound C = 0( y'Iog n ), together with an almost matching lower bound. The alternative proof of the flattening lemma using independent normal random variables was given by Indyk and Motwani [IM98]. A streamlined exposition of a similar proof can be found in Dasgupta and Gupta [DG99]. For more general concentration results and techniques using the Gaussian distribution see, e.g., [Pis89], [MS86]. Achlioptas [AchOl] proved that the components of the bi can also be chosen as independent uniform ±1 random variables. Here the dis­ tribution of (bi, u) does depend on u but the proof shows that for every u E sn-l, the concentration of liP( u) 112 is at least as strong as in the case of the normally distributed bi. 
This is established by analyzing higher moments of the distribution. The sharpest known upper bound on the dimension needed for a (1+ε)-embedding of an n-point Euclidean metric is 4ε^{-2}(1 + o(1)) ln n, where o(1) is with respect to ε → 0 [IM98], [DG99], [Ach01]. The main term is optimal for the current proof method; see Exercises 3 and 15.3.4. The Johnson-Lindenstrauss flattening lemma has been applied in many algorithms, both in theory and practice; see the survey [Ind01] or, for example, Kleinberg [Kle97], Indyk and Motwani [IM98], Borodin, Ostrovsky, and Rabani [BOR99].

Exercises

1. Let x, y ∈ S^{n-1} be two points chosen independently and uniformly at random. Estimate their expected (Euclidean) distance, assuming that n is large.

2. Let L ⊆ R^n be a fixed k-dimensional linear subspace and let x be a random point of S^{n-1}. Estimate the expected distance of x from L, assuming that n is large.

3. (Lower bound for the flattening lemma)
(a) Consider the n+1 points 0, e_1, e_2, ..., e_n ∈ R^n (where the e_i are the vectors of the standard orthonormal basis). Check that if these points with their Euclidean distances are (1+ε)-embedded into ℓ_2^k, then there exist unit vectors v_1, v_2, ..., v_n ∈ R^k with |⟨v_i, v_j⟩| ≤ 100ε for all i ≠ j (the constant can be improved).
(b) Let A be an n×n symmetric real matrix with a_ii = 1 for all i and |a_ij| ≤ n^{-1/2} for all i, j, i ≠ j. Prove that A has rank at least n/2.
(c) Let A be an n×n real matrix of rank d, let k be a positive integer, and let B be the n×n matrix with b_ij = a_ij^k. Prove that the rank of B is at most (k+d choose k).
(d) Using (a)-(c), prove that if the set as in (a) is (1+ε)-embedded into ℓ_2^k, where 100n^{-1/2} ≤ ε ≤ 1/2, then

    k = Ω( (1/(ε² log(1/ε))) log n ).

This proof is due to Alon (unpublished manuscript, Tel Aviv University).
15.3 Lower Bounds By Counting

In this section we explain a construction providing many "essentially different" n-point metric spaces, and we derive a general lower bound on the minimum distortion required to embed all these spaces into a d-dimensional normed space. The key ingredient is a construction of graphs without short cycles.

Graphs without short cycles. The girth of a graph G is the length of the shortest cycle in G. Let m(ℓ, n) denote the maximum possible number of edges of a simple graph on n vertices containing no cycle of length ℓ or shorter, i.e., with girth at least ℓ+1. We have m(2, n) = (n choose 2), since the complete graph K_n has girth 3. Next, m(3, n) is the maximum number of edges of a triangle-free graph on n vertices, and it equals ⌊n/2⌋ · ⌈n/2⌉ by Turán's theorem; the extremal example is the complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉}. Another simple observation is that for all k, m(2k+1, n) ≥ (1/2) m(2k, n). This is because any graph G has a bipartite subgraph H that contains at least half of the edges of G.² So it suffices to care about even cycles and to consider ℓ even, remembering that the bounds for ℓ = 2k and ℓ = 2k+1 are almost the same up to a factor of 2. Here is a simple general upper bound on m(ℓ, n).

15.3.1 Lemma. For all n and ℓ,

    m(ℓ, n) ≤ n^{1+1/⌊ℓ/2⌋} + n.

Proof. It suffices to consider even ℓ = 2k. Let G be a graph with n vertices and m = m(2k, n) edges. The average degree is d̄ = 2m/n. There is a subgraph H ⊆ G with minimum degree at least δ = (1/2)d̄. Indeed, by deleting a vertex of degree smaller than δ the average degree does not decrease, and so H can be obtained by a repeated deletion of such vertices. Let v_0 be a vertex of H.
The crucial observation is that, since H has no cycle of length 2k or shorter, the subgraph of H induced by all vertices at distance at most k from v_0 is a tree: The number of vertices in this tree is at least 1 + δ + δ(δ−1) + ··· + δ(δ−1)^{k−1} ≥ (δ−1)^k, and this is no more than n. So δ ≤ n^{1/k} + 1 and m = (1/2)d̄n = δn ≤ n^{1+1/k} + n. □

This simple argument yields essentially the best known upper bound. But it was asymptotically matched only for a few small values of ℓ, namely, for ℓ ∈ {4, 5, 6, 7, 10, 11}. For m(4, n) and m(5, n), we need bipartite graphs without K_{2,2}; these were briefly discussed in Section 4.5, and we recall that they can have up to about n^{3/2} edges, as is witnessed by the finite projective plane. The remaining listed cases use clever algebraic constructions. For the other ℓ, the record is also held by algebraic constructions; they are not difficult to describe, but proving that they work needs quite deep mathematics. For all ℓ ≡ 1 (mod 4) (and not on the list above), they yield m(ℓ, n) = Ω(n^{1+4/(3ℓ−7)}), while for ℓ ≡ 3 (mod 4), they lead to m(ℓ, n) = Ω(n^{1+4/(3ℓ−9)}). Here we prove a weaker but simple lower bound by the probabilistic method.

² To see this, divide the vertices of G into two classes A and B arbitrarily, and while there is a vertex in one of the classes having more neighbors in its class than in the other class, move such a vertex to the other class; the number of edges between A and B increases in each step. For another proof, assign each vertex randomly to A or B and check that the expected number of edges between A and B is (1/2)|E(G)|.

15.3.2 Lemma. For all ℓ ≥ 3 and n ≥ 2, we have

    m(ℓ, n) ≥ (1/9) n^{1+1/(ℓ−1)}.

Of course, for odd ℓ we obtain an Ω(n^{1+1/(ℓ−2)}) bound by using the lemma for ℓ−1.

Proof. First we note that we may assume n ≥ 4^{ℓ−1} ≥ 16, for otherwise, the bound in the lemma is verified by a path, say.
We consider the random graph G(n, p) with n vertices, where each of the (n choose 2) possible edges is present with probability p, 0 < p < 1, and these choices are mutually independent. The value of p is going to be chosen later. Let E be the set of edges of G(n, p) and let F ⊆ E be the edges contained in cycles of length ℓ or shorter. By deleting all edges of F from G(n, p), we obtain a graph with no cycles of length ℓ or shorter. If we manage to show, for some m, that the expectation E[|E \ F|] is at least m, then there is an instance of G(n, p) with |E \ F| ≥ m, and so there exists a graph with n vertices, m edges, and girth greater than ℓ.

We have E[|E|] = (n choose 2)p. What is the probability that a fixed pair e = {u, v} of vertices is an edge of F? First, e must be an edge of G(n, p), which has probability p, and second, there must be a path of length between 2 and ℓ−1 connecting u and v. The probability that all the edges of a given potential path of length k are present is p^k, and there are fewer than n^{k−1} possible paths from u to v of length k. Therefore, the probability of e ∈ F is at most Σ_{k=2}^{ℓ−1} p^{k+1} n^{k−1}, which can be bounded by 2p^ℓ n^{ℓ−2}, provided that np ≥ 2. Then E[|F|] ≤ (n choose 2) · 2p^ℓ n^{ℓ−2}, and by the linearity of expectation, we have

    E[|E \ F|] = E[|E|] − E[|F|] ≥ (n choose 2) p (1 − 2p^{ℓ−1} n^{ℓ−2}).

Now we maximize this expression as a function of p; a somewhat rough but simple choice is p = n^{1/(ℓ−1)}/(2n), which leads to E[|E \ F|] ≥ (1/9) n^{1+1/(ℓ−1)} (the constant can be improved somewhat). The assumption np ≥ 2 follows from n ≥ 4^{ℓ−1}. Lemma 15.3.2 is proved. □

There are several ways of proving a lower bound for m(ℓ, n) similar to that in Lemma 15.3.2, i.e., roughly n^{1+1/ℓ}; one of the alternatives is indicated in Exercise 1 below. But obtaining a significantly better bound in an elementary way and improving on the best known bounds (of roughly n^{1+4/(3ℓ)}) remain challenging open problems. We now use the knowledge about graphs without short cycles in lower bounds for distortion.
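The deletion argument in the proof can be run directly. A small illustrative sketch (helper names and the parameters n = 60, ℓ = 4 are mine): sample G(n, p) with the p from the proof, then delete every edge that lies on a cycle of length at most ℓ; the surviving graph has girth greater than ℓ by construction, since any surviving short cycle would consist entirely of deleted edges.

```python
import random
from collections import deque
from itertools import combinations

random.seed(1)

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t, or None if it exceeds limit."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] == limit:
            continue                      # do not expand beyond the limit
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist.get(t)

def delete_short_cycle_edges(n, ell, p):
    """Sample G(n,p), then delete every edge on a cycle of length <= ell."""
    edges = {e for e in combinations(range(n), 2) if random.random() < p}
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w); adj[w].add(u)
    on_short_cycle = set()
    for u, w in edges:
        adj[u].discard(w); adj[w].discard(u)        # look at G - e
        if bounded_dist(adj, u, w, ell - 1) is not None:
            on_short_cycle.add((u, w))              # e closes a cycle <= ell
        adj[u].add(w); adj[w].add(u)
    return edges - on_short_cycle

n, ell = 60, 4
p = n ** (1.0 / (ell - 1)) / (2 * n)                # the choice from the proof
g = delete_short_cycle_edges(n, ell, p)
```

Averaging `len(g)` over many runs illustrates the expectation bound of the lemma; a single run only shows that the construction leaves a nonempty graph of large girth.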
15.3.3 Proposition (Distortion versus dimension). Let Z be a d-dimensional normed space, such as some ℓ_p^d, and suppose that all n-point metric spaces can be D-embedded into Z. Let ℓ be an integer with D < ℓ ≤ 5D (it is essential that ℓ be strictly larger than D, while the upper bound is only for technical convenience). Then

    d ≥ m(ℓ, n) / ( n log₂(16Dℓ/(ℓ−D)) ).

Proof. Let G be a graph with vertex set V = {v_1, v_2, ..., v_n}, with m(ℓ, n) edges, and with girth at least ℓ+1. Let 𝒢 denote the set of all subgraphs H ⊆ G obtained from G by deleting some edges (but retaining all vertices). For each H ∈ 𝒢, we define a metric ρ_H on the set V by ρ_H(u, v) = min(ℓ, d_H(u, v)), where d_H(u, v) is the length of a shortest path connecting u and v in H.

The idea of the proof is that 𝒢 contains many essentially different metric spaces, and if the dimension of Z were small, then there would not be sufficiently many essentially different placements of n points in Z. Suppose that for every H ∈ 𝒢 there exists a D-embedding f_H: (V, ρ_H) → Z. By rescaling, we make sure that

    (1/D) ρ_H(u, v) ≤ ||f_H(u) − f_H(v)||_Z ≤ ρ_H(u, v)

for all u, v ∈ V. We may also assume that the images of all points are contained in the ball B_Z(0, ℓ) = {x ∈ Z : ||x||_Z ≤ ℓ}.

Set β = (1/4)(ℓ/D − 1). We have 0 < β ≤ 1. Let N be a β-net in B_Z(0, ℓ). The notion of β-net was defined above Lemma 13.1.1, and that lemma showed that a β-net in the (d−1)-dimensional Euclidean sphere has cardinality at most (4/β)^d. Exactly the same volume argument proves that in our case |N| ≤ (4ℓ/β)^d. For every H ∈ 𝒢, we define a new mapping g_H: V → N by letting g_H(v) be the nearest point to f_H(v) in N (ties resolved arbitrarily).

We prove that for distinct H_1, H_2 ∈ 𝒢, the mappings g_{H_1} and g_{H_2} are distinct. The edge sets of H_1 and H_2 differ, so we can choose a pair u, v of vertices that form an edge in one of them, say in H_1, and not in the other one (H_2).
We have ρ_{H_1}(u, v) = 1, while ρ_{H_2}(u, v) = ℓ, for otherwise, a u-v path in H_2 of length smaller than ℓ and the edge {u, v} would induce a cycle of length at most ℓ in G. Thus

    ||g_{H_1}(u) − g_{H_1}(v)||_Z ≤ ||f_{H_1}(u) − f_{H_1}(v)||_Z + 2β ≤ 1 + 2β

and

    ||g_{H_2}(u) − g_{H_2}(v)||_Z ≥ ||f_{H_2}(u) − f_{H_2}(v)||_Z − 2β ≥ ℓ/D − 2β = 1 + 2β.

Therefore, g_{H_1}(u) ≠ g_{H_2}(u) or g_{H_1}(v) ≠ g_{H_2}(v). We have shown that there are at least |𝒢| = 2^{m(ℓ,n)} distinct mappings V → N. The number of all mappings V → N is |N|^n, and so |N|^n ≥ 2^{m(ℓ,n)}. The bound in the proposition follows by calculation. □

15.3.4 Corollary ("Incompressibility" of general metric spaces). If Z is a normed space such that all n-point metric spaces can be D-embedded into Z, where D ≥ 1 is considered fixed and n → ∞, then we have

• dim Z = Ω(n) for D < 3,
• dim Z = Ω(√n) for D < 5,
• dim Z = Ω(n^{1/3}) for D < 7.

This follows from Proposition 15.3.3 by substituting the asymptotically optimal bounds for m(3, n), m(5, n), and m(7, n). The constant of proportionality in the first bound goes to 0 as D → 3, and similarly for the other bounds.

The corollary shows that there is no normed space of dimension significantly smaller than n in which one could represent all n-point metric spaces with distortion smaller than 3. So, for example, one cannot save much space by representing a general n-point metric space by the coordinates of points in some suitable normed space. It is very surprising that, as we will see later, it is possible to 3-embed all n-point metric spaces into a particular normed space of dimension close to √n. So the value 3 for the distortion is a real threshold! Similar thresholds occur at the values 5 and 7. Most likely this continues for all odd integers D, but we cannot prove this because of the lack of tight bounds for the number of edges in graphs without short cycles.

Another consequence of Proposition 15.3.3 concerns embeddings into Euclidean spaces, without any restriction on dimension.
15.3.5 Proposition (Lower bound on embedding into Euclidean spaces). For all n, there exist n-point metric spaces that cannot be embedded into ℓ_2 (i.e., into any Euclidean space) with distortion smaller than c log n / log log n, where c > 0 is a suitable positive constant.

Proof. If an n-point metric space is D-embedded into ℓ_2, then by the Johnson-Lindenstrauss flattening lemma, it can be (2D)-embedded into ℓ_2^d with d ≤ C log n for some specific constant C. For contradiction, suppose that D ≤ c_1 log n / log log n with a sufficiently small c_1 > 0. Set ℓ = 4D and assume that ℓ is an integer. By Lemma 15.3.2, we have m(ℓ, n) ≥ (1/9) n^{1+1/(ℓ−1)} ≥ C_1 n log n, where C_1 can be made as large as we wish by adjusting c_1. So Proposition 15.3.3 gives d ≥ (C_1/5) log n. If C_1 > 5C, we have a contradiction. □

In the subsequent sections the lower bound in Proposition 15.3.5 will be improved to Ω(log n) by a completely different method, and then we will see that this latter bound is tight.

Bibliography and remarks. The problem of constructing small graphs with given girth and minimum degree has a rich history; see, e.g., Bollobás [Bol85] for most of the earlier results. In the proof of Lemma 15.3.1 we have derived that any graph of minimum degree δ and girth 2k+1 has at least 1 + δ Σ_{i=0}^{k−1} (δ−1)^i vertices, and a similar lower bound for girth 2k is 2 Σ_{i=0}^{k−1} (δ−1)^i. Graphs
The constructions mentioned in the text attaining the asymptotically optimal value of m(ℓ, n) for several small ℓ are due to Benson [Ben66] (constructions with similar properties appeared earlier in Tits [Tit59], where they were investigated for different reasons). As for the other ℓ, graphs with the parameters given in the text were constructed by Lazebnik, Ustimenko, and Woldar [LUW95], [LUW96] by algebraic methods, improving on earlier bounds (such as those in Lubotzky, Phillips, Sarnak [LPS88]; also see the notes to Section 15.5). Proposition 15.3.5 and the basic idea of Proposition 15.3.3 were invented by Bourgain [Bou85]. The explicit use of graphs without short cycles and the detection of the "thresholds" in the behavior of the dimension as a function of the distortion appeared in Matoušek [Mat96b].

Proposition 15.3.3 implies that a normed space that should accommodate all n-point metric spaces with a given small distortion must have large dimension. But what if we consider just one n-point metric space M, and we ask for the minimum dimension of a normed space Z such that M can be D-embedded into Z? Here Z can be "customized" to M, and the counting argument as in the proof of Proposition 15.3.3 cannot work. By a nice different method, using the rank of certain matrices, Arias-de-Reyna and Rodríguez-Piazza [AR92] proved that for each D < 2, there are n-point metric spaces that do not D-embed into any normed space of dimension below c(D)·n, for some c(D) > 0. In [Mat96b] their technique was extended, and it was shown that for any D > 1, the required dimension is at least c(⌊D⌋)·n^{1/(2⌊D⌋)}, so for a fixed D it is at least a fixed power of n. The proof again uses graphs without short cycles. An interesting open problem is whether the possibility of selecting the norm in dependence on the metric can ever help substantially.
For example, we know that if we want one normed space for all n-point metric spaces, then a linear dimension is needed for all distortions below 3. But the lower bounds in [AR92], [Mat96b] for a customized normed space force linear dimension only for distortion D < 2. Can every n-point metric space M be 2.99-embedded, say, into some normed space Z = Z(M) of dimension o(n)?

We have examined the tradeoff between dimension and distortion when the distortion is a fixed number. One may also ask for the minimum distortion if the dimension d is fixed; this was considered in Matoušek [Mat90b]. For fixed d, all ℓp-norms on R^d are equivalent up to a constant, and so it suffices to consider embeddings into ℓ2^d. Considering the n-point metric space with all distances equal to 1, a simple volume argument shows that an embedding into ℓ2^d has distortion at least Ω(n^{1/d}). The exponent can be improved by a factor of roughly 2; more precisely, for any d ≥ 1, there exist n-point metric spaces requiring distortion Ω(n^{1/⌊(d+1)/2⌋}) for embedding into ℓ2^d (these spaces are even isometrically embeddable into ℓ2^{d+1}). They are obtained by taking a q-dimensional simplicial complex that cannot be embedded into R^{2q} (a Van Kampen–Flores complex; for a modern treatment see, e.g., [Sar91] or [Živ97]), considering a geometric realization of such a complex in R^{2q+1}, and filling it with points uniformly (taking an η-net within it for a suitable η, in the metric sense); see Exercise 3 below for the case q = 1. For d = 1 and d = 2, this bound is asymptotically tight, as can be shown by an inductive argument [Mat90b]. It is also almost tight for all even d.
An upper bound of O(n^{2/d} log^{3/2} n) for the distortion is obtained by first embedding the considered metric space into ℓ2 (Theorem 15.7.1) and then projecting on a random d-dimensional subspace; the analysis is similar to the proof of the Johnson–Lindenstrauss flattening lemma. It would be interesting to close the gap for odd d ≥ 3; the case d = 1 suggests that perhaps the lower bound might be the truth. It is also rather puzzling that the (suspected) bound for the distortion for fixed dimension, D ≈ n^{1/⌊(d+1)/2⌋}, looks optically similar to the (suspected) bound for the dimension given the distortion (Corollary 15.3.4), d ≈ n^{1/⌊(D+1)/2⌋}. Is this a pure coincidence, or is it trying to tell us something?

Exercises

1. (Erdős–Sachs construction) This exercise indicates an elegant proof, by Erdős and Sachs [ES63], of the existence of graphs without short cycles whose number of edges is not much smaller than in Lemma 15.3.2 and that are regular. Let ℓ ≥ 3 and δ ≥ 3.
(a) (Starting graph) For all δ and ℓ, construct a finite δ-regular graph G(δ, ℓ) with no cycles of length ℓ or shorter; the number of vertices does not matter. One possibility is by double induction: construct G(δ+1, ℓ) using G(δ, ℓ) and G(δ′, ℓ−1) with a suitable δ′.
(b) Let G be a δ-regular graph of girth at least ℓ+1 and let u and v be two vertices of G at distance at least ℓ+2. Delete them together with their incident edges, and connect their neighbors by a matching. Check that the resulting graph still does not contain any cycle of length at most ℓ.
(c) Show that starting with a graph as in (a) and reducing it by the operations as in (b), we arrive at a δ-regular graph of girth ℓ+1 and with at most 1 + δ + δ(δ−1) + ··· + δ(δ−1)^ℓ vertices. What is the resulting asymptotic lower bound for m(ℓ, n), with ℓ fixed and n → ∞?

2.
(Sparse spanners) Let G be a graph with n vertices and with positive real weights on its edges, which represent the edge lengths. A subgraph H of G is called a t-spanner of G if the distance of any two vertices u, v in H is no more than t times their distance in G (both distances are measured in the shortest-path metric). Using Lemma 15.3.1, prove that for every G and every integer t ≥ 2, there exists a t-spanner with O(n^{1+1/⌊t/2⌋}) edges.

3. Let G_n denote the graph arising from K_5, the complete graph on 5 vertices, by subdividing each edge n−1 times; that is, every two of the original vertices of K_5 are connected by a path of length n. Prove that the vertex set of G_n, considered as a metric space with the graph-theoretic distance, cannot be embedded into the plane with distortion smaller than const·n.

4. (Another lower bound for the flattening lemma)
(a) Given ε ∈ (0, 1/2) and n sufficiently large in terms of ε, construct a collection 𝒱 of ordered n-tuples of points of ℓ2 such that the distance of every two points in each V ∈ 𝒱 is between two suitable constants, no two V ≠ V′ ∈ 𝒱 can have the same (1+ε)-embedding (that is, there are i, j such that the distances between the ith point and the jth point in V and in V′ differ by a factor of at least 1+ε), and log |𝒱| = Ω(ε⁻²·n log n).
(b) Use (a) and the method of this section to prove a lower bound of Ω(ε⁻²·log n/log(1/ε)) for the dimension in the Johnson–Lindenstrauss flattening lemma.

15.4 A Lower Bound for the Hamming Cube

We have established the existence of n-point metric spaces requiring distortion close to log n for embedding into ℓ2 (Proposition 15.3.5), but we have not constructed any specific metric space with this property. In this section we prove a weaker lower bound, only Ω(√(log n)), but for a specific and very simple space: the Hamming cube.
Later on, we extend the proof method and exhibit metric spaces with an Ω(log n) lower bound, which turns out to be optimal.

We recall that C_m denotes the space {0, 1}^m with the Hamming (or ℓ1) metric, where the distance of two 0/1 sequences is the number of places where they differ.

15.4.1 Theorem. Let m ≥ 2 and n = 2^m. Then there is no D-embedding of the Hamming cube C_m into ℓ2 with D < √m = √(log₂ n). That is, the natural embedding, where we regard {0, 1}^m as a subspace of ℓ2^m, is optimal.

The reader may remember, perhaps with some dissatisfaction, that at the beginning of this chapter we mentioned the 4-cycle as an example of a metric space that cannot be isometrically embedded into any Euclidean space, but we gave no reason. Now we are obliged to rectify this, because the 4-cycle is just the 2-dimensional Hamming cube. The intuitive reason why the 4-cycle cannot be embedded isometrically is that if we embed the vertices so that the edges have the right length, then at least one of the diagonals is too short. We make this precise using a slightly more complicated notation than necessary, in anticipation of later developments.

Let V be a finite set, let ρ be a metric on V, and let E and F be nonempty sets of pairs of points of V. As our running example, V = {v1, ..., v4} is the set of vertices of the 4-cycle, ρ is the graph metric on it, E = {{v1, v2}, {v2, v3}, {v3, v4}, {v4, v1}} are the edges, and F = {{v1, v3}, {v2, v4}} are the diagonals.

(Figure: the 4-cycle v1v2v3v4, with the pairs of E drawn as edges and the pairs of F as dashed diagonals.)

Let us introduce the abbreviated notation

  ρ²(E) = Σ_{{u,v}∈E} ρ(u, v)².

We consider the ratio

  R_{E,F}(ρ) = ( ρ²(F) / ρ²(E) )^{1/2};

the subscripts E, F will be omitted unless there is danger of confusion. For our 4-cycle, R(ρ) is a kind of ratio of "diagonals to edges," but with quadratic averages of distances, and it equals √2 (right?).
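For concreteness, here is a small pure-Python check (my own sketch, not part of the book): the graph metric of the 4-cycle has R_{E,F} = √2, while the same ratio computed for randomly generated Euclidean placements of the four vertices never exceeds 1, which is the phenomenon the next lemma makes precise.

```python
# Sketch (mine): the ratio R_{E,F} for the 4-cycle, for the graph metric
# and for random Euclidean embeddings of the four vertices.
import math, random, itertools

E = [(0, 1), (1, 2), (2, 3), (3, 0)]   # edges of the 4-cycle
F = [(0, 2), (1, 3)]                   # diagonals
rho = {frozenset(e): 1 for e in E}
rho.update({frozenset(f): 2 for f in F})  # graph metric: edges 1, diagonals 2

def ratio(dist2):
    """R_{E,F} = (sum of squared F-distances / sum over E) ** 0.5."""
    num = sum(dist2[frozenset(f)] for f in F)
    den = sum(dist2[frozenset(e)] for e in E)
    return math.sqrt(num / den)

R_rho = ratio({p: d * d for p, d in rho.items()})

random.seed(0)
worst = 0.0
for _ in range(2000):
    pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    d2 = {frozenset(p): sum((a - b) ** 2 for a, b in zip(pts[p[0]], pts[p[1]]))
          for p in itertools.combinations(range(4), 2)}
    worst = max(worst, ratio(d2))
print(R_rho, worst)   # sqrt(2) for the graph metric; the Euclidean ratio stays <= 1
```

Random search of course proves nothing; the short diagonals lemma below shows R(σ) ≤ 1 for every Euclidean placement.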
Next, let f: V → ℓ2^d be a D-embedding of the considered metric space into a Euclidean space. This defines another metric σ on V: σ(u, v) = ‖f(u) − f(v)‖. With the same E and F, let us now look at the ratio R(σ). If f is a D-embedding, then R(σ) ≥ R(ρ)/D. But according to the idea mentioned above, in any embedding of the 4-cycle into a Euclidean space the diagonals are always too short, and so R(σ) can be expected to be smaller than √2 in this case. This is confirmed by the following lemma, which (with x_i = f(v_i)) shows that R(σ) ≤ 1 and therefore D ≥ √2.

15.4.2 Lemma (Short diagonals lemma). Let x1, x2, x3, x4 be arbitrary points in a Euclidean space. Then

  ‖x1 − x3‖² + ‖x2 − x4‖² ≤ ‖x1 − x2‖² + ‖x2 − x3‖² + ‖x3 − x4‖² + ‖x4 − x1‖².

Proof. Four points can be assumed to lie in R³, so one could start some stereometric calculations. But a better way is to observe that it suffices to prove the lemma for points on the real line! Indeed, for the x_i in some R^d we can write the 1-dimensional inequality for each coordinate and then add these inequalities together. (This is the reason for using squares in the definition of the ratio R(σ): squares of Euclidean distances split into the contributions of individual coordinates, and so they are easier to handle than the distances themselves.) If the x_i are real numbers, we calculate

  (x1−x2)² + (x2−x3)² + (x3−x4)² + (x4−x1)² − (x1−x3)² − (x2−x4)² = (x1 − x2 + x3 − x4)² ≥ 0,

and this is the desired inequality. □

Proof of Theorem 15.4.1. We proceed as in the 2-dimensional case. Let V = {0, 1}^m be the vertex set of C_m, let ρ be the Hamming metric, let E be the set of edges of the cube (pairs of points at distance 1), and let F be the set of the long diagonals. The long diagonals are pairs of points at distance m, or in other words, pairs {u, ū}, u ∈ V, where ū is the vector arising from u by changing 0's to 1's and 1's to 0's. We have |E| = m·2^{m−1} and |F| = 2^{m−1}, and we calculate R_{E,F}(ρ) = √m.
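These counts, and the value R_{E,F}(ρ) = √m, can be confirmed by brute force for small m (a sketch of mine, not part of the proof):

```python
# Brute-force check (my sketch): in C_m the edge set E has m * 2**(m-1) pairs,
# the long diagonals F number 2**(m-1), and R_{E,F}(rho) = sqrt(m).
import math
from itertools import combinations, product

for m in range(2, 7):
    V = list(product([0, 1], repeat=m))
    ham = lambda u, v: sum(a != b for a, b in zip(u, v))
    E = [(u, v) for u, v in combinations(V, 2) if ham(u, v) == 1]
    F = [(u, v) for u, v in combinations(V, 2) if ham(u, v) == m]
    assert len(E) == m * 2 ** (m - 1)
    assert len(F) == 2 ** (m - 1)
    rho2E = sum(ham(u, v) ** 2 for u, v in E)   # = |E|, since all E-distances are 1
    rho2F = sum(ham(u, v) ** 2 for u, v in F)   # = |F| * m**2
    assert abs(math.sqrt(rho2F / rho2E) - math.sqrt(m)) < 1e-12
print("counts and R = sqrt(m) verified for m = 2..6")
```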
If σ is a metric on V induced by some embedding f: V → ℓ2^d, we want to show that R_{E,F}(σ) ≤ 1; this will give the theorem. So we need to prove that σ²(F) ≤ σ²(E). This follows from the inequality for the 4-cycle (Lemma 15.4.2) by a convenient induction. The basis for m = 2 is directly Lemma 15.4.2. For larger m, we divide the vertex set V into two parts V0 and V1, where V0 are the vectors with the last component 0, i.e., of the form u0, u ∈ {0, 1}^{m−1}. The set V0 induces an (m−1)-dimensional subcube. Let E0 be its edge set and F0 the set of its long diagonals; that is, F0 = {{u0, ū0}: u ∈ {0, 1}^{m−1}}, and similarly for E1 and F1. Let E01 = E \ (E0 ∪ E1) be the edges of the m-dimensional cube going between the two subcubes. By induction, we have

  σ²(F0) ≤ σ²(E0)  and  σ²(F1) ≤ σ²(E1).

For u ∈ {0, 1}^{m−1}, we consider the quadrilateral with vertices u0, u1, ū1, ū0; for u = 00, it is indicated in the picture:

(Figure: the 3-dimensional cube with the quadrilateral through 000, 001, 111, 110 highlighted.)

Its sides are two edges of E01, one diagonal from F0, and one from F1, and its diagonals are from F. If we write the inequality of Lemma 15.4.2 for this quadrilateral and sum up over all such quadrilaterals (there are 2^{m−2} of them, since u and ū yield the same quadrilateral), we get

  σ²(F) ≤ σ²(E01) + σ²(F0) + σ²(F1).

By the inductive assumption for the two subcubes, the right-hand side is at most σ²(E01) + σ²(E0) + σ²(E1) = σ²(E). □

Bibliography and remarks. Theorem 15.4.1, found by Enflo [Enf69], is probably the first result showing an unbounded distortion for embeddings into Euclidean spaces. Enflo considered the problem of uniform embeddability among Banach spaces, and the distortion was an auxiliary device in his proof.

Exercises

1. Consider the second graph in the introductory section, the star with 3 leaves, and prove a lower bound of 2/√3 for the distortion required to embed it into a Euclidean space. Follow the method used for the 4-cycle.

2. (Planar graphs badly embeddable into ℓ2) Let G0, G1, ... be the following graphs:
(Figure: the graphs G0 and G1.)

G_{i+1} is obtained from G_i by replacing each edge by a square with two new vertices. Using the short diagonals lemma and the method of this section, prove that any Euclidean embedding of G_m (with the graph metric) requires distortion at least √(m+1). This result is due to Newman and Rabinovich [NR01].

3. (Almost Euclidean subspaces) Prove that for every k and every ε > 0 there exists n = n(k, ε) such that every n-point metric space (X, ρ) contains a k-point subspace that is (1+ε)-embeddable into ℓ2. Use Ramsey's theorem. This result is due to Bourgain, Figiel, and Milman [BFM86]; it is a kind of analogue of Dvoretzky's theorem for metric spaces.

15.5 A Tight Lower Bound via Expanders

Here we provide an explicit example of an n-point metric space that requires distortion Ω(log n) for embedding into any Euclidean space. It is the vertex set of a constant-degree expander G with the graph metric. In the proof we are going to use bounds on the second eigenvalue of G, but for readers not familiar with the important notion of expander graphs, we first include a little wider background.

Roughly speaking, expanders are graphs that are sparse but well connected. If a model of an expander is made with vertices being little balls and edges being thin strings, it is difficult to tear off any subset of vertices, and the more vertices we want to tear off, the larger the effort that is needed. More formally, we define the edge expansion (also called the conductance) Φ(G) of a graph G = (V, E) as

  Φ(G) = min { e(A, V \ A) / |A| : A ⊆ V, 1 ≤ |A| ≤ |V|/2 },

where e(A, B) is the number of edges of G going between A and B. One can say, still somewhat imprecisely, that a graph G is a good expander if Φ(G) is bounded below by some positive constant. In this section we will consider r-regular graphs for a suitable constant r ≥ 3, say r = 3. We need r-regular graphs with an arbitrarily large number n of vertices and with edge expansion bounded below by a positive constant independent of n. Such graphs are usually called constant-degree expanders.³
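The definition can be evaluated exhaustively for small graphs. In this sketch (my own, with graphs chosen only for illustration), the cycle C_8 has small edge expansion, while the complete graph K_8 expands much better:

```python
# Exhaustive computation (my sketch) of the edge expansion Phi(G) from the
# definition, for the cycle C_8 and for the complete graph K_8.
from itertools import combinations

def edge_expansion(n, edges):
    phi = float("inf")
    for size in range(1, n // 2 + 1):
        for A in combinations(range(n), size):
            A = set(A)
            cut = sum((u in A) != (v in A) for u, v in edges)  # e(A, V \ A)
            phi = min(phi, cut / len(A))
    return phi

cycle = [(i, (i + 1) % 8) for i in range(8)]
complete = list(combinations(range(8), 2))
phi_cycle = edge_expansion(8, cycle)        # a contiguous arc of 4 vertices gives 2/4
phi_complete = edge_expansion(8, complete)  # cut/|A| = 8 - |A|, minimized at |A| = 4
print(phi_cycle, phi_complete)   # 0.5 and 4.0
```

For the cycle C_n the minimum 2/(n/2) tends to 0 as n grows, in line with the remark below that grid-like and planar graphs cannot be constant-degree expanders.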
It is useful to note that, for example, the edge expansion of the n×n planar square grid tends to 0 as n → ∞. More generally, it is known that constant-degree expanders cannot be planar; they must be much more tangled than planar graphs. The existence of constant-degree expanders is not difficult to prove by the probabilistic method; for every fixed r ≥ 3, random r-regular graphs provide very good expanders. With considerable effort, explicit constructions have been found as well; see the notes to this section.

³ A rigorous definition should be formulated for an infinite family of graphs. A family {G1, G2, ...} of r-regular graphs with |V(G_i)| → ∞ as i → ∞ is a family of constant-degree expanders if the edge expansion of all the G_i is bounded below by a positive constant independent of i.

Let us remark that several notions similar to edge expansion appear in the literature, and each of them can be used for quantifying how good an expander a given graph is (but they usually lead to an equivalent notion of a family of constant-degree expanders). Often it is also useful to consider nonregular expanders or expanders with larger than constant degree, but regular constant-degree expanders are probably used most frequently.

Now we pass to the second eigenvalue. For our purposes it is most convenient to talk about eigenvalues of the Laplacian of the considered graph. Let G = (V, E) be an r-regular graph. The Laplacian matrix L_G of G is an n×n matrix, n = |V|, with both rows and columns indexed by the vertices of G, defined by

  (L_G)_{uv} =  r   if u = v,
               −1   if u ≠ v and {u, v} ∈ E(G),
                0   otherwise.

It is a symmetric positive semidefinite real matrix, and it has n real eigenvalues μ1 = 0 ≤ μ2 ≤ ··· ≤ μn.
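A minimal pure-Python sketch (mine) of this definition, together with the identity xᵀL_G x = Σ_{{u,v}∈E} (x_u − x_v)², which is used repeatedly below; the 3-regular test graph is an arbitrary choice:

```python
# Sketch (mine): the Laplacian matrix of an r-regular graph, and the identity
#   x^T L_G x = sum over edges {u,v} of (x_u - x_v)^2.
import random

def laplacian(n, edges, r):
    L = [[0] * n for _ in range(n)]
    for v in range(n):
        L[v][v] = r                  # degree on the diagonal
    for u, v in edges:
        L[u][v] = L[v][u] = -1       # -1 for adjacent pairs
    return L

n = 10
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, i + 5) for i in range(5)]
# a 10-cycle plus 5 "diameters": a 3-regular graph on 10 vertices
degrees = [sum(1 for e in edges if i in e) for i in range(n)]
assert degrees == [3] * n

L = laplacian(n, edges, 3)
random.seed(1)
x = [random.gauss(0, 1) for _ in range(n)]
quad = sum(x[u] * L[u][v] * x[v] for u in range(n) for v in range(n))
by_edges = sum((x[u] - x[v]) ** 2 for u, v in edges)
assert abs(quad - by_edges) < 1e-9
print(round(quad, 6), round(by_edges, 6))
```

Since the quadratic form is a sum of squares, positive semidefiniteness of L_G is immediate from this identity.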
The second eigenvalue μ2 = μ2(G) is a fundamental parameter of the graph G.⁴ Somewhat similarly to edge expansion, μ2(G) describes how much G "holds together," but in a different way. The edge expansion and μ2(G) are related, but they do not determine each other. For every r-regular graph G, we have μ2(G) ≥ Φ(G)²/(2r) (see, e.g., Lovász [Lov93], Exercise 11.31 for a proof) and μ2(G) ≤ 2Φ(G) (Exercise 6). Both the lower and the upper bound can almost be attained for some graphs.

For our application below, we need the following fact: There are constants r and β > 0 such that for sufficiently many values of n (say for at least one n between 10^k and 10^{k+1}), there exists an n-vertex r-regular graph G with μ2(G) ≥ β. This follows from the existence results for constant-degree expanders mentioned above (random 3-regular graphs will do, for example), and actually most of the known explicit constructions of expanders bound the second eigenvalue directly.

We are going to use the lower bound on μ2(G) via the following fact: For all real vectors x = (x_v)_{v∈V} with Σ_{v∈V} x_v = 0, we have

  xᵀ L_G x ≥ μ2 ‖x‖².     (15.3)

To understand what is going on here, we recall that every symmetric real n×n matrix has n real eigenvalues (not necessarily distinct), and the corresponding n unit eigenvectors b1, b2, ..., bn form an orthonormal basis of Rⁿ. For the matrix L_G, the unit eigenvector b1 belonging to the eigenvalue μ1 = 0 is n^{−1/2}(1, 1, ..., 1).

⁴ The notation μ_i for the eigenvalues of L_G is not standard. We use it in order to distinguish these eigenvalues from the eigenvalues λ1 ≥ λ2 ≥ ··· ≥ λn of the adjacency matrix A_G usually considered in the literature, where (A_G)_{uv} = 1 if {u, v} ∈ E(G) and (A_G)_{uv} = 0 otherwise. Here we deal exclusively with regular graphs, for which the eigenvalues of A_G are related to those of L_G in a very simple way: λ_i = r − μ_i, i = 1, 2, ..., n, for any r-regular graph.
So the condition Σ_{v∈V} x_v = 0 means the orthogonality of x to b1, and we have x = Σ_{i=1}^n a_i b_i for suitable real a_i with a1 = 0. We calculate, using xᵀb_i = a_i,

  xᵀ L_G x = Σ_{i=2}^n xᵀ(a_i L_G b_i) = Σ_{i=2}^n a_i μ_i xᵀb_i = Σ_{i=2}^n a_i² μ_i ≥ μ2 Σ_{i=2}^n a_i² = μ2 ‖x‖².

This proves (15.3), and we can also see that x = b2 yields equality in (15.3). So we can write μ2 = min{ xᵀ L_G x : ‖x‖ = 1, Σ_{v∈V} x_v = 0 } (this is a special case of the variational definition of eigenvalues discussed in many textbooks of linear algebra).

Now we are ready to prove the main result of this section.

15.5.1 Theorem (Expanders are badly embeddable into ℓ2). Let G be an r-regular graph on an n-element vertex set V with μ2(G) ≥ β, where r ≥ 3 and β > 0 are constants, and let ρ be the shortest-path metric on V. Then the metric space (V, ρ) cannot be D-embedded into a Euclidean space for D < c·log n, where c = c(r, β) > 0 is independent of n.

Proof. We again consider the ratios R_{E,F}(ρ) and R_{E,F}(σ) as in the proof for the cube (Theorem 15.4.1). This time we let E be the edge set of G, and we let F consist of all pairs of distinct vertices. In the graph metric, all pairs in E have distance 1, while most pairs in F have distance about log n, as we will check below. On the other hand, it turns out that in any embedding into ℓ2 such that all the distances in E are at most 1, a typical distance in F is only O(1). The calculations follow.

We have ρ²(E) = |E| = nr/2. To bound ρ²(F) from below, we observe that for each vertex v0 there are at most 1 + r + r(r−1) + ··· + r(r−1)^{k−1} ≤ r^{k+1} vertices at distance at most k from v0. So for k = ⌊log_r(n/2)⌋ − 1, at least half of the pairs in F have distance more than k, and we obtain ρ²(F) = Ω(n²k²) = Ω(n² log² n). Thus R_{E,F}(ρ) = Ω(√n · log n).

Let f: V → ℓ2^d be an embedding into a Euclidean space, and let σ be the metric induced by it on V.
To prove the theorem, it suffices to show that R_{E,F}(σ) = O(√n); that is,

  σ²(F) = O(n) · σ²(E).

By the observation in the proof of Lemma 15.4.2 about splitting into coordinates, it is enough to prove this inequality for a one-dimensional embedding. So for every choice of real numbers (x_v)_{v∈V}, we want to show that

  Σ_{{u,v}∈F} (x_u − x_v)² = O(n) · Σ_{{u,v}∈E} (x_u − x_v)².     (15.4)

By adding a suitable number to all the x_v, we may assume that Σ_{v∈V} x_v = 0. This does not change anything in (15.4), but it allows us to relate both sides to the Euclidean norm of the vector x. We calculate, using Σ_{v∈V} x_v = 0,

  Σ_{{u,v}∈F} (x_u − x_v)² = (n−1) Σ_{v∈V} x_v² − Σ_{u≠v} x_u x_v = n Σ_{v∈V} x_v² − ( Σ_{v∈V} x_v )² = n ‖x‖².

For the right-hand side of (15.4), the Laplacian matrix enters:

  Σ_{{u,v}∈E} (x_u − x_v)² = r Σ_{v∈V} x_v² − 2 Σ_{{u,v}∈E} x_u x_v = xᵀ L_G x ≥ μ2 ‖x‖²,

the last inequality being (15.3). This establishes (15.4) and concludes the proof of Theorem 15.5.1. □

The proof actually shows that the maximum of R_{E,F}(σ) over all Euclidean metrics σ equals (n/μ2)^{1/2} (which is an interesting geometric interpretation of μ2). The maximum is attained for the σ induced by the mapping V → R specified by b2, the eigenvector belonging to μ2.

The cone of squared ℓ2-metrics and universality of the lower-bound method. For the Hamming cubes we obtained the exact minimum distortion required for a Euclidean embedding. This was due to the lucky choice of the sets E and F of point pairs. As we will see below, a "lucky" choice, leading to an exact bound, exists for every finite metric space if we allow for sets of weighted pairs.

Let (V, ρ) be a finite metric space and let η and φ be weight functions assigning a nonnegative real number to each pair of points of V. We define

  ρ²(η) = Σ_{{u,v}} η(u, v) ρ(u, v)²,

and similarly for ρ²(φ), and we let

  R_{η,φ}(ρ) = ( ρ²(η) / ρ²(φ) )^{1/2}.

15.5.2 Proposition. Let (V, ρ) be a finite metric space and let D ≥ 1 be the smallest number such that (V, ρ) can be D-embedded into ℓ2.
Then there are weight functions η, φ such that R_{η,φ}(ρ) ≥ D and R_{η,φ}(σ) ≤ 1 for any metric σ induced on V by an embedding into ℓ2.

Thus the exact lower bound for the embeddability into Euclidean spaces always has an "easy" proof, provided that we can guess the right weight functions η and φ. (As we will see below, there is even an efficient algorithm for deciding D-embeddability into ℓ2.)

Proposition 15.5.2 is included mainly because of generally useful concepts appearing in its proof.

Let V be a fixed n-point set. An arbitrary function assigning a real number x_{uv} to each pair {u, v} of points of V can be regarded as a point x ∈ R^N, where N = n(n−1)/2 is the number of pairs. Let L2 ⊆ R^N denote the set of all squared Euclidean metrics on V; that is, x ∈ L2 if there is an embedding f: V → ℓ2 with x_{uv} = ‖f(u) − f(v)‖² for every pair {u, v}. We claim that L2 is a convex cone. Clearly, λx ∈ L2 whenever x ∈ L2 and λ ≥ 0, and so it suffices to verify that if x, y ∈ L2, then x + y ∈ L2. Let x, y ∈ L2 correspond to embeddings f: V → ℓ2^k and g: V → ℓ2^m, respectively. We define a new embedding h: V → ℓ2^{k+m} by concatenating the coordinates of f and g; that is,

  h(v) = (f(v)1, ..., f(v)k, g(v)1, ..., g(v)m) ∈ ℓ2^{k+m}.

The point of L2 corresponding to h is x + y. □

Proof of Proposition 15.5.2. Suppose that (V, ρ) cannot be D-embedded into any Euclidean space. We are going to exhibit η and φ with R_{η,φ}(ρ) > D and R_{η,φ}(σ) ≤ 1 for every Euclidean σ. The claim of the proposition is easily derived from this by a compactness argument.

Let L2 ⊆ R^N be the cone of squared Euclidean metrics on V as above, and let

  K = { (x_{uv}) ∈ R^N : there exists an r > 0 with r²ρ(u, v)² ≤ x_{uv} ≤ D²r²ρ(u, v)² for all u, v }.

This K includes all squares of metrics arising by D-embeddings of (V, ρ). But not all elements of K are necessarily squares of metrics, since the triangle inequality may be violated. Since there is no Euclidean D-embedding of (V, ρ), we have K ∩ L2 = ∅. Both K and L2 are convex sets in R^N, and so they can be separated by a hyperplane, by the separation theorem (Theorem 1.2.4). Moreover, since L2 is a cone and K is a cone minus the origin 0, the separating hyperplane has to pass through 0.
So there is an a ∈ R^N such that

  ⟨a, x⟩ > 0 for all x ∈ K  and  ⟨a, x⟩ ≤ 0 for all x ∈ L2.     (15.5)

Using this a, we define the desired η and φ:

  η(u, v) = a_{uv} if a_{uv} > 0, and η(u, v) = 0 otherwise;
  φ(u, v) = −a_{uv} if a_{uv} < 0, and φ(u, v) = 0 otherwise.

First we show that R_{η,φ}(ρ) > D. To this end, we employ the property (15.5) for the following x ∈ K:

  x_{uv} = ρ(u, v)² if a_{uv} > 0, and x_{uv} = D²ρ(u, v)² if a_{uv} ≤ 0.

Then ⟨a, x⟩ > 0 boils down to ρ²(η) − D²ρ²(φ) > 0, which means that R_{η,φ}(ρ) > D.

Next, let σ be a metric induced by a Euclidean embedding of V. This time we apply ⟨a, x⟩ ≤ 0 with the x ∈ L2 corresponding to σ, i.e., x_{uv} = σ(u, v)². This yields σ²(η) − σ²(φ) ≤ 0, and so R_{η,φ}(σ) ≤ 1. This proves Proposition 15.5.2. □

Algorithmic remark: Euclidean embeddings and semidefinite programming. The problem of deciding whether a given n-point metric space (V, ρ) admits a D-embedding into ℓ2 (i.e., into a Euclidean space without restriction on the dimension), for a given D ≥ 1, can be solved by a polynomial-time algorithm. Let us stress that the dimension of the target Euclidean space cannot be prescribed in this method. If we insist that the embedding be into ℓ2^d, for some given d, we obtain a different algorithmic problem, and it is not known how hard it is. Many other similar-looking embedding problems are known to be NP-hard, such as the problem of D-embedding into ℓ1.

The algorithm for D-embedding into ℓ2 is based on a powerful technique called semidefinite programming, where the problem is expressed as the existence of a positive semidefinite matrix in a suitable convex set of matrices. Let (V, ρ) be an n-point metric space, let f: V → Rⁿ be an embedding, and let X be the n×n matrix whose columns are indexed by the elements of V and whose vth column is the vector f(v) ∈ Rⁿ. The matrix Q = XᵀX has both rows and columns indexed by the points of V, and the entry q_{uv} is the scalar product ⟨f(u), f(v)⟩.
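As a quick sanity check (my own sketch, with a random point set of my choosing), the Gram matrix Q = XᵀX indeed determines all squared distances, and its quadratic form is ‖Xx‖², hence nonnegative:

```python
# Sketch (mine): for points f(v) in R^n, the Gram matrix Q = X^T X encodes all
# squared distances via  sigma(u,v)^2 = q_uu + q_vv - 2 q_uv,  and x^T Q x = ||Xx||^2.
import random

random.seed(2)
n = 5
f = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]  # f[v] is a point in R^n

Q = [[sum(f[u][i] * f[v][i] for i in range(n)) for v in range(n)] for u in range(n)]

for u in range(n):
    for v in range(n):
        dist2 = sum((a - b) ** 2 for a, b in zip(f[u], f[v]))
        assert abs(dist2 - (Q[u][u] + Q[v][v] - 2 * Q[u][v])) < 1e-9

# Positive semidefiniteness: x^T Q x = ||X x||^2 >= 0, where X has columns f(v).
for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(n)]
    Xx = [sum(f[v][i] * x[v] for v in range(n)) for i in range(n)]
    quad = sum(x[u] * Q[u][v] * x[v] for u in range(n) for v in range(n))
    assert quad >= -1e-9
    assert abs(quad - sum(c * c for c in Xx)) < 1e-9
print("Gram identities verified")
```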
The matrix Q is positive semidefinite, since for any x ∈ Rⁿ we have xᵀQx = (xᵀXᵀ)(Xx) = ‖Xx‖² ≥ 0. (In fact, as is not too difficult to check, a real symmetric n×n matrix P is positive semidefinite if and only if it can be written as P = XᵀX for some real n×n matrix X.) Let σ(u, v) = ‖f(u) − f(v)‖ = ⟨f(u) − f(v), f(u) − f(v)⟩^{1/2}. We can express

  σ(u, v)² = ⟨f(u), f(u)⟩ + ⟨f(v), f(v)⟩ − 2⟨f(u), f(v)⟩ = q_{uu} + q_{vv} − 2q_{uv}.

Therefore, the space (V, ρ) can be D-embedded into ℓ2 if and only if there exists a symmetric real positive semidefinite matrix Q whose entries satisfy the following constraints:

  ρ(u, v)² ≤ q_{uu} + q_{vv} − 2q_{uv} ≤ D²ρ(u, v)²  for all u, v ∈ V.

These are linear inequalities for the unknown entries of Q. The problem of finding a positive semidefinite matrix whose entries satisfy a given system of linear inequalities can be solved efficiently, in time polynomial in the size of the unknown matrix Q and in the number of the linear inequalities. The algorithm is not simple; we say a little more about it in the remarks below.

Bibliography and remarks. Theorem 15.5.1 was proved by Linial, London, and Rabinovich [LLR95]. This influential paper introduced methods and results concerning low-distortion embeddings, developed in the local theory of Banach spaces, into theoretical computer science, and it gave several new results and algorithmic applications. It is very interesting that using low-distortion Euclidean embeddings, one obtains algorithmic results for certain graph problems that until then could not be attained by other methods, although the considered problems look purely graph-theoretic, without any geometric structure. A simple but important example is presented at the end of Section 15.7. The bad embeddability of expanders was formulated and proved in [LLR95] in connection with the problem of multicommodity flows in graphs.
The proof was similar to the one shown above, but it established an Ω(log n) bound for embedding into ℓ1. The result for Euclidean spaces is a corollary, since every finite Euclidean metric space can be isometrically embedded into ℓ1 (Exercise 5). An inequality similar to (15.4) was used, but with squares of differences replaced by absolute values of differences. Such an inequality was well known for expanders. The method of [LLR95] was generalized to embeddings into ℓp-spaces with arbitrary p in [Mat97]; it was shown that the minimum distortion required to embed all n-point metric spaces into ℓp is of order log n, and a matching upper bound was proved by the method shown in Section 15.7.

The proof of Theorem 15.5.1 given in the text can easily be extended to prove a lower bound for ℓ1-embeddability as well. It actually shows that distortion Ω(log n) is needed for approximating the expander metric by a squared Euclidean metric, and every ℓ1-metric is a squared Euclidean metric. Squared Euclidean metrics do not generally satisfy the triangle inequality, but that is not needed in the proof. Those squared Euclidean metrics that do satisfy the triangle inequality are sometimes called the metrics of negative type. Not all of these metrics are ℓ1-metrics, but a challenging conjecture (made by Linial and independently by Goemans) states that perhaps they are not very far from ℓ1-metrics: Each metric of negative type might be embeddable into ℓ1 with distortion bounded by a universal constant. If true, this would have significant algorithmic consequences: Many problems can be formulated as optimization over the cone of all ℓ1-metrics, which is computationally intractable, and the metrics of negative type would provide a good and algorithmically manageable approximation.
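The defining inequality of negative type has a one-line algebraic reason for squared Euclidean metrics: for reals a_v with Σ_v a_v = 0, one has Σ_{u,v} a_u a_v ‖p_u − p_v‖² = −2‖Σ_v a_v p_v‖² ≤ 0. A small numeric sketch of this identity (mine, with random data):

```python
# Numeric sketch (mine): squared Euclidean metrics are of negative type.
# For points p_v and reals a_v with sum a_v = 0,
#   sum_{u,v} a_u a_v ||p_u - p_v||^2 = -2 || sum_v a_v p_v ||^2 <= 0.
import random

random.seed(3)
n, d = 7, 3
pts = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
a = [random.gauss(0, 1) for _ in range(n)]
shift = sum(a) / n
a = [ai - shift for ai in a]            # enforce sum a_v = 0

lhs = sum(a[u] * a[v] * sum((x - y) ** 2 for x, y in zip(pts[u], pts[v]))
          for u in range(n) for v in range(n))
center = [sum(a[v] * pts[v][i] for v in range(n)) for i in range(d)]
rhs = -2 * sum(c * c for c in center)
assert abs(lhs - rhs) < 1e-9 and lhs <= 1e-9
print(round(lhs, 6), round(rhs, 6))
```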
The formulation of the minimum distortion problem for Euclidean embeddings as a semidefinite program is also due to [LLR95], as is Proposition 15.5.2. These ideas were further elaborated and applied in examples by Linial and Magen [LM00]. The proof of Proposition 15.5.2 given in the text is simpler than that in [LLR95], and it extends to ℓp-embeddability (Exercise 4), unlike the formulation of the D-embedding problem as a semidefinite program. It was communicated to me by Yuri Rabinovich.

A further significant progress in lower bounds for ℓ2-embeddings of graphs was made by Linial, Magen, and Naor [LMN01]. They proved that the metric of every r-regular graph, r ≥ 3, of girth g requires distortion at least Ω(√g) for embedding into ℓ2 (an Ω(g) lower bound was conjectured in [LLR95]). They give two proofs, one based on the concept of Markov type of a metric space due to Ball [Bal92], and another that we now outline (adapted to the notation of this section).

Let G = (V, E) be an r-regular graph of girth 2t+1 or 2t+2 for some integer t ≥ 1, and let ρ be the metric of G. We set F = {{u, v}: ρ(u, v) = t}; note that the graph H = (V, F) is s-regular for s = r(r−1)^{t−1}. Calculating R_{E,F}(ρ) is trivial, and it remains to bound R_{E,F}(σ) for all Euclidean metrics σ on V, which amounts to finding the largest β > 0 such that σ²(E) − β·σ²(F) ≥ 0 for all σ. Here it suffices to consider line metrics σ; so let x_v ∈ R be the image of v in the embedding V → R inducing σ. We may assume Σ_{v∈V} x_v = 0 and, as in the proof in the text,

  σ²(E) = Σ_{{u,v}∈E} (x_u − x_v)² = xᵀ L_G x = xᵀ(rI − A_G)x,

where I is the identity matrix and A_G is the adjacency matrix of G, and similarly for σ²(F). So we require xᵀCx ≥ 0 for all x with Σ_{v∈V} x_v = 0, where C = (r − βs)I − A_G + βA_H. It turns out that there is a degree-t polynomial P_t(x) such that A_H = P_t(A_G) (here we need that the girth of G exceeds 2t). This P_t(x) is called the Geronimus polynomial, and it is not hard to derive a recurrence for it:
This P_t(x) is called the Geronimus polynomial, and it is not hard to derive a recurrence for it: P_0(x) = 1, P_1(x) = x, P_2(x) = x² − r, and P_t(x) = x·P_{t−1}(x) − (r−1)·P_{t−2}(x) for t ≥ 3. So C = Q(A_G) for Q(x) = r − βs − x + β·P_t(x). As is well known, all the eigenvalues of A_G lie in the interval [−r, r], and so if we make sure that Q(x) ≥ 0 for all x ∈ [−r, r], then all eigenvalues of C are nonnegative, and our condition holds. This leaves us with a nontrivial but doable calculus problem whose discussion we omit.

Semidefinite programming. The general problem of semidefinite programming is to optimize a linear function over a set of positive semidefinite n×n matrices defined by a system of linear inequalities. This is a convex set in the space of all real n×n matrices, and in principle it is not difficult to construct a polynomial-time membership oracle for it (see the explanation following Theorem 13.2.1). Then the ellipsoid method can solve the optimization problem in polynomial time; see Grötschel, Lovász, and Schrijver [GLS88]. More practical algorithms are based on interior point methods. Semidefinite programming is an extremely powerful tool in combinatorial optimization and other areas. For example, it provides the only known polynomial-time algorithms for computing the chromatic number of perfect graphs and the best known approximation algorithms for several fundamental NP-hard graph-theoretic problems. Lovász's recent lecture notes [Lov] are a beautiful concise introduction. Here we outline at least one lovely application, concerning the approximation of the maximum cut in a graph, in Exercise 8 below.

The second eigenvalue. The investigation of graph eigenvalues constitutes a well-established part of graph theory; see, e.g., Biggs [Big93] for a nice introduction.
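The Laplacian eigenvalues discussed in this section are straightforward to compute numerically. A minimal numpy sketch (an illustration, not code from the book), checking the classical value μ_2(C_n) = 2 − 2·cos(2π/n) for the second-smallest Laplacian eigenvalue of the n-cycle:

```python
import numpy as np

# Second-smallest eigenvalue of the graph Laplacian L = D - A
# (Fiedler's "algebraic connectivity"), illustrated on the n-cycle C_n,
# whose Laplacian spectrum {2 - 2cos(2*pi*k/n)} is classical.
def laplacian_mu2(adj):
    deg = np.diag(adj.sum(axis=1))
    eigenvalues = np.linalg.eigvalsh(deg - adj)  # sorted ascending
    return eigenvalues[1]

n = 12
adj = np.zeros((n, n))
for v in range(n):                  # edges of the cycle C_n
    adj[v, (v + 1) % n] = adj[(v + 1) % n, v] = 1

mu2 = laplacian_mu2(adj)
assert abs(mu2 - (2 - 2 * np.cos(2 * np.pi / n))) < 1e-9
```

The same routine applies to any adjacency matrix, e.g. for estimating how good an expander a given regular graph is.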
The second eigenvalue of the Laplacian matrix as an important graph parameter was first considered by Fiedler [Fie73] (who called it the algebraic connectivity). Tanner [Tan84] and Alon and Milman [AM85] gave a lower bound for the so-called vertex expansion of a regular graph (a notion similar to edge expansion) in terms of μ_2(G), and a reverse relation was proved by Alon [Alo86a]. There are many useful analogies of graph eigenvalues with the eigenvalues of the Laplace operator Δ on manifolds, whose theory is classical and well developed; this is pursued to a considerable depth in Chung [Chu97]. This point of view prefers the eigenvalues of the Laplacian matrix of a graph, as considered in this section, to the eigenvalues of the adjacency matrix. In fact, for nonregular graphs, a still closer correspondence with the setting of manifolds is obtained with a differently normalized Laplacian matrix L_G: (L_G)_{vv} = 1 for all v ∈ V(G), (L_G)_{uv} = −(deg_G(u)·deg_G(v))^(−1/2) for {u, v} ∈ E(G), and (L_G)_{uv} = 0 otherwise.

Expanders have been used to address many fundamental problems of computer science in areas such as network design, theory of computational complexity, coding theory, on-line computation, and cryptography; see, e.g., [RVW00] for references.

For random graphs, parameters such as edge expansion or vertex expansion are usually not too hard to estimate (the technical difficulty of the arguments depends on the chosen model of a random graph). On the other hand, estimating the second eigenvalue of a random r-regular graph is quite challenging, and a satisfactory answer is known only for r large (and even); see Friedman, Komlós, and Szemerédi [FKS89] or Friedman [Fri91]. Namely, with high probability, a random r-regular graph with r even has λ_2 ≤ 2√(r−1) + O(log r).
Here the number of vertices n is assumed to be sufficiently large in terms of r, and the O(·) notation is with respect to r → ∞. At the same time, for every fixed r ≥ 3 and any r-regular graph on n vertices, λ_2 ≥ 2√(r−1) − o(1), where this time o(·) refers to n → ∞. So random graphs are almost optimal for large r.

For many of the applications of expanders, random graphs are not sufficient, and explicit constructions are required. In fact, explicitly constructed expanders often serve as substitutes for truly random graphs; for example, they allow one to convert some probabilistic algorithms into deterministic ones (derandomization) or to reduce the number of random bits required by a probabilistic algorithm. Explicit construction of expanders was a big challenge, and it has led to excellent research employing surprisingly deep results from classical areas of mathematics (group theory, number theory, harmonic analysis, etc.). In the analysis of such constructions, one usually bounds the second eigenvalue (rather than edge expansion or vertex expansion). After the initial breakthrough by Margulis in 1973 and several other works in this direction (see, e.g., [Mor94] or [RVW00] for references), explicit families of constant-degree expanders matching the quality of random graphs in several parameters (and even superseding them in some respects) were constructed by Lubotzky, Phillips, and Sarnak [LPS88] and independently by Margulis [Mar88]. Later Morgenstern [Mor94] obtained similar results for many more values of the parameters (degree and number of vertices). In particular, these constructions achieve λ_2 ≤ 2√(r−1), which is asymptotically optimal, as was mentioned earlier. For illustration, here is one of the constructions (from [LPS88]).
Let p ≠ q be primes with p ≡ q ≡ 1 (mod 4) and such that p is a quadratic nonresidue modulo q, let i be an integer with i² ≡ −1 (mod q), and let F denote the field of residue classes modulo q. The vertex set V(G) consists of all 2×2 nonsingular matrices over F (with matrices that are scalar multiples of each other identified). Two matrices A, B ∈ V(G) are connected by an edge iff AB^(−1) is a matrix of the form

( a_0 + i·a_1    a_2 + i·a_3 )
( −a_2 + i·a_3   a_0 − i·a_1 ),

where a_0, a_1, a_2, a_3 are integers with a_0² + a_1² + a_2² + a_3² = p, a_0 > 0, a_0 odd, and a_1, a_2, a_3 even. By a theorem of Jacobi, there are exactly p+1 such vectors (a_0, a_1, a_2, a_3), and it follows that the graph is (p+1)-regular with q(q²−1) vertices. A family of constant-degree expanders is obtained by fixing p, say p = 5, and letting q → ∞.

Reingold, Vadhan, and Wigderson [RVW00] discovered an explicit construction of a different type. Expanders are obtained from a constant-size initial graph by iterating certain sophisticated product operations. Their parameters are somewhat inferior to those from [Mar88], [LPS88], [Mor94], but the proof is relatively short, and it uses only elementary linear algebra.

Exercises

1. Show that every real symmetric positive semidefinite n×n matrix A can be written as A = XᵀX for a real n×n matrix X.

2. (Dimension for isometric ℓp-embeddings)
(a) Let V be an n-point set and let N = n(n−1)/2. Analogous to the set L_2 defined in the text, let L_1^(fin) ⊆ R^N be the set of all metrics on V induced by embeddings f: V → ℓ1^k, k = 1, 2, …. Show that L_1^(fin) is the convex hull of line pseudometrics,⁵ i.e., pseudometrics induced by mappings f: V → R.
(b) Prove that any metric from L_1^(fin) can be isometrically embedded into ℓ1^N. That is, any n-point set in some ℓ1^k can be realized in ℓ1^N. (Examples show that one cannot do much better and that dimension Ω(n²) is necessary, in contrast to Euclidean embeddings, where dimension n−1 always suffices.)
(c) Let L_1 ⊆ R^N be the set of all metrics induced by embeddings of V into ℓ1 (the space of infinite sequences with finite ℓ1-norm). Show that L_1 = L_1^(fin), and thus that any n-point subset of ℓ1 can be realized in ℓ1^N.
(d) Extend the considerations in (a)-(c) to ℓp-metrics with arbitrary p ∈ [1, ∞).
See Ball [Bal90] for more on the dimension of isometric ℓp-embeddings.

3. With the notation as in Exercise 2, show that every line pseudometric ν on an n-point set V is a nonnegative linear combination of at most n−1 cut pseudometrics: ν = Σ_{i=1}^{n−1} α_i·τ_i with α_1, …, α_{n−1} ≥ 0, where each τ_i is a cut pseudometric, i.e., a line pseudometric induced by a mapping ψ_i: V → {0, 1}. (Consequently, by Exercise 2(a), every finite metric isometrically embeddable into ℓ1 is a nonnegative linear combination of cut pseudometrics.)

4. (An ℓp-analogue of Proposition 15.5.2) Let p ∈ [1, ∞) be fixed. Using Exercise 2, formulate and prove an appropriate ℓp-analogue of Proposition 15.5.2.

5. (Finite ℓ2-metrics embed isometrically into ℓp)
(a) Let p be fixed. Check that if for all ε > 0, a finite metric space (V, ρ) can be (1+ε)-embedded into some ℓp^k, k = k(ε), then (V, ρ) can be isometrically embedded into ℓp^N, where N = n(n−1)/2. Use Exercise 2.
(b) Prove that every n-point set in ℓ2 can be isometrically embedded into ℓp^N.

6. (The second eigenvalue and edge expansion) Let G be an r-regular graph with n vertices, and let A, B ⊂ V be disjoint nonempty sets with A ∪ B = V. Prove that the number of edges connecting A to B satisfies e(A, B) ≥ μ_2(G)·|A|·|B|/n (use (15.3) with a suitable vector x), and deduce that Φ(G) ≥ ½·μ_2(G).

⁵ A pseudometric ν satisfies all the axioms of a metric except that we may have ν(x, y) = 0 even for two distinct points x and y.

7.
(Expansion and measure concentration) Let us consider the vertex set of a graph G as a metric probability space, with the usual graph metric and with the uniform probability measure P (each vertex has measure 1/n, n = |V(G)|). Suppose that Φ = Φ(G) > 0 and that the maximum degree of G is Δ. Prove the following measure concentration inequality: If A ⊆ V(G) satisfies P[A] ≥ ½, then 1 − P[A_t] ≤ ½·e^(−tΦ/Δ), where A_t denotes the t-neighborhood of A.

8. (The Goemans-Williamson approximation to MAXCUT) Let G = (V, E) be a given graph and let n = |V|. The MAXCUT problem for G is to find the maximum possible number of "crossing" edges for a partition V = A ∪ B of the vertex set into two disjoint subsets, i.e., max_{A⊆V} e(A, V∖A). This is an NP-complete problem. The exercise outlines a geometric randomized algorithm that finds an approximate solution using semidefinite programming.
(a) Check that the MAXCUT problem is equivalent to computing

M_opt = max{ ½·Σ_{{u,v}∈E} (1 − x_u·x_v) : x_v ∈ {−1, 1}, v ∈ V }.

(b) Let

M_relax = max{ ½·Σ_{{u,v}∈E} (1 − ⟨y_u, y_v⟩) : y_v ∈ R^n, ‖y_v‖ = 1, v ∈ V }.

Clearly, M_relax ≥ M_opt. Verify that this relaxed version of the problem is an instance of a semidefinite program, that is, the maximum of a linear function over the intersection of a polytope with the cone of all symmetric positive semidefinite real matrices.
(c) Let (y_v : v ∈ V) be some system of unit vectors in R^n for which M_relax is attained. Let r ∈ R^n be a random unit vector, and set x_v = sgn(⟨y_v, r⟩), v ∈ V. Let M_approx = ½·Σ_{{u,v}∈E} (1 − x_u·x_v) for these x_v. Show that the expectation, with respect to the random choice of r, of M_approx is at least 0.878·M_relax (consider the expected contribution of each edge separately). So we obtain a polynomial-time randomized algorithm producing a solution to MAXCUT whose expected value is at least about 88% of the optimal solution.

Remark. This algorithm is due to Goemans and Williamson [GW95].
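The random-hyperplane rounding of Exercise 8(c) is easy to simulate. The following Python sketch (an illustration, not code from the book) uses the 5-cycle, for which the optimal vectors of the semidefinite relaxation are known to be y_k = (cos(4πk/5), sin(4πk/5)); the expected cut value of the rounding can be computed exactly from the per-edge cutting probability arccos(⟨y_u, y_v⟩)/π.

```python
import numpy as np

# Goemans-Williamson hyperplane rounding on the 5-cycle, using the known
# optimal SDP vectors y_k = (cos(4*pi*k/5), sin(4*pi*k/5)).
rng = np.random.default_rng(0)
n = 5
edges = [(k, (k + 1) % n) for k in range(n)]
y = np.array([[np.cos(4 * np.pi * k / n), np.sin(4 * np.pi * k / n)]
              for k in range(n)])

# SDP value M_relax = (1/2) * sum over edges of (1 - <y_u, y_v>)
m_relax = 0.5 * sum(1 - y[u] @ y[v] for u, v in edges)

# An edge {u,v} is cut with probability angle(y_u, y_v)/pi, so the
# expectation of the rounded cut can be computed exactly.
expected = sum(np.arccos(np.clip(y[u] @ y[v], -1, 1)) / np.pi
               for u, v in edges)
assert expected >= 0.878 * m_relax

# One concrete rounding: signs of projections onto a random direction r.
r = rng.standard_normal(2)
x = np.sign(y @ r)
cut = sum(1 for u, v in edges if x[u] != x[v])
```

Here m_relax ≈ 4.52 while MAXCUT(C_5) = 4, and the expected rounded value is exactly 4, comfortably above 0.878·m_relax.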
Later, Håstad [Has97] proved that no polynomial-time algorithm can produce a better approximation in the worst case than about 94% unless P=NP (also see Feige and Schechtman [FS01] for nice mathematics showing that the Goemans-Williamson value 0.878… is, in a certain sense, optimal for approaches based on semidefinite programming).

15.6 Upper Bounds for ℓ∞-Embeddings

In this section we explain a technique for producing low-distortion embeddings of finite metric spaces. Although we are mainly interested in Euclidean embeddings, here we begin with embeddings into the space ℓ∞, which are somewhat simpler. We derive almost tight upper bounds.

Let (V, ρ) be an arbitrary metric space. To specify an embedding f: (V, ρ) → ℓ∞^d means to define d functions f_1, …, f_d: V → R, the coordinates of the embedded points. If we aim at a D-embedding, without loss of generality we may require it to be nonexpanding, which means that |f_i(u) − f_i(v)| ≤ ρ(u, v) for all u, v ∈ V and all i = 1, 2, …, d. The D-embedding condition then means that for every pair {u, v} of points of V, there is a coordinate i = i(u, v) that "takes care" of the pair: |f_i(u) − f_i(v)| ≥ (1/D)·ρ(u, v).

One of the key tricks in constructions of such embeddings is to take each f_i as the distance to some suitable subset A_i ⊆ V; that is, f_i(u) = ρ(u, A_i) = min_{a∈A_i} ρ(u, a). By the triangle inequality, we have |ρ(u, A_i) − ρ(v, A_i)| ≤ ρ(u, v) for any u, v ∈ V, and so such an embedding is automatically nonexpanding. We "only" have to choose a suitable collection of the A_i that take care of all pairs {u, v}.

We begin with a simple case: an old observation showing that every finite metric space embeds isometrically into ℓ∞.

15.6.1 Proposition (Fréchet's embedding). Let (V, ρ) be an arbitrary n-point metric space. Then there is an isometric embedding f: V → ℓ∞^n.

Proof.
Here the coordinates in ℓ∞^n are indexed by the points of V, and the vth coordinate is given by f_v(u) = ρ(u, v). In the notation above, we thus put A_v = {v}. As we have seen, the embedding is nonexpanding by the triangle inequality. On the other hand, the coordinate v takes care of the pairs {u, v} for all u ∈ V: ‖f(u) − f(v)‖_∞ ≥ |f_v(u) − f_v(v)| = ρ(u, v). □

The dimension of the image in this embedding can be reduced a little; for example, we can choose some v_0 ∈ V and remove the coordinate corresponding to v_0, and the above proof still works. To reduce the dimension significantly, though, we have to pay the price of distortion. For example, from Corollary 15.3.4 we know that for distortions below 3, the dimension must generally remain at least a fixed fraction of n. We prove an upper bound on the dimension needed for embeddings with a given distortion, which nearly matches the lower bounds in Corollary 15.3.4:

15.6.2 Theorem. Let D = 2q−1 ≥ 3 be an odd integer and let (V, ρ) be an n-point metric space. Then there is a D-embedding of V into ℓ∞^d with d = O(q·n^(1/q)·ln n).

Proof. The basic scheme of the construction is as explained above: Each coordinate is given by the distance to a suitable subset of V. This time the subsets are chosen at random with suitable densities.

Let us consider two points u, v ∈ V. What are the sets A such that |ρ(u, A) − ρ(v, A)| ≥ Δ, for a given real Δ > 0? For some r ≥ 0, they must intersect the closed r-ball around u and avoid the open (r+Δ)-ball around v; schematically:

[Figure: a set A meeting the closed r-ball around u (not empty) and avoiding the open (r+Δ)-ball around v (empty)]

or conversely (with the roles of u and v interchanged).
In the favorable situation where the closed r-ball around u does not contain many fewer points of V than the open (r+Δ)-ball around v, a random A with a suitable density has a reasonable chance to work. Generally we have no control over the distribution of the points around u and around v, but by considering several suitable balls simultaneously, we can find a good pair of balls. We also do not know the right density needed for the sample to work, but since we have many coordinates, we can take samples of essentially all possible densities.

Now we begin with the formal proof. We define an auxiliary parameter p = n^(−1/q), and for j = 1, 2, …, q, we introduce the probabilities p_j = min(½, p^j). Further, let m = ⌈24·n^(1/q)·ln n⌉. For i = 1, 2, …, m and j = 1, 2, …, q, we choose a random subset A_ij ⊆ V. The sets (and the corresponding coordinates in ℓ∞^(mq)) now have double indices, and the index j influences the "density" of A_ij. Namely, each point v ∈ V has probability p_j of being included into A_ij, and these events are mutually independent. The choices of the A_ij, too, are independent for distinct pairs of indices i and j.

[Figure: a schematic illustration of the sampling, showing random subsets A_1, A_2, A_3 of decreasing density]

We divide the coordinates in ℓ∞^(mq) into q blocks of m coordinates. For v ∈ V, we let f(v)_ij = ρ(v, A_ij), i = 1, 2, …, m, j = 1, 2, …, q. We claim that with positive probability, this f: V → ℓ∞^(mq) is a D-embedding. We have already noted that f is nonexpanding, and the following lemma serves for showing that, with positive probability, every pair {u, v} is taken care of.

15.6.3 Lemma. Let u, v be two distinct points of V. Then there exists an index j ∈ {1, 2, …, q} such that if the set A_ij is chosen randomly as above, then the probability of the event

|ρ(u, A_ij) − ρ(v, A_ij)| ≥ (1/D)·ρ(u, v)     (15.6)

is at least p/12.

First, assuming this lemma, we finish the proof of the theorem. To show that f is a D-embedding, it suffices to show that with nonzero probability, for every pair {u, v} there are i, j such that the event (15.6) in the lemma occurs for the set A_ij. Consider a fixed pair {u, v} and select the appropriate index j as in the lemma. The probability that the event (15.6) does not occur for any of the m indices i is at most (1 − p/12)^m ≤ e^(−pm/12) ≤ n^(−2). Since there are fewer than n² pairs {u, v}, the probability that we fail to choose a good set for some pair is smaller than 1. □

Proof of Lemma 15.6.3. Set Δ = (1/D)·ρ(u, v). Let B_0 = {u}, let B_1 be the (closed) Δ-ball around v, let B_2 be the (closed) 2Δ-ball around u, …, finishing with B_q, which is a qΔ-ball around u (if q is even) or around v (if q is odd). The parameters are chosen so that the radii of B_(q−1) and B_q add up to ρ(u, v); that is, the last two balls just touch (recall that D = 2q−1).

Let n_t denote the number of points of V in B_t. We want to select an index j such that

n_t ≥ n^((j−1)/q)  and  n_(t+1) ≤ n^(j/q).     (15.7)

To this end, we divide the interval [1, n] into q intervals I_1, I_2, …, I_q, where I_j = [n^((j−1)/q), n^(j/q)]. If the sequence (n_1, n_2, …, n_q) is not monotone increasing, i.e., if n_(t+1) < n_t for some t, then (15.7) holds for the j such that I_j contains n_t. On the other hand, if 1 = n_0 ≤ n_1 ≤ ⋯ ≤ n_q ≤ n, then by the pigeonhole principle, there exist t and j such that the interval I_j contains both n_t and n_(t+1). Then (15.7) holds for this j as well. In this way, we have selected the index j whose existence is claimed in the lemma.

We will show that with probability at least p/12, the set A_ij, randomly selected with point probability p_j, includes a point of B_t (event E_1) and is disjoint from the interior of B_(t+1) (event E_2); such an A_ij satisfies (15.6).
Since B_t and the interior of B_(t+1) are disjoint, the events E_1 and E_2 are independent. We calculate

Prob[E_1] = 1 − Prob[A_ij ∩ B_t = ∅] = 1 − (1 − p_j)^(n_t) ≥ 1 − e^(−p_j·n_t).

Using (15.7), we have p_j·n_t ≥ p_j·n^((j−1)/q) = p_j·p^(−j+1) = min(½, p^j)·p^(−j+1) ≥ min(½, p). For p ≥ ½ we get Prob[E_1] ≥ 1 − e^(−1/2) ≥ 1/3 ≥ p/3, while for p < ½ we have Prob[E_1] ≥ 1 − e^(−p), and a bit of calculus verifies that the last expression is at least p/3 for all p ∈ [0, ½). Further, Prob[E_2] ≥ (1 − p_j)^(n_(t+1)) ≥ (1 − p_j)^(n^(j/q)) ≥ (1 − p_j)^(1/p_j) ≥ ¼ (since p_j ≤ ½). Thus Prob[E_1 ∩ E_2] ≥ p/12, which proves the lemma. □

Bibliography and remarks. The embedding method discussed in this section was found by Bourgain [Bou85], who used it to prove Theorem 15.7.1 explained in the subsequent section. Theorem 15.6.2 is from [Mat96b].

Exercises

1. (a) Find an isometric embedding of ℓ1^d into ℓ∞^(2^(d−1)).
(b) Explain how an embedding as in (a) can be used to compute the diameter of an n-point set in ℓ1^d in time O(d·2^d·n).

2. Show that if the unit ball K of some finite-dimensional normed space is a convex polytope with 2m facets, then that normed space embeds isometrically into ℓ∞^m. (Using results on approximation of convex bodies by polytopes, this yields useful approximate embeddings of arbitrary norms into ℓ∞^k.)

3. Deduce from Theorem 15.6.2 that every n-point metric space can be D-embedded into ℓ∞^k with D = O(log n) and k = O(log² n).

15.7 Upper Bounds for Euclidean Embeddings

By a method similar to the one shown in the previous section, one can also prove a tight upper bound for Euclidean embeddings; the method was actually invented for this problem.

15.7.1 Theorem (Bourgain's embedding into ℓ2). Every n-point metric space (V, ρ) can be embedded into a Euclidean space with distortion at most O(log n).

The overall strategy of the embedding is similar to the embedding into ℓ∞ in the proof of Theorem 15.6.2. The coordinates in ℓ2 are given by distances to suitable subsets.
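Distance-to-subset coordinates are easy to experiment with. The following numpy sketch (an illustration, not code from the book) checks Fréchet's embedding (Proposition 15.6.1) on a small random shortest-path metric: the vth coordinate of f(u) is ρ(u, v), and ℓ∞ distances between embedded points reproduce ρ exactly.

```python
import itertools
import numpy as np

# Frechet's embedding: f(u)_v = rho(u, v), i.e. A_v = {v}.
# The test metric is the shortest-path metric of a random weighted
# complete graph (weights in [1, 2], so triangle inequalities hold
# after the Floyd-Warshall pass below).
rng = np.random.default_rng(1)
n = 7
w = rng.uniform(1, 2, size=(n, n))
rho = np.minimum(w, w.T)            # symmetrize
np.fill_diagonal(rho, 0)
for k in range(n):                  # Floyd-Warshall shortest paths
    rho = np.minimum(rho, rho[:, [k]] + rho[[k], :])

f = rho.copy()                      # row u is the embedded point f(u)
for u, v in itertools.combinations(range(n), 2):
    dist_inf = np.max(np.abs(f[u] - f[v]))    # l_infinity distance
    assert abs(dist_inf - rho[u, v]) < 1e-12  # isometric, as claimed
```

The nonexpanding direction is the triangle inequality; the coordinate indexed by v itself witnesses equality, exactly as in the proof above.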
The situation is slightly more complicated than before: For embedding into ℓ∞, it was enough to exhibit one coordinate "taking care" of each pair, whereas for the Euclidean embedding, many of the coordinates will contribute significantly to every pair. Here is the appropriate analogue of Lemma 15.6.3.

15.7.2 Lemma. Let u, v ∈ V be two distinct points. Then there exist real numbers Δ_1, Δ_2, …, Δ_q ≥ 0 with Δ_1 + ⋯ + Δ_q = ½·ρ(u, v), where q = ⌊log₂ n⌋ + 1, and such that the following holds for each j = 1, 2, …, q: If A_j ⊆ V is a randomly chosen subset of V, with each point of V included in A_j independently with probability 2^(−j), then the probability P_j of the event

|ρ(u, A_j) − ρ(v, A_j)| ≥ Δ_j

satisfies P_j ≥ 1/24.

Proof. We fix u and v. We define r_q = ½·ρ(u, v), and for j = 0, 1, …, q−1, we let r_j be the smallest radius such that both |B(u, r_j)| ≥ 2^j and |B(v, r_j)| ≥ 2^j, where, as usual, B(x, r) = {y ∈ V: ρ(x, y) ≤ r} (putting r_j = r_q if no radius smaller than r_q works). We are going to show that the claim of the lemma holds with Δ_j = r_j − r_(j−1).

Fix j ∈ {1, 2, …, q} and let A_j ⊆ V be a random sample with point probability 2^(−j). By the definition of r_j, |B°(u, r_j)| < 2^j or |B°(v, r_j)| < 2^j, where B°(x, r) = {y ∈ V: ρ(x, y) < r} denotes the open ball (this holds for j = q, too, because |V| < 2^q). We choose the notation u, v so that |B°(u, r_j)| < 2^j. A random set A_j is good if it intersects B(v, r_(j−1)) and misses B°(u, r_j); such an A_j satisfies ρ(v, A_j) ≤ r_(j−1) and ρ(u, A_j) ≥ r_j, and hence the event above occurs. The former set has cardinality at least 2^(j−1) and the latter at most 2^j. The calculation of the probability that A_j has these properties is identical to the calculation in the proof of Lemma 15.6.3 with p = ½. □

In the subsequent proof of Theorem 15.7.1 we will construct the embedding in a slightly roundabout way, which sheds some light on what is really going on. Define a line pseudometric on V to be any pseudometric ν induced by a mapping φ: V → R, that is, given by ν(u, v) = |φ(u) − φ(v)|. For each A ⊆ V, let ν_A be the line pseudometric corresponding to the mapping v ↦ ρ(v, A).
As we have noted, each ν_A is dominated by ρ, i.e., ν_A ≤ ρ (the inequality between two (pseudo)metrics on the same point set means inequality for each pair of points). The following easy lemma shows that if the metric ρ on V can be approximated by a convex combination of line pseudometrics, each of them dominated by ρ, then a good embedding of (V, ρ) into ℓ2 exists.

15.7.3 Lemma. Let (V, ρ) be a finite metric space, and let ν_1, …, ν_N be line pseudometrics on V with ν_i ≤ ρ for all i and such that

Σ_{i=1}^N a_i·ν_i ≥ (1/D)·ρ

for some nonnegative a_1, …, a_N summing up to 1. Then (V, ρ) can be D-embedded into ℓ2^N.

Proof. Let φ_i: V → R be a mapping inducing the line pseudometric ν_i. We define the embedding f: V → R^N by f(u)_i = √(a_i)·φ_i(u). Then, on the one hand,

‖f(u) − f(v)‖² = Σ_{i=1}^N a_i·ν_i(u, v)² ≤ ρ(u, v)²,

because all ν_i are dominated by ρ and Σ_i a_i = 1. On the other hand,

‖f(u) − f(v)‖ = (Σ_{i=1}^N a_i·ν_i(u, v)²)^(1/2) ≥ Σ_{i=1}^N a_i·ν_i(u, v)

by Cauchy-Schwarz, and the latter expression is at least (1/D)·ρ(u, v) by the assumption. □

Proof of Theorem 15.7.1. As was remarked above, each of the line pseudometrics ν_A corresponding to the mapping v ↦ ρ(v, A) is dominated by ρ. It remains to observe that Lemma 15.7.2 provides a convex combination of these line pseudometrics that is bounded from below by (1/(48q))·ρ. The coefficient of each ν_A in this convex combination is given by the probability of A appearing as one of the sets A_j in Lemma 15.7.2. More precisely, write π_j(A) for the probability that a random subset of V, with points picked independently with probability 2^(−j), equals A. Then the claim of Lemma 15.7.2 implies, for every pair {u, v},

Σ_{A⊆V} π_j(A)·ν_A(u, v) ≥ (1/24)·Δ_j.

Summing over j = 1, 2, …, q, we have

Σ_{A⊆V} (Σ_{j=1}^q π_j(A))·ν_A(u, v) ≥ (1/24)·Σ_{j=1}^q Δ_j = (1/48)·ρ(u, v).

Dividing by q and using Σ_{A⊆V} π_j(A) = 1, we arrive at

Σ_{A⊆V} α_A·ν_A(u, v) ≥ (1/(48q))·ρ(u, v)

with α_A = (1/q)·Σ_{j=1}^q π_j(A).
Lemma 15.7.3 now gives embeddability into ℓ2 with distortion at most 48q. Theorem 15.7.1 is proved. □

Remarks. Almost the same proof with a slight modification of Lemma 15.7.3 shows that for each p ∈ [1, ∞), every n-point metric space can be embedded into ℓp with distortion O(log n); see Exercise 1. The proof as stated produces an embedding into a space of dimension 2^n, since there are 2^n subsets A ⊆ V, each of them yielding one coordinate. To reduce the dimension, one can argue that not all the sets A are needed: By suitable Chernoff-type estimates, it follows that it is sufficient to choose O(log n) random sets with point probability 2^(−j) for each j, i.e., O(log² n) sets altogether (Exercise 2). Of course, for Euclidean embeddings, an even better dimension O(log n) is obtained using the Johnson-Lindenstrauss flattening lemma, but for other ℓp, no flattening lemma is available.

An algorithmic application: approximating the sparsest cut. We know that every n-point metric space can be O(log n)-embedded into ℓ1^d with d = O(log² n). By inspecting the proof, it is not difficult to give a randomized algorithm that computes such an embedding in polynomial expected time. We show a neat algorithmic application to a graph-theoretic problem.

Let G = (V, E) be a graph. A cut in G is a partition of V into two nonempty subsets A and B = V ∖ A. The density of the cut (A, B) is e(A, B)/(|A|·|B|), where e(A, B) is the number of edges connecting A and B. Given G, we would like to find a cut of the smallest possible density. This problem is NP-hard, and here we discuss an efficient algorithm for finding an approximate answer: a cut whose density is at most O(log n) times larger than the density of the sparsest cut, where n = |V| (this is the best known approximation guarantee for any polynomial-time algorithm). Note that this also allows us to approximate the edge expansion of G (discussed in Section 15.5) within a multiplicative factor of O(log n).
First we reformulate the problem equivalently using cut pseudometrics. A cut pseudometric on V is a pseudometric τ corresponding to some cut (A, B), with τ(u, v) = τ(v, u) = 1 for u ∈ A and v ∈ B, and τ(u, v) = 0 for u, v ∈ A or u, v ∈ B. In other words, a cut pseudometric is a line pseudometric induced by a mapping ψ: V → {0, 1} (excluding the trivial case where all of V is mapped to the same point). Letting F be the set of all pairs of distinct points of V, the density of the cut (A, B) can be written as τ(E)/τ(F), where τ is the corresponding cut pseudometric and τ(E) = Σ_{{u,v}∈E} τ(u, v). Therefore, we would like to minimize the ratio R_1(τ) = τ(E)/τ(F) over all cut pseudometrics τ.

In the first step of the algorithm we relax the problem, and we find a pseudometric, not necessarily a cut one, minimizing the ratio R_1(ρ) = ρ(E)/ρ(F). This can be done efficiently by linear programming. The minimized function looks nonlinear, but we can get around this by a simple trick: We postulate the additional condition ρ(F) = 1 and minimize the linear function ρ(E). The variables in the linear program are the n(n−1)/2 numbers ρ(u, v) for {u, v} ∈ F, and the constraints are ρ(u, v) ≥ 0 (for all u, v), ρ(F) = 1, and those expressing the triangle inequalities for all triples u, v, w ∈ V.

Having computed a ρ_0 minimizing R_1(ρ), we find a D-embedding f of (V, ρ_0) into some ℓ1^d with D = O(log n). If σ_0 is the pseudometric induced on V by this f, we clearly have R_1(σ_0) ≤ D·R_1(ρ_0). Now since σ_0 is an ℓ1-pseudometric, it can be expressed as a nonnegative linear combination of suitable cut pseudometrics (Exercise 15.5.3): σ_0 = Σ_{i=1}^N α_i·τ_i, with α_1, …, α_N ≥ 0 and N ≤ d(n−1). It is not difficult to check that R_1(σ_0) ≥ min{R_1(τ_i): i = 1, 2, …, N} (Exercise 3).
Therefore, at least one of the τ_i is a cut pseudometric satisfying R_1(τ_i) ≤ R_1(σ_0) ≤ D·R_1(ρ_0) ≤ D·R_1(τ_0), where τ_0 is a cut pseudometric with the smallest possible R_1(τ_0). Therefore, the cut corresponding to this τ_i has density at most O(log n) times larger than the sparsest possible cut.

Bibliography and remarks. Theorem 15.7.1 is due to Bourgain [Bou85]. The algorithmic application to approximating the sparsest cut uses the idea of an algorithm for a somewhat more complicated problem (multicommodity flow) found by Linial et al. [LLR95] and independently by Aumann and Rabani [AR98].

We will briefly discuss further results proved by variations of Bourgain's embedding technique. Many of them have been obtained in the study of approximation algorithms and imply strong algorithmic results.

Tree metrics. Let 𝒢 be a class of graphs and consider a graph G ∈ 𝒢. Each positive weight function w: E(G) → (0, ∞) defines a metric on V(G), namely the shortest-path metric, where the length of a path is the sum of the weights of its edges. All subspaces of the resulting metric spaces are referred to as 𝒢-metrics. A tree metric is a 𝒯-metric for 𝒯 the class of all trees. Tree metrics generally behave much better than arbitrary metrics, but for embedding problems they are far from trivial. Bourgain [Bou86] proved, using martingales, a surprising lower bound for embedding tree metrics into ℓ2: A tree metric on n points requires distortion Ω(√(log log n)) in the worst case. His example is the complete binary tree with unit edge lengths, and for that example, he also constructed an embedding with O(√(log log n)) distortion.
For embedding the complete binary tree into ℓp, p > 1, the distortion is Θ((log log n)^(min(1/2, 1/p))), with the constant of proportionality depending on p and tending to 0 as p → 1. (For Banach-space specialists, we also remark that all tree metrics can be embedded into a given Banach space Z with bounded distortion if and only if Z is not superreflexive.) In Matoušek [Mat99b] it was shown that the complete binary tree is essentially the worst example; that is, every n-point tree metric can be embedded into ℓp with distortion O((log log n)^(min(1/2, 1/p))). An alternative, elementary proof was given for the matching lower bound (see Exercise 5 for a weaker version). Another proof of the lower bound, very short but applying only to embeddings into ℓ2, was found by Linial and Saks [LS02] (Exercise 6).

In the notes to Section 15.3 we mentioned that general n-point metric spaces require worst-case distortion Ω(n^(1/⌊(d+1)/2⌋)) for embedding into ℓ2^d, d ≥ 2 fixed. Gupta [Gup00] proved that for n-point tree metrics, O(n^(1/(d−1)))-embeddings into ℓ2^d are possible. The best known lower bound is Ω(n^(1/d)), from a straightforward volume argument. Babilon, Matoušek, Maxová, and Valtr [BMMV02] showed that every n-vertex tree with unit-length edges can be O(√n)-embedded into ℓ2^2.

Planar-graph metrics and metrics with excluded minor. A planar-graph metric is a 𝒫-metric with 𝒫 standing for the class of all planar graphs (the shorter but potentially confusing term planar metric is used in the literature). Rao [Rao99] proved that every n-point planar-graph metric can be embedded into ℓ2 with distortion only O(√(log n)), as opposed to log n for general metrics. More generally, the same method shows that whenever H is a fixed graph and Excl(H) is the class of all graphs not containing H as a minor, then Excl(H)-metrics can be O(√(log n))-embedded into ℓ2. For a matching lower bound, valid already for the class Excl(K_4) (series-parallel graphs), and consequently for planar-graph metrics, see Exercise 15.4.2. We outline Rao's method of embedding.
We begin with graphs where all edges have unit weight (this is the setting in [Rao99], but our presentation differs in some details), and then we indicate how graphs with arbitrary edge weights can be treated. The main new ingredient in Rao's method, compared to Bourgain's approach, is a result of Klein, Plotkin, and Rao [KPR93] about a decomposition of graphs with an excluded minor into pieces of low diameter.

Here is the decomposition procedure. Let G be a graph, let ρ be the corresponding graph metric (with all edges having unit length), and let Δ be an integer parameter. We fix a vertex v₀ ∈ V(G) arbitrarily, we choose an integer r ∈ {0, 1, ..., Δ−1} uniformly at random, and we let B₁ = {v ∈ V(G): ρ(v, v₀) ≡ r (mod Δ)}.

Chapter 15: Embedding Finite Metric Spaces into Normed Spaces

By deleting the vertices of B₁ from G, the remaining vertices are partitioned into connected components; this is the first level of the decomposition. For each of these components of G \ B₁, we repeat the same procedure; Δ remains unchanged and r is chosen anew at random (but we can use the same r for all the components). Let B₂ be the set of vertices deleted from G in this second round, taken together for all the components. The second level of the decomposition consists of the connected components of G \ (B₁ ∪ B₂), and decompositions of levels 3, 4, ... can be produced similarly. A schematic drawing (omitted here) illustrates the two-level decomposition, with the graph marked as a gray area and the vertices of B₁ and B₂ indicated by solid and dashed arcs, respectively.

For planar graphs, it suffices to use a 3-level decomposition, and for every fixed graph H, there is a suitable k = k(H) such that a k-level decomposition is appropriate for all graphs G ∈ Excl(H). Let B = B₁ ∪ ⋯ ∪ B_k; this can be viewed as the boundary of the components in the k-level decomposition.
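The decomposition procedure just described is easy to simulate. Here is a minimal Python sketch (an illustration, not part of the book; the adjacency-dict representation and all function names are my own choices, and unit edge lengths are assumed, so BFS computes ρ):

```python
import random
from collections import deque

def components(adj, alive):
    """Connected components of the subgraph induced by the vertex set `alive`."""
    seen, comps = set(), []
    for s in alive:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def kpr_decompose(adj, delta, levels):
    """One run of the randomized multilevel decomposition: at each level,
    delete from every current component the vertices whose BFS distance
    from a fixed root is congruent to a random r modulo delta.
    Returns (B, comps): the deleted boundary and the final components.
    Assumes `adj` describes a connected graph with unit edge lengths."""
    B, comps = set(), [set(adj)]
    for _ in range(levels):
        new_comps = []
        for comp in comps:
            v0 = min(comp)                      # arbitrary fixed root
            r = random.randrange(delta)
            dist = {v0: 0}                      # BFS inside the component
            queue = deque([v0])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v in comp and v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            cut = {v for v in comp if dist[v] % delta == r}
            B |= cut
            new_comps.extend(components(adj, comp - cut))
        comps = new_comps
    return B, comps
```

On a path, for instance, one level with parameter Δ deletes every Δ-th vertex of a random residue class, leaving pieces of at most Δ−1 vertices.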
Here are the key properties of the decomposition:

(i) For each vertex v ∈ V(G), we have ρ(v, B) ≥ c₁Δ with probability at least c₂, for suitable constants c₁, c₂ > 0. The probability is with respect to the random choices of the parameter r at each level of the decomposition. (This is not hard to see; for example, in the first level of the decomposition, for every fixed v, ρ(v, v₀) is some fixed number, and it has a good chance of being at least c₁Δ away, modulo Δ, from a random r.)

(ii) Each component in the resulting decomposition has diameter at most O(Δ). (This is not so easy to prove, and it is where one needs k = k(H) sufficiently large. For H = K₃,₃, which includes the case of planar graphs, the proof is a relatively simple case analysis.)

Next, we describe the embedding of V(G) into ℓ₂ in several steps. First we consider Δ and the decomposition as above fixed, and we let C₁, ..., C_m be the components of G \ B. For all the Cᵢ, we choose random signs σ(Cᵢ) ∈ {−1, +1} uniformly and independently. For a vertex x ∈ V(G), we define σ(x) = 0 if x ∈ B and σ(x) = σ(Cᵢ) if x ∈ V(Cᵢ). Then we define the mapping φ_B,σ: V(G) → R by

φ_B,σ(x) = σ(x)·ρ(x, B)

(the distance of x to the boundary, signed by the component's sign). This φ_B,σ induces a line pseudometric ν_B,σ, and it is easy to see that ν_B,σ is dominated by ρ.

Let C be a constant such that all the Cᵢ have diameter at most CΔ, and let x, y ∈ V(G) be such that CΔ ≤ ρ(x, y) ≤ 2CΔ. Such x and y certainly lie in distinct components, and σ(x) ≠ σ(y) with probability at least 1/2. With probability at least c₂, we have ρ(x, B) ≥ c₁Δ, and so with a fixed positive probability, ν_B,σ places x and y at distance at least c₁Δ.

Now, we still keep Δ fixed and consider ν_B,σ for all possible B and σ. Letting α_B,σ be the probability that a particular pair (B, σ) results from the decomposition procedure, we have

Σ_B,σ α_B,σ·ν_B,σ(x, y) = Ω(ρ(x, y))

whenever CΔ ≤ ρ(x, y) ≤ 2CΔ.
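The coordinate map φ_B,σ and the induced line pseudometric ν_B,σ can be computed with one multi-source BFS. A small illustrative Python sketch (not from the book; the names are invented and unit edge lengths are assumed):

```python
import random
from collections import deque

def signed_boundary_coordinate(adj, B, comps):
    """Compute φ(x) = σ(x)·ρ(x, B): the BFS distance to the deleted
    boundary B, multiplied by a random ±1 sign chosen per component
    (σ(x) = 0 for x in B).  Then ν(x, y) = |φ(x) − φ(y)| is a line
    pseudometric dominated by the graph metric ρ."""
    dist = {b: 0 for b in B}
    queue = deque(B)
    while queue:                       # multi-source BFS computes ρ(·, B)
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    sign = {}
    for comp in comps:
        s = random.choice((-1, 1))     # the random sign σ(C_i)
        for v in comp:
            sign[v] = s
    return {v: sign.get(v, 0) * dist[v] for v in adj}
```

Domination by ρ is visible in the sketch: along any edge, the signed distance to B changes by at most the edge length.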
As in the proof of Lemma 15.7.3, this yields a 1-Lipschitz embedding f_Δ: V(G) → ℓ₂^N (for some N) that shortens distances for pairs x, y as above by at most a constant factor. (It is not really necessary to use all the possible pairs (B, σ) in the embedding; it is easy to show that const·log n independent random B and σ will do.)

To construct the final embedding f: V(G) → ℓ₂, we let f(v) be the concatenation of the vectors f_Δ(v) for Δ ∈ {2^j: 1 ≤ 2^j ≤ diam(G)}. No distance is expanded by more than O(√(log diam(G))) = O(√(log n)), and the contraction is at most by a constant factor, and so we have an embedding into ℓ₂ with distortion O(√(log n)).

Why do we get a better bound than for Bourgain's embedding? In both cases we have about log n groups of coordinates in the embedding. In Rao's embedding we know that for every pair (x, y), one of the groups contributes at least a fixed fraction of ρ(x, y) (and no group contributes more than ρ(x, y)). Thus, the sum of squares of the contributions is between a fixed fraction of ρ(x, y)² and ρ(x, y)²·log n. In Bourgain's embedding (with a comparable scaling) no group contributes more than ρ(x, y), and the sum of the contributions of all groups is at least a fixed fraction of ρ(x, y). But since we do not know how the contributions are distributed among the groups, we can conclude only that the sum of squares of the contributions is between ρ(x, y)²/log n and ρ(x, y)²·log n.

It remains to sketch the modifications of Rao's embedding for a graph G with arbitrary nonnegative weights on the edges. For the unweighted case, we defined B₁ as the set of vertices lying exactly at the given distances from v₀. In the weighted case, there need not be vertices exactly at these distances, but we can add artificial vertices by subdividing the appropriate edges; this is a minor technical issue.
A more serious problem is that the distances ρ(x, y) can be in a very wide range, not just from 1 to n. We let Δ run through all the relevant powers of 2 (that is, those with CΔ ≤ ρ(x, y) ≤ 2CΔ for some x ≠ y), but for producing the decomposition for a particular Δ, we use a modified graph G_Δ obtained from G by contracting all edges shorter than Δ/n. In this way, we can have many more than log n values of Δ, but only O(log n) of them are relevant for each pair (x, y), and the analysis works as before.

Gupta, Newman, Rabinovich, and Sinclair [GNRS99] proved that any Excl(K₄)-metric, as well as any Excl(K₂,₃)-metric, can be O(1)-embedded into ℓ₁, and they conjectured that for any H, Excl(H)-metrics might be O(1)-embeddable into ℓ₁ (the constant depending on H).

Volume-respecting embeddings. Feige [Fei00] introduced an interesting strengthening of the notion of the distortion of an embedding, concerning embeddings into Euclidean spaces. Let f: (V, ρ) → ℓ₂ be an embedding that for simplicity we require to be 1-Lipschitz (nonexpanding). The usual distortion of f is determined by looking at pairs of points, while Feige's notion takes into account all k-tuples for some k ≥ 2. For example, if V has 3 points, every two at distance 1, then an embedding onto the vertices of a small triangle in the plane and an embedding onto 3 nearly collinear points can have about the same distortion (the drawing is omitted here). But while the former embedding is good in Feige's sense for k = 3, the latter is completely unsatisfactory. For a k-point set P ⊂ ℓ₂, define Evol(P) as the (k−1)-dimensional volume of the simplex spanned by P (so Evol(P) = 0 if P is affinely dependent). For a k-point metric space (S, ρ), the volume Vol(S) is defined as sup_f Evol(f(S)), where the supremum is over all 1-Lipschitz maps f: S → ℓ₂. An embedding f: (V, ρ) → ℓ₂ is (k, D) volume-respecting if for every k-point subset S ⊆ V, we have D·Evol(f(S))^(1/(k−1)) ≥ Vol(S)^(1/(k−1)).
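For intuition, Evol(P) is computable from inner products alone: the (k−1)-dimensional simplex volume equals √(det G)/(k−1)!, where G is the Gram matrix of the difference vectors pᵢ − p₀. A small illustrative Python sketch (not from the book; the helper names and the numerically naive determinant are my own choices):

```python
from math import sqrt, factorial

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, sign, prod = len(m), 1.0, 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda i: abs(m[i][c]))
        if abs(m[piv][c]) < 1e-12:
            return 0.0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        prod *= m[c][c]
        for i in range(c + 1, n):
            f = m[i][c] / m[c][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return sign * prod

def evol(points):
    """Evol(P): the (k−1)-dimensional volume of the simplex spanned by
    the k points of P, via vol = sqrt(det G)/(k−1)! for the Gram matrix
    G[i][j] = <p_i − p_0, p_j − p_0> (0 for affinely dependent P)."""
    p0, rest = points[0], points[1:]
    vecs = [[x - y for x, y in zip(p, p0)] for p in rest]
    g = [[sum(a * b for a, b in zip(u, v)) for v in vecs] for u in vecs]
    return sqrt(max(det(g), 0.0)) / factorial(len(points) - 1)
```

For instance, a unit right triangle gives area 1/2, while collinear points give 0, matching the definition of Evol.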
For D small, this means that the image of any k-tuple spans nearly as large a volume as it possibly can for a 1-Lipschitz map. (Note, for example, that an isometric embedding of a path into ℓ₂ is not volume-respecting.)

Feige showed that Vol(S) can be approximated quite well by an intrinsic parameter of the metric space (not referring to embeddings), namely, by the tree volume Tvol(S), which equals the product of the edge lengths in a minimum spanning tree on S (with respect to the metric on S). Namely, Vol(S) ≤ Tvol(S)/(k−1)! ≤ 2^((k−2)/2)·Vol(S). He proved that for any n-point metric space and all k ≥ 2, the embedding as in the proof of Theorem 15.7.1 is (k, O(log n + √(k·log n·log k))) volume-respecting (the result in the conference version of his paper is slightly weaker).

The notion of volume-respecting embeddings currently still looks somewhat mysterious. In an attempt to convey some feeling about it, we outline Feige's application and indicate the use of the volume-respecting condition in it. He considered the problem of approximating the bandwidth of a given n-vertex graph G. The bandwidth is the minimum, over all bijective maps φ: V(G) → {1, 2, ..., n}, of max{|φ(u) − φ(v)|: {u, v} ∈ E(G)} (so it has the flavor of an approximate embedding problem). Computing the bandwidth is NP-hard, but Feige's ingenious algorithm approximates it within a factor of O((log n)^const). The algorithm has two main steps: First, embed the graph (as a metric space) into ℓ₂^m, with m a suitable power of log n, by a (k, D) volume-respecting embedding f, where k = log n and D is as small as one can get. Second, let λ be a random line in ℓ₂^m and let ψ(v) denote the orthogonal projection of f(v) onto λ. This ψ: V(G) → λ is almost surely injective, and so it provides a linear ordering of the vertices, that is, a bijective map φ: V(G) → {1, 2, ...
, n}, and this is used for estimating the bandwidth.

To indicate the analysis, we need the notion of local density of the graph G: ld(G) = max{|B(v, r)|/r: v ∈ V(G), r = 1, 2, ..., n}, where B(v, r) is the set of all vertices at distance at most r from v. It is not hard to see that ld(G) is a lower bound for the bandwidth, and Feige's analysis shows that O(ld(G)·(log n)^const) is an upper bound.

One first verifies that with high probability, if {u, v} ∈ E(G), then the images ψ(u) and ψ(v) on λ are close; concretely, |ψ(u) − ψ(v)| ≤ ℓ = O(√((log n)/m)). For proving this, it suffices to know that f is 1-Lipschitz, and it is an immediate consequence of measure concentration on the sphere. If b is the bandwidth obtained from the ordering given by ψ, then some interval of length ℓ on λ contains the images of b vertices. Call a k-tuple S ⊆ V(G) squeezed if ψ(S) lies in an interval of length ℓ. If b is large, then there are many squeezed S. On the other hand, one proves that, not surprisingly, if ld(G) is small, then Vol(S) is large for all but a few k-tuples S ⊆ V(G). Now the volume-respecting condition enters: If Vol(S) is large, then conv(f(S)) has large (k−1)-dimensional volume. It turns out that the projection of a convex set in ℓ₂^m with large (k−1)-dimensional volume onto a random line is unlikely to be short, and so an S with large Vol(S) is unlikely to be squeezed. Thus, by estimating the number of squeezed k-tuples in two ways, one gets an inequality bounding b from above in terms of ld(G).

Vempala [Vem98] applied volume-respecting embeddings in another algorithmic problem, this time concerning the arrangement of graph vertices in the plane. Moreover, he also gave alternative proofs of some of Feige's lemmas. Rao, in the already mentioned paper [Rao99], also obtained improved volume-respecting embeddings for planar metrics.

Bartal's trees.
As we have seen, in Bourgain's method, for a given metric ρ one constructs a convex combination Σᵢ αᵢνᵢ ≥ (1/D)·ρ, where the νᵢ are line pseudometrics dominated by ρ. An interesting "dual" result was found by Bartal [Bar96], following earlier work in this direction by Alon, Karp, Peleg, and West [AKPW95]. He approximated a given ρ by a convex combination Σᵢ₌₁^N αᵢτᵢ, where this time the inequalities go in the opposite direction: τᵢ ≥ ρ and Σᵢ αᵢτᵢ ≤ D·ρ, with D = O(log² n) (later he improved this to O(log n·log log n) in [Bar98]). The τᵢ are not line metrics (and in general they cannot be), but they are tree metrics, and even of a special form, the so-called hierarchically well-separated trees. This means that τᵢ is given as the shortest-path metric of a rooted tree with weighted edges such that the distances from each vertex to all of its sons are the same, and if v is a son of u, and w a son of v, then τᵢ(u, v) ≥ K·τᵢ(v, w), where K > 1 is a parameter that can be set at will (and the constant in the bound on D depends on it).

This result has been used in approximation algorithms for problems involving metric spaces, according to the following scheme: Choose i ∈ {1, 2, ..., N} at random, with each i having probability αᵢ, solve the problem in question for the tree metric τᵢ, and show that the expected value of the solution is not very far from the optimal solution for the original metric ρ. Since tree metrics embed isometrically into ℓ₁, Bartal's result also implies O(log n·log log n)-embeddability of all n-point metric spaces into ℓ₁, which is just a little weaker than what Bourgain's approach gives (and it also implies that Ω(log n) is a lower bound in Bartal's setting). For a simpler proof of a weaker version of Bartal's result see Indyk [Ind01].

Exercises

1. (Embedding into ℓ_p) Prove that under the assumptions of Lemma 15.7.3, the metric space (V, ρ) can be embedded into ℓ_p^N, 1 ≤ p ≤ ∞, with distortion at most D.
(You may want to start with the rather easy cases p = 1 and p = ∞, and use Hölder's inequality for an arbitrary p.)

2. (Dimension reduction for the embedding)
(a) Let E₁, ..., E_m be independent events, each of them having probability at least 1/2. Prove that the probability of no more than m/4 of the Eᵢ occurring is at most e^(−cm), for a sufficiently small positive constant c. Use suitable Chernoff-type estimates or direct estimates of binomial coefficients.
(b) Modify the proof of Theorem 15.7.1 as follows: For each j = 1, 2, ..., q, pick sets A_ij independently at random, i = 1, 2, ..., m, where the points are included in A_ij with probability 2^(−j) and where m = C·log n for a sufficiently large constant C. Using (a) and Lemmas 15.7.2 and 15.7.3, prove that with positive probability, the embedding f: V → ℓ₂^(qm) given by f(v)_ij = ρ(v, A_ij) has distortion O(log n).

3. Let a₁, a₂, ..., aₙ, b₁, b₂, ..., bₙ, α₁, α₂, ..., αₙ be positive real numbers. Show that
minᵢ aᵢ/bᵢ ≤ (α₁a₁ + ⋯ + αₙaₙ)/(α₁b₁ + ⋯ + αₙbₙ) ≤ maxᵢ aᵢ/bᵢ.

4. Let Pₙ be the metric space {0, 1, ..., n} with the metric inherited from R (or a path of length n with the graph metric). Prove the following Ramsey-type result: For every D > 1 and every ε > 0 there exists an n = n(D, ε) such that whenever f: Pₙ → (Z, σ) is a D-embedding of Pₙ into some metric space, then there are a < b < c, b = (a+c)/2, such that f restricted to the subspace {a, b, c} of Pₙ is a (1+ε)-embedding. That is, if a sufficiently long path is D-embedded, then it contains a scaled copy of a path of length 2 embedded with distortion close to 1.
Can you extend the proof so that it provides a scaled copy of a path of length k?

5. (Lower bound for embedding trees into ℓ₂)
(a) Show that for every ε > 0 there exists δ > 0 with the following property.
Let x₀, x₁, x₂, x₃ ∈ ℓ₂ be points such that ‖x₀ − x₁‖, ‖x₁ − x₂‖, ‖x₁ − x₃‖ ∈ [1, 1+δ] and ‖x₀ − x₂‖, ‖x₀ − x₃‖ ∈ [2, 2+δ] (so all the distances are almost like the graph distances in the tree with edges x₀x₁, x₁x₂, x₁x₃, except possibly for the distance between x₂ and x₃). Then ‖x₂ − x₃‖ ≤ ε; that is, the remaining distance must be very short.
(b) Let T_k,m denote the complete k-ary tree of height m (the picture of T_3,2 is omitted here). Show that for every r and m there exists k such that whenever the leaves of T_k,m are colored by r colors, there is a subtree of T_k,m isomorphic to T_2,m with all leaves having the same color.
(c) Use (a), (b), and Exercise 4 to prove that for any D > 1 there exist m and k such that the tree T_k,m, considered as a metric space with the shortest-path metric, cannot be D-embedded into ℓ₂.

6. (Another lower bound for embedding trees into ℓ₂)
(a) Let x₀, x₁, ..., xₙ be arbitrary points in a Euclidean space (we think of them as images of the vertices of a path of length n under some embedding). Let Γ = {(a, a+2^k, a+2^(k+1)): a = 0, 1, 2, ...; a+2^(k+1) ≤ n; k = 0, 1, 2, ...}. Prove that

Σ_{(a,b,c)∈Γ} ‖x_a − 2x_b + x_c‖²/(c−a)² ≤ Σ_{a=0}^{n−1} ‖x_a − x_{a+1}‖²;

this shows that an average triple (x_a, x_b, x_c) is "straight" (and provides an alternative solution to Exercise 4 for Z = ℓ₂).
(b) Prove that the complete binary tree T_2,m requires Ω(√(log m)) distortion for embedding into ℓ₂. Consider a nonexpanding embedding f: V(T_2,m) → ℓ₂ and sum the inequalities as in (a) over the images of all root-to-leaf paths.

7. (Bourgain's embedding of complete binary trees into ℓ₂) Let B_m = T_2,m be the complete binary tree of height m (notation as in Exercise 5). We identify the vertices of B_m with words of length at most m over the alphabet {0, 1}: The root of B_m is the empty word, and the sons of a vertex w are the vertices w0 and w1.
We define the embedding f: V(B_m) → ℓ₂^(|V(B_m)|−1), where the coordinates in the range of f are indexed by the vertices of B_m distinct from the root, i.e., by nonempty words. For a word w ∈ V(B_m) of length a, let f(w)_u = √(a−b+1) if u is a nonempty initial segment of w of length b, and f(w)_u = 0 otherwise. Prove that this embedding has distortion O(√(log m)).

8. Prove that any finite tree metric can be isometrically embedded into ℓ₁.

9. (Low-dimensional embedding of trees)
(a) Let T be a tree (in the graph-theoretic sense) on n ≥ 3 vertices. Prove that there exist subtrees T₁ and T₂ of T that share a single vertex and no edge and together cover T, such that min(|V(T₁)|, |V(T₂)|) ≥ n/3.
(b) Using (a), prove that every tree metric space with n points can be isometrically embedded into ℓ_∞^d with d = O(log n).
This result is from [LLR95].

What Was It About? An Informal Summary

Chapter 1

• Linear and affine notions (dependence, hull, subspace, mapping); hyperplane, k-flat.
• General position: Degenerate configurations have measure zero in the space of all configurations, provided that degeneracy can be described by countably many polynomial equations.
• Convex set, hull, combination.
• Separation theorem: Disjoint convex sets can be separated by a hyperplane; strictly so if one of them is compact and the other closed.
• Theorems involving the dimension: Helly (if F is a finite family of convex sets with empty intersection, then there is a subfamily of at most d+1 sets with empty intersection), Radon (d+2 points can be partitioned into two subsets with intersecting convex hulls), Carathéodory (if x ∈ conv(X), then x ∈ conv(Y) for some at most (d+1)-point Y ⊆ X).
• Centerpoint of X: Every half-space containing it contains at least |X|/(d+1) points of X. It always exists, by Helly. Ham-sandwich: Any d mass distributions in R^d can be simultaneously bisected by a hyperplane.
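Radon's theorem from the summary above is effectively constructive: any nontrivial solution of the homogeneous system Σᵢ cᵢ = 0, Σᵢ cᵢpᵢ = 0 splits the points by the signs of the cᵢ, and the common point can be read off from the coefficients. A hedged Python sketch (the representation and names are my own; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def radon_partition(points):
    """Given d+2 points in R^d (integer coordinates), return (I, J, z):
    the two index sets of a Radon partition and a point z that lies in
    the convex hull of each part."""
    d, n = len(points[0]), len(points)          # n should be d + 2
    # Homogeneous system: sum_i c_i = 0 and sum_i c_i * p_i = 0
    # (d+1 equations, d+2 unknowns, so a nontrivial solution exists).
    rows = [[Fraction(1)] * n]
    for k in range(d):
        rows.append([Fraction(points[i][k]) for i in range(n)])
    pivots, r = [], 0
    for c in range(n):                          # Gauss-Jordan elimination
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(n) if c not in pivots)
    coef = [Fraction(0)] * n
    coef[free] = Fraction(1)
    for row, c in zip(rows, pivots):            # back-substitute free column
        coef[c] = -row[free]
    I = [i for i in range(n) if coef[i] > 0]
    J = [i for i in range(n) if i not in I]
    s = sum(coef[i] for i in I)
    z = tuple(sum(coef[i] * points[i][k] for i in I) / s for k in range(d))
    return I, J, z
```

For the four planar points (0,0), (4,0), (0,4), (1,1), the sketch separates (1,1) from the other three, and the common point is (1,1) itself.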
Chapter 2

• Minkowski's theorem: A 0-symmetric convex body of volume larger than 2^d contains a nonzero integer point.
• General lattice: a discrete subgroup of (R^d, +). It can be written as the set of all integer linear combinations of at most d linearly independent vectors (a basis). Determinant = volume of the parallelotope spanned by a basis.
• Minkowski's theorem for general lattices: Map the lattice onto Z^d by a linear mapping.

Chapter 3

• Erdős–Szekeres theorem: Every sufficiently large set in the plane in general position contains k points in convex position. How large? Exponential in k.
• What about k-holes (vertex sets of empty convex k-gons)? For k = 5 yes (in sufficiently large sets), for k ≥ 7 no (Horton sets); k = 6 is a challenging open problem.

Chapter 4

• Szemerédi–Trotter theorem: m distinct points and n distinct lines in the plane have at most O(m^(2/3)·n^(2/3) + m + n) incidences.
• This is tight in the worst case. Example for m = n: Use the k × 4k² grid and the lines y = ax + b with a = 0, 1, ..., 2k−1 and b = 0, 1, ..., 2k²−1.
• Crossing number theorem: A simple graph with n vertices and m ≥ 4n edges needs Ω(m³/n²) crossings. Proof: At least m−3n crossings, since planar graphs have fewer than 3n edges; then random sampling.
• Forbidden bipartite subgraphs: A graph on n vertices without K_r,s has O(n^(2−1/r)) edges.
• Cutting lemma: Given n lines and r, the plane can be subdivided into O(r²) generalized triangles such that the interior of each triangle is intersected by at most n/r lines. Proof of a weaker version: Triangulate the arrangement of a random sample and show that triangles intersected by many lines won't survive. Application: geometric divide-and-conquer.
• For unit distances and distinct distances in the plane, bounds can be proved, but a final answer seems to be far away.
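The tight example in the Szemerédi–Trotter bullet above can be checked by brute force for small k. The sketch below (an illustration with my own names) counts the incidences directly; each of the 4k³ lines meets the k × 4k² grid in exactly k points, giving 4k⁴ = Θ(n^(4/3)) incidences for n = m = 4k³:

```python
def grid_incidences(k):
    """Point-line incidences between the k × 4k² integer grid and the
    lines y = a·x + b with a = 0..2k−1 and b = 0..2k²−1."""
    points = {(x, y) for x in range(k) for y in range(4 * k * k)}
    return sum((x, a * x + b) in points
               for a in range(2 * k)
               for b in range(2 * k * k)
               for x in range(k))
```

The slopes and intercepts are chosen so that a·x + b < 4k² for all x < k, which is why no incidence is lost at the top of the grid.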
Chapter 5

• Geometric duality: Sends a point a to the hyperplane ⟨a, x⟩ = 1 and vice versa; preserves incidences and sidedness.
• Convex polytope: the convex hull of a finite set, and also the intersection of finitely many half-spaces.
• Face, vertex, edge, facet, ridge. A polytope is the convex hull of its vertices. A face of a face is a face. Face lattice. Duality turns it upside down. Simplex. Simple and simplicial polytopes.
• The convex hull of n points in R^d can have as many as Ω(n^⌊d/2⌋) facets; cyclic polytopes.
• This is as bad as it can get: Given the number of vertices, cyclic polytopes maximize the number of faces in each dimension (upper bound theorem).
• Gale transform: An n-point sequence in R^d (affinely spanning R^d) is mapped to a sequence of n vectors in R^(n−d−1). Properties: simple linear algebra. Faces of the convex hull go to subsets whose complement contains 0 in the convex hull.
• 3-dimensional polytopes are nice: Their graphs correspond to vertex 3-connected planar graphs (Steinitz theorem), and they can be realized with rational coordinates. From dimension 4 on, bad things can happen (irrational or doubly exponential coordinates may be required, recognition is difficult).
• Voronoi diagram. It is the projection of a convex polyhedron in dimension one higher (lifting using the paraboloid). Delaunay triangulation (defined using empty balls; dual to the Voronoi diagram).

Chapter 6

• Arrangement of hyperplanes (faces, vertices, edges, facets, cells). For d fixed, there are O(n^d) faces.
• Clarkson's theorem on levels: At most O(n^⌊d/2⌋·k^⌈d/2⌉) vertices are at level at most k. Proof: Express the expected number of level-0 vertices of a random sample in two ways!
• Zone theorem: The zone of a hyperplane has O(n^(d−1)) vertices. Proof: Delete a random hyperplane, and look at how many zone faces are sliced into two by adding it back.
• Proof of the cutting lemma by a finer sampling argument: Vertically decompose the arrangement of a sample taken with probability p, show that the number of trapezoids intersected by at least t/p lines decreases exponentially with t, and take finer cuttings within the trapezoids.
• Canonical triangulation; cutting lemma in R^d (O(r^d) simplices).
• Milnor–Thom theorem: The arrangement of the zero sets of n polynomials of degree at most D in d real variables has at most O(Dn/d)^d faces.
• Most arrangements of pseudolines are nonstretchable (by Milnor–Thom). Similarly for many other combinatorial descriptions of geometric configurations; usually most of them cannot be realized.

Chapter 7

• Davenport–Schinzel sequences of order s (no abab... with s+2 letters); maximum length λ_s(n). They correspond to lower envelopes of curves: The curves are graphs of functions defined everywhere, every two intersecting at most s times. Lower envelopes of segments yield DS sequences of order 3.
• λ₃(n) = Θ(n·α(n)); λ_s(n) is almost linear for every fixed s.
• The lower envelope of n algebraic surface patches in R^d, as well as a single cell in their arrangement, has complexity O(n^(d−1+ε)). Charging schemes and more random sampling.

Chapter 8

• Fractional Helly theorem: If a family of n convex sets has α·C(n, d+1) intersecting (d+1)-tuples, then there is a point common to at least (α/(d+1))·n of the sets.
• Colored Carathéodory theorem: If each of d+1 sets contains 0 in its convex hull, then we can pick one point from each set so that the convex hull of the picked points contains 0.
• Tverberg's theorem: (d+1)(r−1)+1 points can be partitioned into r subsets with intersecting convex hulls (the number is the smallest conceivable one: r−1 simplices plus one extra point).
• Colored Tverberg theorem: Given points partitioned into d+1 color classes of t points each, we can choose r disjoint rainbow subsets with intersecting convex hulls, t = t(d, r). Only topological proofs are known.

Chapter 9

• The dimension is considered fixed in this chapter. First selection lemma: Given n points, there exists a point contained in a fixed fraction of all simplices with vertices at the given points.
• Second selection lemma: If α·C(n, d+1) of the simplices are marked, we can find a point in many of the marked simplices (at least Ω(α^(s_d)·C(n, d+1)) of them). Needs the colored Tverberg theorem and Erdős–Simonovits.
• Order type. Same-type lemma: Given n points in general position and k fixed, one can find k disjoint subsets of size Ω(n), all of whose transversals have the same order type.
• A hypergraph regularity lemma: For an ε > 0 and a k-partite hypergraph of density bounded below by a constant β > 0 and with color classes X₁, ..., X_k of size n, we can choose subsets Y₁ ⊆ X₁, ..., Y_k ⊆ X_k, |Y₁| = ⋯ = |Y_k| ≥ cn, c = c(k, β, ε) > 0, such that any Z₁ ⊆ Y₁, ..., Z_k ⊆ Y_k with |Zᵢ| ≥ ε|Yᵢ| induce some edge.
• Positive-fraction selection lemma: Given n red, n white, and n blue points in the plane, we can choose n/4 points of each color so that all red-white-blue triangles have a common point; similarly in R^d.

Chapter 10

• Set systems; transversal number τ, packing number ν. Fractional transversal and fractional packing; ν* = τ* by LP duality.
• Epsilon net, shattered set, VC-dimension. Shatter function lemma: A set system on n points with VC-dimension d has at most Σ_{i=0}^d C(n, i) sets.
• Epsilon net theorem: A random sample of (Cd/ε)·log(1/ε) points in a set system of VC-dimension d is an ε-net with high probability. In particular, ε-nets exist of size depending only on d and ε.
• Corollary: τ = O(τ*·log τ*) for bounded VC-dimension.
• Half-spaces in R^d have VC-dimension d+1.
• Lifting (the Veronese map) and the shatter function lemma show that systems of sets in R^d definable by Boolean combinations of a bounded number of bounded-degree polynomial inequalities have bounded VC-dimension.
• Weak epsilon nets for convex sets: Convex sets have infinite VC-dimension, but given a finite set X and ε > 0, we can choose a weak ε-net of size at most f(d, ε), that is, a set (generally not a subset of X) that intersects every convex set C with |C ∩ X| ≥ ε|X|.
• Consequently, τ is bounded by a function of ν for any finite system of convex sets in R^d.
• Alon–Kleitman (p, q)-theorem: Let F be a system of convex sets such that among every p sets, some q intersect (p ≥ q ≥ d+1). Then τ(F) is bounded by a function of d, p, q. Proof: First bound ν* using fractional Helly; then τ is bounded in terms of τ* = ν* as above.
• A similar (p, q)-theorem holds for hyperplane transversals of convex sets (even though no Helly theorem holds there!).

Chapter 11

• k-sets, k-facets (only for sets in general position!), halving facets. Dual: cells of level k, vertices of level k. The k-set problem is still unsolved. Straightforward bounds follow from Clarkson's theorem on levels.
• Bounds for halving facets yield bounds for k-facets sensitive to k.
• A recursive planar construction gives a superlinear number of halving edges.
• Lovász lemma: No line intersects more than O(n^(d−1)) halving facets. Proof: When a moving line crosses the convex hull of d−1 points of X, the number of halving facets intersected changes by 1 (halving-facet interleaving lemma).
• This implies an upper bound of O(n^(d−δ(d))) for halving facets, by the second selection lemma.
• In the plane, a continuous motion argument proves that the crossing number of the halving-edge graph is O(n²), and consequently, it has O(n^(4/3)) edges by the crossing number theorem. This is the best we can do in the plane, although O(n^(1+ε)) for every fixed ε > 0 is suspected.

Chapter 12

• Perfect graph (χ = ω hereditarily).
The weak perfect graph conjecture (now a theorem): A graph is perfect iff its complement is.
• Proof via the polytope {x ∈ R^V: x ≥ 0, x(K) ≤ 1 for every clique K}.
• Brunn's slice volume inequality: For a compact convex C ⊆ R^(n+1), vol_n({x ∈ C: x₁ = t})^(1/n) is a concave function of t (as long as the slices do not miss the body).
• Brunn–Minkowski inequality: vol(A)^(1/n) + vol(B)^(1/n) ≤ vol(A + B)^(1/n) for nonempty compact A, B ⊆ R^n.
• A partially ordered set with N linear extensions can be sorted by O(log N) comparisons. There always exists a comparison that reduces the number of linear extensions by a fixed fraction: Compare elements whose average heights differ by less than 1.
• Order polytope: 0 ≤ x ≤ 1, x_a ≤ x_b whenever a ≺ b. Linear extensions correspond to congruent simplices, and a good comparison to dividing the volume evenly by a hyperplane x_a = x_b. The best ratio is not known (conjectured to be 1/3 : 2/3).

Chapter 13

• Volumes and other things in high dimensions behave differently from what we know in R² and R³. For example, the ball inscribed in the unit cube has a tiny volume.
• An η-net is an inclusion-maximal η-separated set. It is mainly useful because it is η-dense. In S^(n−1), a simple volume argument yields η-nets of size at most (4/η)^n.
• An N-vertex convex polytope inscribed in the unit ball B^n occupies at most (O(ln(N/n + 1)/n))^(n/2) of the volume of B^n. Thus, with polynomially many vertices, the error of deterministic volume approximation is exponential in the worst case.
• Polytopes with such volume can be constructed: For N = 2n use the crosspolytope, for N = 4^n a 1-net in S^(n−1), and interpolate using a product.
• Ellipsoid: an affine image of B^n. John's lemma: Every n-dimensional convex body has inner and outer ellipsoids with ratio at most n, and a symmetric convex body admits the better ratio √n. The maximum-volume inscribed ellipsoid (which is unique) will do as the inner ellipsoid.
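The η-net bullet in the Chapter 13 summary above admits a one-pass greedy construction: keep a point iff it is at distance at least η from everything kept so far; the result is η-separated by construction and inclusion-maximal, hence η-dense. A small illustrative Python sketch (the names are my own, and it works for any finite point set, not just the sphere):

```python
from math import dist   # Euclidean distance (Python 3.8+)

def eta_net(points, eta):
    """Greedy η-net: an η-separated subset of `points` such that every
    input point is within η of some net point (by inclusion-maximality)."""
    net = []
    for p in points:
        if all(dist(p, q) >= eta for q in net):
            net.append(p)
    return net
```

The greedy scan makes both properties immediate: a kept point certifies separation, and a skipped point is skipped precisely because some net point is already within η of it.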
Chapter 14

• Measure concentration on S^(n−1): For any set A occupying half of the sphere, almost all of S^(n−1) is at most O(n^(−1/2)) away from A. Quantitatively, 1 − P[A_t] ≤ 2e^(−t²n/2).
• Similar concentration phenomena occur in many other high-dimensional spaces: the Gaussian measure on R^n, the cube {0, 1}^n, permutations, etc.
• Many concentration inequalities can be proved via isoperimetric inequalities. Isoperimetric inequality: Among all sets of given volume, the ball has the smallest volume of a t-neighborhood.
• Lévy's lemma: A 1-Lipschitz function f on S^(n−1) is within O(n^(−1/2)) of its median on most of S^(n−1).
• Consequently (using η-nets), there is a high-dimensional subspace on which f is almost constant (use a random subspace).
• Normed spaces; a norm is induced by a symmetric convex body.
• For any n-dimensional symmetric convex polytope, log f₀ · log f_(n−1) = Ω(n), where f₀ is the number of vertices and f_(n−1) the number of facets (many vertices or many facets).
• Dvoretzky's theorem: For every k and ε > 0 there exists n (n = e^(O(k/ε²)) suffices) such that any n-dimensional convex body has a k-dimensional (1+ε)-spherical section. In other words, any high-dimensional normed space has an almost Euclidean subspace.

Chapter 15

• Metric space; the distortion of a mapping between two metric spaces, D-embedding. Spaces ℓ_p^d and ℓ_p.
• Flattening lemma: Any n-point Euclidean metric space can be (1+ε)-embedded into ℓ₂^k, k = O(ε^(−2)·log n) (project on a random k-dimensional subspace).
• Lower bound for D-embedding into a d-dimensional normed space: counting; take all subgraphs of a graph without short cycles and with many edges.
• The m-dimensional Hamming cube needs √m distortion for embedding into ℓ₂ (short diagonals and induction).
• Edge expansion (conductance), second eigenvalue of the Laplacian matrix. Constant-degree expanders need Ω(log n) distortion for embedding into ℓ₂ (tight).
Method: Compare sums of squared distances over the edges and over all pairs, in the graph and in the target space.
• D-embeddability into ℓ₂ is polynomial-time decidable by semidefinite programming.
• All n-point spaces embed isometrically into ℓ∞^n. For embeddings with smaller dimension, use distances to random subsets of suitable density as coordinates. A similar method yields O(log n)-embedding into ℓ₂ (or any other ℓ_p).
• Example of an algorithmic application: approximating the sparsest cut. Embed the graph metric into ℓ₁ with low distortion; this yields a cut pseudometric defining a sparse cut.

Hints to Selected Exercises

1.2.7(a). The existence of an x ≥ 0 with Ax = b means that b lies in the convex cone generated by the columns of A. If b is not in the cone, then it can be separated from it as in Exercise 6(b).

1.2.7(b). Apply (a) with the d × (n+d) matrix (A | I_d), where I_d is the identity matrix.

1.3.5(c). √(2d/(d+1)).

1.3.8(b). By Helly's theorem, K = ∩_{x∈X} conv(V(x)) ≠ ∅. Prove that K is the kernel.

1.3.10(b). Assign the set H_x = {(a,b) ∈ R^d × R : ⟨a,x⟩ < b} to each x ∈ X and the set G_y = {(a,b) ∈ R^d × R : ⟨a,y⟩ > b} to each y ∈ Y. Use Helly's theorem.

1.4.1(a). Express U as ∪_{i=1}^∞ C_i, where C₁ ⊆ C₂ ⊆ ⋯ are compact. Then μ(U) = Σ_{i=1}^∞ μ(C_{i+1} \ C_i) by the σ-additivity of μ. (More generally, every Borel probability measure on a separable metric space is regular: The measure of any set can be approximated with arbitrary precision by the measure of a compact set contained in it.)

2.1.4(c). Let p(x) be a polynomial with integer coefficients having α as a root. If deg(p) = d and |α − m/n| < n^{−(d+1)}, say, then n^d p(m/n) is integral, but |n^d p(m/n)| < 1 for large n.

2.1.5(a). Seek a nonzero vector in Z³ close to the line y = α₁x, z = α₂x.

2.2.1. Show that elementary row operations on the matrix, which do not change the determinant, also preserve the volume. Diagonalize the matrix.

3.1.4.
Project orthogonally on a suitable plane and apply Erdős–Szekeres.

3.2.4. It suffices to deal with the case k = 4m. First prove by induction that a 2m-point cup contained in a Horton set has at least 2^m − 2m points of the set above it.

4.1.2. Place points on two circles lying in orthogonal planes in R⁴.

4.3.3. Choose a point set P, one point in each of the m cells. From each top edge, cut off a little segment ab and replace it by the segments ap and pb, where p ∈ P lies below the edge. Each line is replaced by a polygonal curve. Consider a graph drawing with P as the vertices and the polygonal curves defining edges.

4.3.4(c). Consider a drawing of G witnessing pair-cr(G) = k. At most 2k edges are involved in any crossings, and the remaining ones (the good edges) form a planar graph. Redraw the edges with crossings so that they do not intersect any of the good edges and, subject to this, have the minimum possible number of crossings.

4.4.1(a). O(n^{10/7}) = O(n^{1.43}).

4.4.1(b). Let C_i be the points of C that are the centers of at least 2^i and at most 2^{i+1} circles. We have |C_i| = q_i ≤ n/2^i. One incidence of a line of the form ℓ_uv with a c ∈ C_i contributes at most 2^{i+2} edges.

4.4.2(b). Look at u, v with μ(u,v) ≥ 4√d_i, and suppose that at least half of the uv edges have their partner edges adjacent to u, say. These partner edges connect u to at least 2√d_i distinct neighbor vertices. By (a), at most √d_i/2 of these partner edges may belong to E_h.

4.4.2(c). We get |E| = O(|E \ E_h|) = O(n^{4/3} d_i^{1/6}); at the same time, |E| ≥ n·d_i/2. This gives d_i = O(n^{2/5}) and I_circ(n,n) = O(n^{7/5}) = O(n^{1.4}).

4.7.1. Consider a trapezoid ABB′A′; AB is the bottom side and A′B′ the top side. Suppose AB is contained in an edge CD of P_i and A′B′ is an edge of P_{i+1} (the few other possible cases are discussed similarly). Let A₁ be the intersection of the level q_i + 1 with the vertical line AA′, and similarly for B₁.
The segments A′B′, A′A₁, and B′B₁ each have at most q+1 intersections. Observe that if AA₁ has some a intersections, then CA also has at least a intersections, and similarly for BB₁ and BD. At the same time CD has at most q+1 intersections altogether. Therefore, AA₁, AB, and BB₁ have no more than q+1 intersections in total.

5.1.9(b). Geometric duality and Helly's theorem.

5.1.9(c). The first segment s₁ is a chord of the unit circle passing near the center. Each s_{i+1} has one endpoint on the unit circle, and the other endpoint almost touches s_i near the center.

5.3.2. Ask in this way: Given a normal vector a ∈ R^d of a hyperplane, which vertices maximize the linear function x ↦ ⟨a,x⟩? For example, for the cube, if a_i > 0, then x_i has to be +1; if a_i < 0, then x_i = −1; and for a_i = 0 both x_i = ±1 are possible.

5.3.8. If the removed vertices u, v lie in a common 2-face f, let h be the plane defining f; from each vertex there is an edge going "away from h," except for the vertices of a single face g ≠ f "opposite" to f. The graph of the face g is connected and can be reached from any other vertex. If u, v do not share a 2-face, pass a plane h through them and one more vertex w. The subgraph on the vertices below h is connected, and so is the subgraph on the vertices above h; they are connected via the vertex w.

5.4.2. Do not forget to check that β is not contained in any hyperplane.

5.5.1(c). The simplest example seems to be the product of an n-vertex 4-dimensional cyclic polytope with its dual.

5.7.11(c). Assume n ≥ 2. If x, y are points on the surface of such an intersection P, coming from the surface of the same ball B, show that the shorter of the great circle arcs on B connecting x and y lies entirely on the surface of P (this is a kind of "convexity" of the facets). Infer that each ball contributes at most one facet, and use Euler's formula.

6.1.5. n! · C_n, where C_n = (1/(n+1))·(2n choose n) is the nth Catalan number.
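The Catalan-number formula in hint 6.1.5 is easy to sanity-check numerically; a small sketch (the function name is mine, not the book's):

```python
from math import comb, factorial

def catalan(n):
    # C_n = binom(2n, n) / (n + 1); the division is always exact.
    return comb(2 * n, n) // (n + 1)

# The count n! * C_n from hint 6.1.5, for small n:
for n in range(1, 6):
    print(n, catalan(n), factorial(n) * catalan(n))
```

The Catalan values 1, 2, 5, 14, 42, … match the standard sequence.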
6.1.6(a). One possibility is a perturbation argument. Another one is a proof by induction, adding one line at a time.

6.1.7(b). Warning: The (n choose 2) lines determined by n points in general position are not in general position!

6.2.2(b). Assuming that no s_i is vertical, write s_i = {(x,y) ∈ R² : c_i ≤ x ≤ d_i, y = a_i x + b_i}. Whether s_i and s_j intersect can be determined from the signs of the O(n²) polynomials a_i − a_j, c_i − c_j, d_i − d_j, c_i(a_i − a_j) + b_i − b_j, d_i(a_i − a_j) + b_i − b_j, i, j = 1, 2, …, n.

6.2.2(c). Use the lower bound for the quantity K(n,n) in Chapter 4.

6.3.4(a). First derive X_W ≥ |W| − n, and then use it for a random sample of the lines.

6.4.3(a). Define an incidence graph between lines and the considered m cells (incidence = the line contributes an edge to the cell). This graph contains no K_{2,5}, since two cells have at most 4 "common tangents."

6.4.3. Each of the given n cells either lies completely within a single triangle Δ_i, or it is in the zone of an edge of some triangle. Use the zone theorem for bounding the total number of edges of cells of the latter type.

6.5.2(a). E[X²] = Σ_{i,j} E[X_i X_j]. E[X_i X_j] = p² for i ≠ j and E[X_i²] = p. The result is p²n(n−1) + pn.

7.1.1. Construct the curves from left to right: Start with n horizontal lines on the left and always "bring down" the curve required by the sequence.

7.1.4. Warning: The abab subsequence can appear!

7.1.8(b). For simplicity assume that the s_i and t_i are all distinct and let E = {s₁, t₁, …, s_n, t_n}. Call a vertex v active for an interval I ⊆ R if v appears on the lower envelope of L_t for some t ∈ I and I ∩ {s_i, s_j, t_i, t_j} ≠ ∅, where ℓ_i, ℓ_j are the lines defining v. Let g(I) be the number of active vertices for I and let g(m) = max{g(I) : |I ∩ E| ≤ m}. Split I in the middle of E ∩ I and derive g(m) ≤ O(m) + g(⌊m/2⌋) + g(⌈m/2⌉).

7.3.2(b). Zero out the first and last 1 in each row.
Go through the matrix column by column and write down the row indices of the 1's. Deleting contiguous repetitions produces a Davenport–Schinzel sequence with no ababa.

7.4.1(b). Given a sequence w witnessing ψ_s(m, n), replace each of the m segments in the decomposition of w by the list of its symbols (and erase contiguous repetitions if needed).

8.1.2. Make the sets compact as in the proof of the fractional Helly theorem. Consider all d-element collections K containing one set from each C_i but one, and let v_K be the lexicographic minimum of the intersection ∩K. Let K₀ be such that v = v_{K₀} is the lexicographically largest among all the v_K, and let i₀ be the index such that K₀ contains no set from C_{i₀}. Show that for each C ∈ C_{i₀}, v is the minimum of C ∩ ∩K₀, and in particular, v ∈ C.

8.2.1. Regard S ∪ T as a Gale transform of a point sequence and reformulate the problem using that sequence. Or lift S ∪ T into R^{d+1} suitably.

9.2.2(b). For d = 3: Choose k points on the moment curve, say, and replace each by a cluster of n/k points. Use all tetrahedra having two vertices in one cluster and the other two vertices in another cluster. There are about n⁴/k² such tetrahedra, and no point is contained in more than n⁴/k⁴ of them if the clusters are small and k is not too large compared to n.

9.3.1(b). Be careful with degenerate cases; first determine the dimension of the affine hull of p₁, …, p_{d+1} and test whether p_{d+2} lies in it. Then you may need to use some number of other affinely independent points among the p_i.

9.3.3(a). Let x_i, x′_i ∈ X_i be such that (x₁, …, x_{d+1}) and (x′₁, …, x′_{d+1}) have different orientations. Let y_i be a point moving along the segment x_i x′_i at constant speed, starting at x_i at time 0 and reaching x′_i at time 1. By continuity of the determinant, all the y_i lie in a common hyperplane at some moment, and this hyperplane intersects the convex hulls of all the X_i.

9.3.3(b).
Let the hyperplane h intersect all the C_i, and let a_i ∈ h ∩ C_i. Use Radon's lemma.

9.3.3(c). Suppose that 0 ∈ conv(∪_{i∈I} C_i) ∩ conv(∪_{j∉I} C_j). Then there are points x_i ∈ C_i, i = 1, 2, …, d+1, such that 0 ∈ conv{x_i : i ∈ I} and 0 ∈ conv{x_j : j ∉ I}. Hence the vectors {x_i : i ∈ I} are linearly dependent, as well as those of {x_j : j ∉ I}. Thus, the linear subspace generated by all the x_i has dimension at most d−1.

9.3.5(a). Partition P into 3 sets and apply the same-type lemma. If Y₁, Y₂, Y₃ are the resulting sets, then each line misses at least one conv(Y_i). Let P′ be the Y_i whose convex hull is missed by the largest number of lines of L.

9.3.5(b). First apply (a) with P consisting of the left endpoints of the segments of S. Then apply (a) again with the right endpoints of the remaining segments and the remaining lines. Finally, discard either the lines intersected by all segments or those intersected by no segment.

9.3.5(c). Use (b) twice.

9.4.4. Consider the complete bipartite graphs with classes V_i and V_j, 1 ≤ i < j ≤ 4, and color each of their edges randomly either red or blue with equal probability. A triple {u, v, w} with u ∈ V_i, v ∈ V_j, w ∈ V_k, i < j < k, is present if and only if the edges {u,v} and {u,w} have distinct colors.

10.1.3. Choose the appropriate number of points independently at random according to the distribution given by an optimal fractional transversal.

10.1.4(a). Let m_k be the number of yet uncovered sets after the last step i such that x_i covered more than k previously uncovered sets (m_d = |F|, m₀ = 0). Derive t ≤ Σ_{k=1}^{d} (m_k − m_{k−1})/k and note that m_k ≤ ν_k(F).

10.1.6(b). By the Farkas lemma, it suffices to check the following: For all u ∈ R^m, v ∈ R^n, and z ∈ R such that u ≥ 0, v ≥ 0, z ≥ 0, uᵀA ≤ zc, and Av ≥ zb, we have uᵀb ≤ cᵀv. For z ≠ 0 this is (a), and for z = 0 choose x₀ ∈ P and y₀ ∈ D and use uᵀb ≤ uᵀAx₀ ≤ 0 and cᵀv ≥ y₀ᵀAv ≥ 0.

10.2.2. All subsets of size at most d.

10.3.1. 7.
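The greedy covering analysis sketched in hint 10.1.4(a) is the classical argument that greedy covering is within a harmonic factor of optimal. A toy check (the instance below is made up for illustration, not from the book):

```python
import itertools

def greedy_cover(universe, sets):
    # Repeatedly pick a set covering the most still-uncovered elements.
    uncovered = set(universe)
    picked = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        picked.append(best)
        uncovered -= best
    return picked

def optimum_cover_size(universe, sets):
    # Brute-force smallest cover; fine for tiny instances only.
    for k in range(1, len(sets) + 1):
        for combo in itertools.combinations(sets, k):
            if set().union(*combo) >= set(universe):
                return k

universe = range(12)
sets = [frozenset(range(6)), frozenset(range(6, 12)),
        frozenset({0, 6}), frozenset({1, 2, 7, 8}),
        frozenset({3, 4, 5, 9, 10, 11})]
g = len(greedy_cover(universe, sets))
opt = optimum_cover_size(universe, sets)
d = max(len(s) for s in sets)                 # largest set size
h_d = sum(1.0 / k for k in range(1, d + 1))   # harmonic number H_d
print(g, opt, h_d)                            # greedy is within H_d of optimum
```

On this instance both greedy and the optimum use 2 sets, comfortably inside the H_d guarantee.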
10.3.3. Such a p would have to be 0 on the boundary, but if a polynomial is 0 on a segment, then it is 0 on the whole line containing that segment.

10.3.4(b). Choose an ε-net S ⊆ L for the set system (L, T) and triangulate the arrangement of S. No dangerous triangle appears in this triangulation.

10.3.6(c). The shattering graph SG_d considered in Exercise 5 contains a subdivision of K_d where each edge is subdivided once. Some care is needed, since some vertices might be both shattering and shattered in G.

10.4.1(b). This method gives size O(ε^{−(2d−1)}).

10.4.2(b). (a) yields f(ε) ≤ (t choose 2) + t·f(tε/3); set t = 3/√ε. The exponent of log(1/ε) is log₂ 3.

10.4.3. We may assume that ε is sufficiently small. Let C be convex with |C ∩ X| ≥ εn. Then C ∩ X contains points a, b, c such that the shortest of the 3 arcs determined by them, call it α, is at least Ω(ε). Show that the triangle abc contains a point of N_i, where i is the smallest index with ε(1.01)^i/10 ≥ α.

10.5.2. If x is the last among the lexicographic minima of d-wise intersections of F, the family {F ∈ F : x ∉ F} satisfies the (p−d, q−d+1)-condition.

10.5.3(b). By ham-sandwich, choose lines ℓ, ℓ′ with |R_i ∩ X| ≥ k+1, where R₁, …, R₄ are the "quadrants" determined by ℓ and ℓ′. The point ℓ ∩ ℓ′ and centerpoints of the R_i ∩ X form a transversal.

10.6.1(a). No need to invoke the Alon–Kleitman machinery here.

10.6.1(b). Use Ramsey's theorem.

10.6.2(a). Count the incidences of endpoints with intervals (it can be assumed that all the intervals have distinct endpoints). To get a better β, apply Turán's theorem.

10.6.3. For a finite F ⊆ K_k, let G = ∪_{S∈F}{S₁, S₂, …, S_k}, where S = S₁ ∪ ⋯ ∪ S_k with the S_i convex. If F has many intersecting (d+1)-tuples, then G has many intersecting (d+1)-tuples, and so fractional Helly for F, with worse parameters, follows from that for G.

10.6.4. Let C = f(d+1, d, k), where f(p, d, k) is as in Exercise 3, and h = (d+1)C.
Let F′ be the family of all intersections of C-tuples of sets of F. This F′ has the (d+1, d+1)-property, and so it has a C-point transversal T. Show that some point of T is contained in all members of F.

11.1.4. In R³: Place the planar construction on g points into the xz-plane so that all of its points lie very near 0 and all the halving edges are almost parallel to the x-axis. A set A of g points is placed on the line x = 0, y = 1, and the remaining g points are the reflected set −A.

11.1.5(a). Use the lower bound for K(n,n) in Chapter 4.

11.1.6(a). All the 12 lenses corresponding to such a K_{3,4} are contained in L ∪ U, and so L intersects U at least 24 times. This is impossible, since U has at most 5 edges and L at most 7 edges (using λ₂(n) ≤ 2n−1).

11.1.6(d). To bound ν_k(L), fix a k-packing M ⊆ L, take a random sample R ⊆ Γ, and consider the family A of all lenses ℓ in the arrangement of R "inherited" from M and such that none of the extremal edges of ℓ is contained in any other lens in the arrangement of R. Extremal edges of a lens are those contained in the lens and adjacent to one of its two end-vertices.

11.3.2. By Exercise 1(a), a vertical line intersects the interior of at most Σ_{k∈K}(k+1) k-edges with k ∈ K. Argue as in the proof of the planar case of Theorem 11.3.3.

11.3.4(b). These halving triangles are not influenced by projecting the other points of X centrally from p_{k+1} on a sphere around p_{k+1}.

11.3.5(a). Let V be the vertex set of a j-facet F entered by ℓ. Among the j points below the hyperplane defined by V we can choose any k points and add them to V, obtaining an S with F being the facet of conv(S) through which ℓ leaves conv(S).

11.3.5(b). See the end of Section 5.5 for a similar trick.

11.3.5(c). For h_j = h_{n−d−j}, let X′ be the mirror reflection of X by a horizontal hyperplane.

11.3.5(d). Move x far up.

11.3.6(a). Corollary 5.6.3(iii).

11.3.6(b).
Use (a) and the formulas expressing the f_k using the h_j and the s_k using the h_j, respectively.

11.3.8(a). Draw a tiny sphere σ around a vertex incident to at least 3n triangles. The intersections of the triangles with σ define a graph drawn on σ. With n vertices and at least 3n edges, the graph is nonplanar.

12.1.5. Let v be a vertex of P. First check that there is an a ∈ Z^n such that v is the unique vertex minimizing ⟨a, x⟩. Moreover, we may assume that a′ = a + (1, 0, …, 0), too, has this property. Then v₁ = ⟨a′, v⟩ − ⟨a, v⟩ ∈ Z.

12.1.6(b). We need that each integral b ∈ A·R^n is the image of an integer point. Let Ā be a regular k × k submatrix of A with k = rank(A); we may assume that Ā is contained in the first k rows and in the first k columns of A. Let b̄ consist of the first k components of b; then x̄ = Ā^{−1}b̄ is integral by (a). Append n − k zero components to x̄.

12.1.6(c). A vertex is determined by some n of the inequalities holding with equality; use (b).

12.1.7(b). It suffices to consider n = 2d + 1. For contradiction, suppose that Z^d ∩ ∩_{i=1}^n γ_i = ∅. For i = 1, 2, …, n, let γ′_i be γ_i translated as far outward as possible so that Z^d ∩ int((∩_{j≤i} γ′_j) ∩ (∩_{j≥i+1} γ_j)) = ∅. Show that each γ′_i contributes a facet of P′ = ∩_{i=1}^n γ′_i and that there is a z_i ∈ Z^d in the relative interior of this facet. Applying (a) to {z₁, …, z_n} yields a lattice point interior to P′.

12.2.5(b). Suppose vol(A), vol(B) > 0, fix t with vol(A)/(1−t)^n = vol(B)/t^n, and set C = (1/(1−t))·A and D = (1/t)·B.

12.2.7(a). Consider the horizontal slice F_y = {x ∈ R : f(x) ≥ y}, and G_y, H_y defined analogously. We have ∫f = ∫₀¹ vol(F_y) dy. The assumption implies (1−t)F_y + tG_y ⊆ H_y. Apply the one-dimensional Brunn–Minkowski inequality to (1−t)F_y and tG_y and integrate over y.

12.2.7(b). Let f(u) be the (n−1)-dimensional volume of the slice of C by the hyperplane x₁ = u; similarly for g(u) and D and for h(u) and C + D.

13.1.1. 2^n/n!.

13.1.2(b).
I_n = n·v_n ∫₀^∞ e^{−r²} r^{n−1} dr.

13.2.3. Fix the coordinate system so that c = 0 and F lies in the coordinate hyperplane h = {x_n = 0}. Since 0 is not the center of gravity, for some i we have I = ∫_F x_i dx ≠ 0. Without loss of generality, i = 1 and I > 0. Let h₁ be h slightly rotated around the flat {x₁ = x_n = 0}; i.e., h₁ = {x ∈ R^n : ⟨a, x⟩ = 0} with a = (ε, 0, …, 0, 1). Let S₁ be the simplex determined by the same facet hyperplanes as S except that h is replaced by h₁. The difference vol(S) − vol(S₁) is proportional to εI + O(ε²) as ε → 0. Let h′ be a parallel translation of h₁ that touches B^n (near 0), and let S′ be the corresponding simplex. Calculation shows that |vol(S₁) − vol(S′)| = O(ε²).

13.2.5. The Thales theorem implies that if x ∉ B(½v, ½‖v‖), then v lies in the open half-space γ_x containing 0 and bounded by the hyperplane passing through x and perpendicular to 0x.

13.3.1(b). Geometric duality and Theorem 13.2.1.

13.4.4(b). Helly's theorem for suitable sets in R^{n+1}.

13.4.5(a). Since the ratio of areas is invariant under affine transforms, we may assume that P contains B(0,1) and is contained in B(0,2). Infer that 99% of the edges of P have length O(1/n) and 99% of the angles are π − O(1/n). Then there are two consecutive short edges with angle close to π.

14.1.4. Choose a radius r such that the caps cut off from rB^n by the considered slabs together cover at most half of the surface of rB^n. Then vol(K) ≥ vol(K ∩ rB^n) ≥ ½·vol(rB^n).

14.6.1. Suppose that max_i |v_i| = |v₁|. For any fixed choice of a₂, …, a_n, use ½(|x+y| + |x−y|) ≥ |y| with y := v₁ and x := Σ_{i≥2} a_i v_i.

14.6.2. We need to bound n^{−1/2}·E[‖Z‖₁/‖Z‖] from below for Z as in Lemma 14.6.4. Each |Z_i| is at least a small constant β > 0 with probability at least a fixed constant; derive that ‖Z‖₁ = Ω(n) with probability bounded away from 0.

15.2.3(b). Let λ₁, …, λ_n be the eigenvalues of A.
The rank is the number of nonzero λ_i. Estimate Σ_i λ_i² in two ways: First use the trace of AᵀA, and then the trace of A and Cauchy–Schwarz.

15.2.3(d). If v₁, …, v_n ∈ R^k, then the matrix A with a_ij = ⟨v_i, v_j⟩ has rank at most k.

15.3.4(a). Let n = 2m+1 and let each n-tuple in V have the form (0, e₁, e₂, …, e_m, e_{m+1} + 10εw₁, e_{m+2} + 10εw₂, …, e_{2m} + 10εw_m), where each w_i is a 0/1 vector with ⌊1/(40ε²)⌋ ones among the first m positions and zeros elsewhere.

15.4.2. Let G_i = (V_i, E_i), where V₀ ⊆ V₁ ⊆ ⋯ ⊆ V_m. For each e ∈ E_{i−1}, we have a pair {u_e, v_e} of new vertices in G_i in the square that replaces e; let F_i = {{u_e, v_e} : e ∈ E_{i−1}}. With notation as in the proof of Theorem 15.4.1, put E = E_m and F = E₀ ∪ ∪_{i=1}^m F_i and show that R_{E,F}(ρ) = √(m+1), while R_{E,F}(σ) ≤ 1. For the latter, sum up the inequalities σ²(F_i) + σ²(E_{i−1}) ≤ σ²(E_i), i = 1, 2, …, m, obtained from the short diagonals lemma.

15.4.3. Color the pairs of points; the color of {x, y} is the remainder of ⌈log_{1+ε/2} ρ(x,y)⌉ modulo r, where r is a sufficiently large integer. Show by induction that a homogeneous set can be embedded satisfactorily.

15.5.2(b). By (a) and Carathéodory's theorem, every metric in L₁(n) is a convex combination of at most N+1 line metrics. To get rid of the extra +1, use the fact that L₁(n) is a convex cone.

15.5.8(c). The expectation of ½(1 − x_u x_v) is the probability that the hyperplane through 0 perpendicular to r separates y_u and y_v, and this equals ϑ/π, where ϑ ∈ [0, π) is the angle of y_u and y_v. On the other hand, the contribution of the edge {u, v} to M_relax is ½(1 − ⟨y_u, y_v⟩) = (1 − cos ϑ)/2. The constant 0.878… is the minimum of (2/π)·ϑ/(1 − cos ϑ) over 0 < ϑ < π.

15.7.5(c).
Suppose that there is a D-embedding f of T_{k,m}. For every leaf ℓ, consider f restricted to the path p(ℓ) from the root to ℓ, fix a triple {a_ℓ, b_ℓ, c_ℓ} of vertices as in Exercise 4 (a scaled copy of P₂), and label the corresponding leaf by the distances of a_ℓ, b_ℓ, c_ℓ from the root. Using (b), choose a T_{2,m} subtree where all leaves have the same labels, consider leaves ℓ and ℓ′ of this subtree such that p(ℓ) and p(ℓ′) first meet at b_ℓ = b_{ℓ′}, and use (a) with x₀ = f(a_ℓ), x₁ = f(b_ℓ), x₂ = f(c_ℓ), x₂′ = f(c_{ℓ′}).

15.7.6(a). Sum the parallelogram identities ‖(x_a − x_b) − (x_b − x_c)‖² + ‖(x_a − x_b) + (x_b − x_c)‖² = 2(‖x_a − x_b‖² + ‖x_b − x_c‖²) over (a, b, c) ∈ Γ.

Bibliography

The references are sorted alphabetically by the abbreviations (rather than by the authors' names).

[AA92] P. K. Agarwal and B. Aronov. Counting facets and incidences. Discrete Comput. Geom., 7:359–369, 1992. (refs: pp. 46, 47)

[AACS98] P. K. Agarwal, B. Aronov, T. M. Chan, and M. Sharir. On levels in arrangements of lines, segments, planes, and triangles. Discrete Comput. Geom., 19(3):315–331, 1998. (refs: pp. 269, 270, 271, 286, 287)

[AAHP+98] A. Andrzejak, B. Aronov, S. Har-Peled, R. Seidel, and E. Welzl. Results on k-sets and j-facets via continuous motion arguments. In Proc. 14th Annu. ACM Sympos. Comput. Geom., pages 192–199, 1998. (refs: pp. 269, 270, 286)

[AAP+97] P. K. Agarwal, B. Aronov, J. Pach, R. Pollack, and M. Sharir. Quasi-planar graphs have a linear number of edges. Combinatorica, 17:1–9, 1997. (ref: p. 177)

[AAS01] P. K. Agarwal, B. Aronov, and M. Sharir. On the complexity of many faces in arrangements of circles. In Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001. (refs: pp. 47, 70)

[ABFK92] N. Alon, I. Bárány, Z. Füredi, and D. Kleitman. Point selections and weak ε-nets for convex hulls. Combin., Probab. Comput., 1(3):189–200, 1992. (refs: pp. 215, 254, 270)

[ABS97] D. Avis, D. Bremner, and R. Seidel.
How good are convex hull algorithms? Comput. Geom. Theory Appl., 7:265–302, 1997. (ref: p. 106)

[ABV98] J. Arias-de-Reyna, K. Ball, and R. Villa. Concentration of the distance in finite dimensional normed spaces. Mathematika, 45:245–252, 1998. (ref: p. 332)

[ACE+91] B. Aronov, B. Chazelle, H. Edelsbrunner, L. J. Guibas, M. Sharir, and R. Wenger. Points and triangles in the plane and halving planes in space. Discrete Comput. Geom., 6:435–442, 1991. (refs: pp. 215, 270)

[Ach01] D. Achlioptas. Database-friendly random projections. In Proc. 20th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 274–281, 2001. (ref: p. 361)

[ACNS82] M. Ajtai, V. Chvátal, M. Newborn, and E. Szemerédi. Crossing-free subgraphs. Ann. Discrete Math., 12:9–12, 1982. (ref: p. 56)

[AEG+94] B. Aronov, P. Erdős, W. Goddard, D. J. Kleitman, M. Klugerman, J. Pach, and L. J. Schulman. Crossing families. Combinatorica, 14:127–134, 1994. (ref: p. 177)

[AEGS92] B. Aronov, H. Edelsbrunner, L. Guibas, and M. Sharir. The number of edges of many faces in a line segment arrangement. Combinatorica, 12(3):261–274, 1992. (ref: p. 46)

[AF92] D. Avis and K. Fukuda. A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete Comput. Geom., 8:295–313, 1992. (ref: p. 106)

[AF00] N. Alon and E. Friedgut. On the number of permutations avoiding a given pattern. J. Combin. Theory, Ser. A, 81:133–140, 2000. (ref: p. 177)

[AFH+00] H. Alt, S. Felsner, F. Hurtado, M. Noy, and E. Welzl. A class of point-sets with few k-sets. Comput. Geom. Theor. Appl., 16:95–101, 2000. (ref: p. 270)

[AFR85] N. Alon, P. Frankl, and V. Rödl. Geometrical realization of set systems and probabilistic communication complexity. In Proc. 26th IEEE Symposium on Foundations of Computer Science, pages 277–280, 1985. (ref: p. 140)

[AG86] N. Alon and E. Győri. The number of small semispaces of a finite set of points in the plane. J. Combin.
Theory Ser. A, 41:154–157, 1986. (ref: p. 145)

[AGHV01] P. K. Agarwal, L. J. Guibas, J. Hershberger, and E. Veach. Maintaining the extent of a moving point set. Discrete Comput. Geom., 26:353–374, 2001. (ref: p. 194)

[AH00] R. Aharoni and P. E. Haxell. Hall's theorem for hypergraphs. J. Graph Theory, 35:83–88, 2000. (ref: p. 235)

[Aha01] R. Aharoni. Ryser's conjecture for tri-partite hypergraphs. Combinatorica, 21:1–4, 2001. (ref: p. 235)

[AHL01] N. Alon, S. Hoory, and N. Linial. The Moore bound for irregular graphs. Graphs and Combinatorics, 2001. In press. (ref: p. 367)

[AI88] F. Aurenhammer and H. Imai. Geometric relations among Voronoi diagrams. Geom. Dedicata, 27:65–75, 1988. (ref: p. 121)

[Ajt98] M. Ajtai. Worst-case complexity, average-case complexity and lattice problems. Documenta Math. J. DMV, Extra volume ICM 1998, vol. III:421–428, 1998. (ref: p. 26)

[AK85] N. Alon and G. Kalai. A simple proof of the upper bound theorem. European J. Combin., 6:211–214, 1985. (ref: p. 103)

[AK92] N. Alon and D. Kleitman. Piercing convex sets and the Hadwiger–Debrunner (p,q)-problem. Adv. Math., 96(1):103–112, 1992. (ref: p. 258)

[AK95] N. Alon and G. Kalai. Bounding the piercing number. Discrete Comput. Geom., 13:245–256, 1995. (ref: p. 261)

[AK00] F. Aurenhammer and R. Klein. Voronoi diagrams. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 201–290. Elsevier Science Publishers B.V. North-Holland, Amsterdam, 2000. (refs: pp. 120, 121)

[AKMM01] N. Alon, G. Kalai, J. Matoušek, and R. Meshulam. Transversal numbers for hypergraphs arising in geometry. Adv. Appl. Math., 2001. In press. (ref: p. 262)

[AKP89] N. Alon, M. Katchalski, and W. R. Pulleyblank. The maximum size of a convex polygon in a restricted set of points in the plane. Discrete Comput. Geom., 4:245–251, 1989. (ref: p. 33)

[AKPW95] N. Alon, R. M. Karp, D. Peleg, and D. West. A graph-theoretic game and its application to the k-server problem. SIAM J.
Computing, 24(1):78–100, 1995. (ref: p. 398)

[AKV92] R. Adamec, M. Klazar, and P. Valtr. Generalized Davenport–Schinzel sequences with linear upper bound. Discrete Math., 108:219–229, 1992. (ref: p. 176)

[Alo] N. Alon. Covering a hypergraph of subgraphs. Discrete Math. In press. (ref: p. 262)

[Alo86a] N. Alon. Eigenvalues and expanders. Combinatorica, 6:83–96, 1986. (ref: p. 381)

[Alo86b] N. Alon. The number of polytopes, configurations, and real matroids. Mathematika, 33:62–71, 1986. (ref: p. 140)

[Alo98] N. Alon. Piercing d-intervals. Discrete Comput. Geom., 19:333–334, 1998. (ref: p. 262)

[ALPS01] N. Alon, H. Last, R. Pinchasi, and M. Sharir. On the complexity of arrangements of circles in the plane. Discrete Comput. Geom., 26:465–492, 2001. (ref: p. 271)

[AM85] N. Alon and V. D. Milman. λ₁, isoperimetric inequalities for graphs, and superconcentrators. J. Combinatorial Theory, Ser. B, 38(1):73–88, 1985. (ref: p. 381)

[Ame96] N. Amenta. A short proof of an interesting Helly-type theorem. Discrete Comput. Geom., 15:423–427, 1996. (ref: p. 261)

[AMS94] B. Aronov, J. Matoušek, and M. Sharir. On the sum of squares of cell complexities in hyperplane arrangements. J. Combin. Theory Ser. A, 65:311–321, 1994. (refs: pp. 47, 152)

[AMS98] P. K. Agarwal, J. Matoušek, and O. Schwarzkopf. Computing many faces in arrangements of lines and segments. SIAM J. Comput., 27(2):491–505, 1998. (ref: p. 162)

[APS93] B. Aronov, M. Pellegrini, and M. Sharir. On the zone of a surface in a hyperplane arrangement. Discrete Comput. Geom., 9(2):177–186, 1993. (ref: p. 151)

[AR92] J. Arias-de-Reyna and L. Rodríguez-Piazza. Finite metric spaces needing high dimension for Lipschitz embeddings in Banach spaces. Israel J. Math., 79:103–113, 1992. (ref: p. 367)

[AR98] Y. Aumann and Y. Rabani. An O(log k) approximate min-cut max-flow theorem and approximation algorithm. SIAM J. Comput., 27(1):291–301, 1998. (ref: p. 392)

[Aro00] B. Aronov.
A lower bound for Voronoi diagram complexity. Manuscript, Polytechnic University, Brooklyn, New York, 2000. (refs: pp. 123, 192)

[ARS99] N. Alon, L. Rónyai, and T. Szabó. Norm-graphs: variations and applications. J. Combin. Theory Ser. B, 76:280–290, 1999. (ref: p. 68)

[AS94] B. Aronov and M. Sharir. Castles in the air revisited. Discrete Comput. Geom., 12:119–150, 1994. (ref: p. 193)

[AS00a] P. K. Agarwal and M. Sharir. Arrangements and their applications. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 49–119. North-Holland, Amsterdam, 2000. (refs: pp. 47, 128, 145, 168, 191)

[AS00b] P. K. Agarwal and M. Sharir. Davenport–Schinzel sequences and their geometric applications. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 1–47. North-Holland, Amsterdam, 2000. (ref: p. 168)

[AS00c] P. K. Agarwal and M. Sharir. Pipes, cigars, and kreplach: The union of Minkowski sums in three dimensions. Discrete Comput. Geom., pages 645–685, 2000. (ref: p. 194)

[AS00d] N. Alon and J. Spencer. The Probabilistic Method (2nd edition). J. Wiley and Sons, New York, NY, 2000. First edition 1993. (refs: pp. 336, 340)

[AS01a] B. Aronov and M. Sharir. Cutting circles into pseudo-segments and improved bounds for incidences. Discrete Comput. Geom., 2001. To appear. (refs: pp. 44, 46, 69, 70, 271)

[AS01b] B. Aronov and M. Sharir. Distinct distances in three dimensions. Manuscript, School of Computer Science, Tel Aviv University, 2001. (ref: p. 45)

[Ass83] P. Assouad. Density and dimension (in French). Ann. Inst. Fourier (Grenoble), 33:233–282, 1983. (ref: p. 250)

[ASS89] P. K. Agarwal, M. Sharir, and P. Shor. Sharp upper and lower bounds on the length of general Davenport–Schinzel sequences. J. Combin. Theory Ser. A, 52(2):228–274, 1989. (ref: p. 176)

[ASS96] P. K. Agarwal, M. Sharir, and O. Schwarzkopf. The overlay of lower envelopes and its applications. Discrete Comput.
Geom., 15:1–13, 1996. (ref: p. 192)

[AST97] B. Aronov, M. Sharir, and B. Tagansky. The union of convex polyhedra in three dimensions. SIAM J. Comput., 26:1670–1688, 1997. (ref: p. 194)

[Aur91] F. Aurenhammer. Voronoi diagrams: A survey of a fundamental geometric data structure. ACM Comput. Surv., 23(3):345–405, September 1991. (ref: p. 120)

[Avi93] D. Avis. The m-core properly contains the m-divisible points in space. Pattern Recognit. Lett., 14(9):703–705, 1993. (ref: p. 205)

[Bal] K. Ball. Convex geometry and functional analysis. In W. B. Johnson and J. Lindenstrauss, editors, Handbook of Banach Spaces. North-Holland, Amsterdam. In press. (refs: pp. 314, 320, 337)

[Bal90] K. Ball. Isometric embedding in l_p-spaces. European J. Combin., 11(4):305–311, 1990. (ref: p. 383)

[Bal92] K. Ball. Markov chains, Riesz transforms and Lipschitz maps. Geom. Funct. Anal., 2(2):137–172, 1992. (ref: p. 380)

[Bal97] K. Ball. An elementary introduction to modern convex geometry. In S. Levi, editor, Flavors of Geometry (MSRI Publications vol. 31), pages 1–58. Cambridge University Press, Cambridge, 1997. (refs: pp. viii, 300, 315, 327, 336, 337, 346)

[Bar82] I. Bárány. A generalization of Carathéodory's theorem. Discrete Math., 40:141–152, 1982. (refs: pp. 198, 199, 210)

[Bar89] I. Bárány. Intrinsic volumes and f-vectors of random polytopes. Math. Ann., 285(4):671–699, 1989. (ref: p. 99)

[Bar93] A. I. Barvinok. A polynomial time algorithm for counting integral points in polyhedra when the dimension is fixed. In Proc. 34th IEEE Symposium on Foundations of Computer Science, pages 566–572, 1993. (ref: p. 24)

[Bar96] Y. Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In Proc. 37th IEEE Symposium on Foundations of Computer Science, pages 184–193, 1996. (ref: p. 398)

[Bar97] A. I. Barvinok. Lattice points and lattice polytopes. In J. E. Goodman and J.
O'Rourke, editors, Handbook of Discrete and Computational Geometry, chapter 7, pages 133-152. CRC Press LLC, Boca Raton, FL, 1997. (refs: pp. 24, 294) Y. Bartal. On approximating arbitrary metrics by tree metrics. In Proc. 30th Annu. ACM Sympos. on Theory of Computing, pages 161-168, 1998. (ref: p. 398) S. Basu. On the combinatorial and topological complexity of a single cell. In Proc. 39th IEEE Symposium on Foundations of Computer Science, pages 606-616, 1998. (ref: p. 193) H. Bronnimann, B. Chazelle, and J. :IV1atousek. Product range spaces, sensitive sarnpling, and derandomization. SIAM J. Comput., 28:1552-1575, 1999. (ref: p. 106) J. Bochnak, M. Coste, and M.-F. Roy. Real Algebraic Geometry. Springer, Berlin etc., 1998. Transl. from the French, revised and updated edition. (refs: pp. 135, 191) D. Bienstock and N. Dean. Bounds for rectilinear crossing num­ bers. J. Graph Theory, 17(3):333-348, 1993. (ref: p. 58) A. Bialostocki, P. Dierker, and B. Voxman. Some notes on the Erdos-Szekeres theorem. Discrete Math., 91(3):231-238, 1991. (ref: p. 38) Bibliography 423 [Bec83] J. Beck. On the lattice property of the plane and some prob­ lems of Dirac, Motzkin and Erdos in combinatorial geometry. Combinatorica, 3(3·-4):281-297, 1983. (refs: pp. 45, 50) [Ben66] C. T. Benson. Minimal regular graphs of girth eight and twelve. Canad. J. Math., 18:1091-1094, 1966. (ref: p. 367) [BEPY91] M. Bern, D. Eppstein, P. Plassman, and F. Yao. Horizon the­ orems for lines and polygons. In J. Goodman, R. Pollack, and W. Steiger, editors, Discrete and Computational Geometry: Pa­ pers from the DIMACS Special Year, volume 6 of DIMACS Series in Discrete Mathematics and Theoretical Computer Sci­ ence, pages 45-66. American Mathematical Society, Association for Computing Machinery, Providence, RI, 1991. (ref: p. 151) [Ber61] C. Berge. Farbungen von Graphen, deren samtliche bzw. deren ungerade Kreise starr sind (Zusammenfassung). 
Wis­ sentschaftliche Zeitschrift, Martin Luther Universitiit Halle­ Wittenberg, Math.-Naturwiss. Reihe, pages 114-115, 1961. (ref: p. 293) [Ber62] C. Berge. Sur une conjecture relative au probleme des codes op­ timaux. Cornmunication, 13eme assemblee generale de l'URSI, Tokyo, 1962. (ref: p. 293) [BF84J E. Boros and Z. Fiiredi. The number of triangles covering the center of an n-set. Geom. Dedicata, 17:69-77, 1984. (ref: p. 210) [BF87] I. Barany and Z. Furedi. Computing the volume is difficult. Discrete Comput. Geom., 2:319-326, 1987. (refs: pp. 320, 322, 324) [BF88] I. Barany and Z. Fiiredi. Approximation of the sphere by poly­ topes having few vertices. Proc. Amer. Math. Soc., 102(3):651-659, 1988. (ref: p. 320) [BFL90] I. Baxany, Z. Furedi, and L. Lovasz. On the number of halving planes. Combinatorica, 10:175-183, 1990. (refs: pp. 205, 215, 229, 269, 270, 280) [BFM86] J. Bourgain, T. Figiel, and V. Milman. On Hilbertian subsets of finite metric spaces. Israel J. Math., 55:147-152, 1986. (ref: p. 373) [BFT95] G. R. Brightwell, S. Felsner, and W. T. Trotter. Balancing pairs and the cross product conjecture. Order, 12( 4):327-349, 1995. (ref: p. 308) 424 Bibliography (BGK+99] A. Brieden, P. Gritzmann, R. Kannan, V. Klee, L. Lovasz, and M. Simonovits. Deterministic and randomized polynomial-time approximation of radii. 1999. To appear in Mathematika. Pre­ lirninary version in Proc. 39th IEEE Symposium on Foundations of Computer Science, 1998, pages 244-251. (refs: pp. 322, 334) [BH93] U. Betke and M. Henk. Approxirnating the volume of convex bodies. Discrete Comput. Geom., 10:15-21, 1993. (ref: p. 321) (Big93] N. Biggs. Algebraic Graph Theor·y. Cambridge U niv. Press, Cambridge, 1993. 2nd edition. (refs: pp. 367, 381) (BK63] W. Bonnice and V. L. Klee. The generation of convex hulls. Math. Ann., 152:1-29, 1963. (ref: p. 8) [BKOO] A. Brieden and M. Kochol. A note on cutting planes, vol­ ume approximation and Mahler's conjecture. Manuscript, TU Miinchen, 2000. (ref: p. 
324) [BL81] L. J. Billera and C. W. Lee. A proof of the suffiency of Mc­ Mullen's conditions for !-vectors of simplicial polytopes. J. Combin. Theory Ser. A, 31(3):237-255, 1981. (ref: p. 105) [BL92] I. Barany and D. Larman. A colored version of Tverberg's theorem. J. London Math. Soc. II. Ser., 45:314-320, 1992. (ref: p. 205) [BL99] Y. Benyamini and J. Lindenstrauss. Nonlinear Functional Anal­ ysis, Vol. I, Colloquium Publications 48. American Mathemat­ ical Society (AMS), Providence, RI, 1999. (refs: pp. 336, 352, 358) [BLM89] J. Bourgain, J. Lindenstrauss, and V. Milman. Approximation of zonoids by zonotopes. Acta Math., 162:73-141, 1989. (ref: p. 320) [BLPS99] W. Banaszczyk, A. E. Litvak, A. Pajor, and S. J. Szarek. The flatness theorem for nonsymmetric convex bodies via the lo­ cal theory of Banach spaces. Math. Oper. Res., 24(3):728-750, 1999. (ref: p. 24) [BLZV94) A. Bjorner, L. Lovasz, R. Zivaljevic, and S. Vrecica. Chessboard complexes and matching complexes. J. London Math. Soc., 49:25-39, 1994. (ref: p. 205) (BMMV02) R. Babilon, J. Matousek, J. Maxova, and P. Valtr. Low­ distortion e1nbeddings of trees. In Proc. Graph Drawing 2001. Springer, Berlin etc., 2002. In press. (ref: p. 393) [BMT95] [B097) [Bol85] [Bol87] [Bor75] [BOR99] [Bou85] [Bou86) [BP90] [BPR96] [Bre93] [Bro66] (Br083] [BS89] Bibliography 425 C. Buchta, J. 1.1iiller, and R. F. Tichy. Stochastical approxima­ tion of convex bodies. Math. Ann., 271:225-235, 1895. (ref: p. 324) I. Barany and S. Onn. Colourful linear programming and its relatives. Math. Oper. Res., 22:550-567, 1997. (refs: pp. 199, 204) B. Bollobas. Random Graphs. Academic Press (Harcourt Brace .Jovanovich, Publishers), London-Orlando etc., 1985. (ref: p. 366) B. Bolio bas. l\1artingales, isoperimetric inequalities and random graphs. In 52. Combinatorics, Eger {Hungary), Colloq. Math. Soc. J. Bolyai, pages 113-139. Math. Soc. J. Bolyai, Budapest, 1987. (refs: pp. 336, 340) C. Borell. 
The Brunn-Minkowski inequality in Gauss space. Invent. Math., 30(2):207--216, 1975. (ref: p. 336) A. Borodin, R. Ostrovsky, and Y. Rabani. Subquadratic ap­ proximation algorithms for clustering problems in high dimen­ sional spaces. In Proc. 31st Annual ACM Symposiurn on Theo'ry of Computing, pages 435-444, 1999. (ref: p. 361) J. Bourgain. On Lipschitz embedding of finite rnetric spaces in Hilbert space. Israel J. Math., 52:46-52, 1985. (refs: pp. 367, 388, 392} J. Bourgain. The metrical interpretation of superreflexivity in Banach spaces. Israel J. Math., 56:222-230, 1986. (ref: p. 392) K. Ball and A. Pajor. Convex bodies with few faces. Proc. Amer. Math. Soc., 110(1):225-231, 1990. (ref: p. 320) S. Basu, R. Pollack, and M.-F. Roy. On the number of cells defined by a family of polynornials on a variety. M athematika, 43:120-126, 1996. (ref: p. 135) G. Bredon. Topology and Geometry {Graduate Texts in Math­ ematics 139). Springer-Verlag, Berlin etc., 1993. (ref: p. 4) W. G. Brown. On graphs that do not contain a Thomsen graph. Canad. Math. Bull., 9:281-285, 1966. (ref: p. 68) A. Br0nsted. An Introduction to Convex Polytopes. Springer­ Verlag, New York, NY, 1983. (ref: p. 85) J. Bakowski and B. Sturmfels. Computational Synthetic Geom­ etry. Lect. Notes in Math. 1355. Springer-Verlag, Heidelberg, 1989. (ref: p. 138) 426 Bibliography [BSTY98] J.-D. Boissonnat, M. Sharir, B. Tagansky, and M. Yvinec. Voronoi diagrams in higher dimensions under certain polyhe­ dral distance functions. Discrete Comput. Geom., 1 9( 4) :4 73-484, 1998. (ref: p. 194) [BT89] T. Bisztriczky and G. Fejes T6th. A generalization of the Erdos-Szekeres convex n-gon theorem. J. Reine Angew. Math., 395:167-170, 1989. (ref: p. 33} [BV82] E. 0. Buchman and F. A. Valentine. Any new Helly numbers? Amer. Math. Mon., 89:370-375, 1982. (ref: p. 13) [BV98] I. Barany and P. Valtr. A positive fraction Erdos-Szekeres the­ orem. Disc'rete Comput. Georn, 19:335-342, 1998. (ref: p. 220) [BVS+99] A. Bjorner, M. 
Las Vergnas, B. Sturmfels, N. White, and G. M. Ziegler. Oriented Matroids {2nd edition). Encyclopedia of Mathematics 46. Cambridge University Press, Cambridge, 1999. (refs: pp. 100, 137, 139, 222) [Can69] R. Canham. A theorem on arrangements of lines in the plane. Israel J. Math., 7:393-397, 1969. (ref: p. 46) . . [Car07] C. Caratheodory. Uber den Variabilitatsbereich der Koeffizien-ten von Potenzreihen, die gegebenc Werte nicht annehmen. Math. Ann., 64:95-115, 1907. (refs: pp. 8, 98) [Car85] B. Carl. Inequalities of Bernstein-Jackson-type and the de­ gree of compactness of operators in Banach spaces. Ann. Inst. Fourier, 35(3):79-118, 1985. (ref: p. 320) [Cas59] J. Cassels. An Introduction to the Geometry of Numbers. Springer-Verlag, Heidelberg, 1959. (ref: p. 20) (CCPS98] W. J. Cook, W. H. Cunningham, W. R. Pulleyblank, and A. Schrijver. Combinatorial Optimization. Wiley, New York, NY, 1998. (ref: p. 294) [CEG+9o] K. Clarkson, H. Edelsbrunner, L. Guibas, M. Sharir, and E. Welzl. Combinatorial complexity bounds for arrangements of curves and spheres. Discrete Comput. Geom., 5:99-160, 1990. (refs: pp. 44, 45, 46, 4 7, 68, 152) [CEG+93] B. Chazelle, H. Edelsbrunner, L. Guibas, M. Sharir, and J. Snoeyink. Cornputing a face in an arrangement of line seg­ ments and related problems. SIAM J. Comput., 22:1286-1302. 1993. (ref: p. 162} [CEG+94] B. Chazelle, H. Edelsbrunner, L. Guibas, J. Hershberger, R. Sei­ del, and M. Sharir. Selecting heavily covered points. SIAM J. Comput., 23:1138-1151, 1994. (ref: p. 215) Bibliography 427 [CEG+95] B. Chazelle, H. Edelsbrunner, M. Grigni, L. Guibas, M. Sharir, and E. Welzl. Improved bounds on weak E-nets for convex sets. Discrete Comput. Geom., 13:1-15, 1995. (ref: p. 254) [CEGS89] B. Chazelle, H. Edelsbrunner, L. Guibas, and M. Sharir. A singly-exponential stratification scheme for real semi-algebraic varieties and its applications. In Prnc. 16th Internat. Colloq. Automata Lang. Program., volume 372 of Lecture Notes Com­ put. 
Sci., pages 179-192. Springer-Verlag, Berlin etc., 1989. (ref: p. 162)
[CEM+96] K. L. Clarkson, D. Eppstein, G. L. Miller, C. Sturtivant, and S.-H. Teng. Approximating center points with iterative Radon points. Internat. J. Comput. Geom. Appl., 6:357-377, 1996. (ref: p. 16)
[CF90] B. Chazelle and J. Friedman. A deterministic view of random sampling and its use in geometry. Combinatorica, 10(3):229-249, 1990. (refs: pp. 68, 161)
[CGL85] B. Chazelle, L. J. Guibas, and D. T. Lee. The power of geometric duality. BIT, 25:76-90, 1985. (ref: p. 151)
[Cha93a] B. Chazelle. Cutting hyperplanes for divide-and-conquer. Discrete Comput. Geom., 9(2):145-158, 1993. (refs: pp. 69, 162)
[Cha93b] B. Chazelle. An optimal convex hull algorithm in any fixed dimension. Discrete Comput. Geom., 10:377-409, 1993. (ref: p. 106)
[Cha00a] T. M. Chan. On levels in arrangements of curves. In Proc. 41st IEEE Symposium on Foundations of Computer Science, pages 219-227, 2000. (refs: pp. 140, 271)
[Cha00b] T. M. Chan. Random sampling, halfspace range reporting, and construction of (<k)-levels in three dimensions. SIAM J. Comput., 30(2):561-575, 2000. (ref: p. 106)
[Cha00c] B. Chazelle. The Discrepancy Method. Cambridge University Press, Cambridge, 2000. (ref: p. 162)
[Chu84] F. R. K. Chung. The number of different distances determined by n points in the plane. J. Combin. Theory Ser. A, 36:342-354, 1984. (ref: p. 45)
[Chu97] F. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics 92. Amer. Math. Soc., Providence, 1997. (ref: p. 381)
[CKS+98] L. P. Chew, K. Kedem, M. Sharir, B. Tagansky, and E. Welzl. Voronoi diagrams of lines in 3-space under polyhedral convex distance functions. J. Algorithms, 29(2):238-255, 1998. (ref: p. 192)
[Cla87] K. L. Clarkson. New applications of random sampling in computational geometry. Discrete Comput. Geom., 2:195-222, 1987. (refs: pp. 68, 72)
[Cla88a] K. L. Clarkson. Applications of random sampling in computational geometry, II. In Proc. 4th Annu. ACM Sympos. Comput. Geom., pages 1-11, 1988. (refs: pp. 145, 161)
[Cla88b] K. L. Clarkson. A randomized algorithm for closest-point queries. SIAM J. Comput., 17:830-847, 1988. (ref: p. 161)
[Cla93] K. L. Clarkson. A bound on local minima of arrangements that implies the upper bound theorem. Discrete Comput. Geom., 10:427-233, 1993. (refs: pp. 103, 280)
[CLO92] D. Cox, J. Little, and D. O'Shea. Ideals, Varieties, and Algorithms. Springer-Verlag, New York, NY, 1992. (ref: p. 135)
[CP88] B. Carl and A. Pajor. Gelfand numbers of operators with values in a Hilbert space. Invent. Math., 94:479-504, 1988. (refs: pp. 320, 324)
[CS89] K. L. Clarkson and P. W. Shor. Applications of random sampling in computational geometry, II. Discrete Comput. Geom., 4:387-421, 1989. (refs: pp. 68, 105, 145, 161)
[CS99] J. H. Conway and N. J. A. Sloane. Sphere Packings, Lattices and Groups (3rd edition). Grundlehren der Mathematischen Wissenschaften 290. Springer-Verlag, New York etc., 1999. (ref: p. 24)
[CST92] F. R. K. Chung, E. Szemeredi, and W. T. Trotter. The number of different distances determined by a set of points in the Euclidean plane. Discrete Comput. Geom., 7:1-11, 1992. (ref: p. 45)
[Dan63] G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963. (ref: p. 93)
[Dan86] L. Danzer. On the solution of the problem of Gallai about circular discs in the Euclidean plane (in German). Stud. Sci. Math. Hung., 21:111-134, 1986. (ref: p. 235)
[dBvKOS97] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications. Springer-Verlag, Berlin, 1997. (refs: pp. 116, 122, 162)
[DE94] T. K. Dey and H. Edelsbrunner. Counting triangle crossings and halving planes. Discrete Comput. Geom., 12:281-289, 1994. (ref: p. 270)
[Del34] B. Delaunay. Sur la sphere vide. A la memoire de Georges Voronoi. Izv. Akad. Nauk SSSR, Otdelenie Matematicheskih i Estestvennyh Nauk, 7:793-800, 1934. (ref: p. 120)
[Dey98] T. K. Dey. Improved bounds on planar k-sets and related problems. Discrete Comput. Geom., 19:373-382, 1998. (refs: pp. 269, 270, 285)
[DFK91] M. E. Dyer, A. Frieze, and R. Kannan. A random polynomial time algorithm for approximating the volume of convex bodies. J. ACM, 38:1-17, 1991. (ref: p. 321)
[dFPP90] H. de Fraysseix, J. Pach, and R. Pollack. How to draw a planar graph on a grid. Combinatorica, 10(1):41-51, 1990. (ref: p. 94)
[DFPS00] A. Deza, K. Fukuda, D. Pasechnik, and M. Sato. Generating vertices with symmetries. In Proc. of the 5th Workshop on Algorithms and Computation, Tokyo University, pages 1-8, 2000. (ref: p. 106)
[DG99] S. Dasgupta and A. Gupta. An elementary proof of the Johnson-Lindenstrauss lemma. Technical Report TR-99-06, Intl. Comput. Sci. Inst., Berkeley, CA, 1999. (ref: p. 361)
[DGK63] L. Danzer, B. Grünbaum, and V. Klee. Helly's theorem and its relatives. In Convexity, volume 7 of Proc. Symp. Pure Math., pages 101-180. American Mathematical Society, Providence, 1963. (refs: pp. 8, 12, 13, 327)
[Dil50] R. P. Dilworth. A decomposition theorem for partially ordered sets. Annals of Math., 51:161-166, 1950. (ref: p. 295)
[Dir42] G. L. Dirichlet. Verallgemeinerung eines Satzes aus der Lehre von Kettenbrüchen nebst einigen Anwendungen auf die Theorie der Zahlen. In Bericht über die zur Bekanntmachung geeigneten Verhandlungen der Königlich Preussischen Akademie der Wissenschaften zu Berlin, pages 93-95. 1842. Reprinted in L. Kronecker (editor): G. L. Dirichlet's Werke Vol. I, G. Reimer, Berlin 1889, reprinted Chelsea, New York 1969. (ref: p. 21)
[Dir50] G. L. Dirichlet. Über die Reduktion der positiven quadratischen Formen mit drei unbestimmten ganzen Zahlen. J. Reine Angew. Math., 40:209-227, 1850. (ref: p. 120)
[DL97] M. M. Deza and M. Laurent. Geometry of Cuts and Metrics. Algorithms and Combinatorics 15. Springer-Verlag, Berlin etc., 1997. (refs: pp. 107, 357)
[Dol'92] V. L. Dol'nikov. A generalization of the ham sandwich theorem. Mat. Zametki, 52(2):27-37, 1992. In Russian; English translation in Math. Notes 52,2:771-779, 1992. (ref: p. 16)
[Doi73] J.-P. Doignon. Convexity in cristallographical lattices. J. Geometry, 3:71-85, 1973. (ref: p. 295)
[DR50] A. Dvoretzky and C. A. Rogers. Absolute and unconditional convergence in normed linear spaces. Proc. Natl. Acad. Sci. USA, 36:192-197, 1950. (ref: p. 352)
[DS65] H. Davenport and A. Schinzel. A combinatorial problem connected with differential equations. Amer. J. Math., 87:684-689, 1965. (ref: p. 175)
[Dud78] R. M. Dudley. Central limit theorems for empirical measures. Ann. Probab., 6:899-929, 1978. (ref: p. 250)
[DV02] H. Djidjev and I. Vrt'o. An improved lower bound for crossing numbers. In Proc. Graph Drawing 2001. Springer, Berlin etc., 2002. In press. (ref: p. 57)
[Dvo59] A. Dvoretzky. A theorem on convex bodies and applications to Banach spaces. Proc. Natl. Acad. Sci. USA, 45:223-226, 1959. Errata. Ibid. 1554. (ref: p. 352)
[Dvo61] A. Dvoretzky. Some results on convex bodies and Banach spaces. In Proc. Int. Symp. Linear Spaces 1960, pages 123-160. Jerusalem Academic Press, Jerusalem; Pergamon, Oxford, 1961. (refs: pp. 346, 352)
[Dwo97] C. Dwork. Positive applications of lattices to cryptography. In Proc. 22nd International Symposium on Mathematical Foundations of Computer Science (Lect. Notes Comput. Sci. 1295), pages 44-51. Springer, Berlin, 1997. (ref: p. 26)
[Eck85] J. Eckhoff. An upper-bound theorem for families of convex sets. Geom. Dedicata, 19:217-227, 1985. (ref: p. 197)
[Eck93] J. Eckhoff. Helly, Radon and Caratheodory type theorems. In P. M. Gruber and J. M. Wills, editors, Handbook of Convex Geometry. North-Holland, Amsterdam, 1993. (refs: pp. 8, 12, 13)
[Ede89] H. Edelsbrunner. The upper envelope of piecewise linear functions: Tight complexity bounds in higher dimensions. Discrete Comput. Geom., 4:337-343, 1989. (ref: p. 186)
[Ede98] H. Edelsbrunner. Geometry of modeling biomolecules. In P. K. Agarwal, L. E. Kavraki, and M. Mason, editors, Proc. Workshop Algorithmic Found. Robot. A. K. Peters, Natick, MA, 1998. (ref: p. 122)
[Edm65] J. Edmonds. Maximum matching and a polyhedron with 0,1-vertices. J. Res. National Bureau of Standards (B), 69:125-130, 1965. (ref: p. 294)
[EE94] Gy. Elekes and P. Erdos. Similar configurations and pseudo grids. In K. Boroczky et al., editors, Intuitive Geometry. Proceedings of the 3rd International Conference Held in Szeged, Hungary, From 2 To 7 September, 1991, Colloq. Math. Soc. Janos Bolyai. 63, pages 85-104. North-Holland, Amsterdam, 1994. (refs: pp. 47, 51)
[EFPR93] P. Erdos, Z. Füredi, J. Pach, and I. Ruzsa. The grid revisited. Discrete Math., 111:189-196, 1993. (ref: p. 47)
[EGS90] H. Edelsbrunner, L. Guibas, and M. Sharir. The complexity of many cells in arrangements of planes and related problems. Discrete Comput. Geom., 5:197-216, 1990. (ref: p. 46)
[EHP89] P. Erdos, D. Hickerson, and J. Pach. A problem of Leo Moser about repeated distances on the sphere. Amer. Math. Mon., 96:569-575, 1989. (ref: p. 45)
[EKZ01] D. Eppstein, G. Kuperberg, and G. M. Ziegler. Fat 4-polytopes and fatter 3-spheres. Manuscript, TU Berlin, 2001. (ref: p. 107)
[Ele86] Gy. Elekes. A geometric inequality and the complexity of computing the volume. Discrete Comput. Geom., 1:289-292, 1986. (refs: pp. 320, 322)
[Ele97] Gy. Elekes. On the number of sums and products. Acta Arith., 81(4):365-367, 1997. (ref: p. 50)
[Ele99] Gy. Elekes. On the number of distinct distances and certain algebraic curves. Period. Math. Hung., 38(3):173-177, 1999. (ref: p. 48)
[Ele01] Gy. Elekes. Sums versus products in number theory, algebra and Erdos geometry. In G. Halasz et al., editors, Paul Erdos and His Mathematics. J. Bolyai Math. Soc., Budapest, 2001. In press. (refs: pp. 47, 48, 49, 54)
[ELSS73] P. Erdos, L. Lovasz, A. Simmons, and E. Straus. Dissection graphs of planar point sets. In J. N. Srivastava, editor, A Survey of Combinatorial Theory, pages 139-154. North-Holland, Amsterdam, Netherlands, 1973. (refs: pp. 269, 276)
[Enf69] P. Enflo. On the nonexistence of uniform homeomorphisms between Lp-spaces. Ark. Mat., 8:103-105, 1969. (ref: p. 372)
[EOS86] H. Edelsbrunner, J. O'Rourke, and R. Seidel. Constructing arrangements of lines and hyperplanes with applications. SIAM J. Comput., 15:341-363, 1986. (ref: p. 151)
[EP71] P. Erdos and G. Purdy. Some extremal problems in geometry. J. Combin. Theory, 10(3):246-252, 1971. (ref: p. 50)
[Epp95] D. Eppstein. Dynamic Euclidean minimum spanning trees and extrema of binary functions. Discrete Comput. Geom., 13:111-122, 1995. (ref: p. 124)
[Epp98] D. Eppstein. Geometric lower bounds for parametric matroid optimization. Discrete Comput. Geom., 20:463-476, 1998. (ref: p. 271)
[ER00] Gy. Elekes and L. Ronyai. A combinatorial problem on polynomials and rational functions. J. Combin. Theory Ser. B, 89(1):1-20, 2000. (ref: p. 48)
[Erd46] P. Erdos. On a set of distances of n points. Amer. Math. Monthly, 53:248-250, 1946. (refs: pp. 44, 45, 53, 54, 68)
[Erd60] P. Erdos. On sets of distances of n points in Euclidean space. Publ. Math. Inst. Hungar. Acad. Sci., 5:165-169, 1960. (ref: p. 45)
[ES35] P. Erdos and G. Szekeres. A combinatorial problem in geometry. Compositio Math., 2:463-470, 1935. (refs: pp. 32, 33)
[ES63] P. Erdos and H. Sachs. Regular graphs with given girth and minimal number of knots (in German). Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg, Math.-Naturwiss. Reihe, 12:251-258, 1963. (ref: p. 368)
[ES83] P. Erdos and M. Simonovits. Supersaturated graphs and hypergraphs. Combinatorica, 3:181-192, 1983. (ref: p. 215)
[ES96] H. Edelsbrunner and N. R. Shah. Incremental topological flipping works for regular triangulations. Algorithmica, 15:223-241, 1996. (ref: p. 121)
[ES00] A. Efrat and M. Sharir. On the complexity of the union of fat objects in the plane. Discrete Comput. Geom., 23:171-189, 2000. (ref: p. 194)
[ESS93] H. Edelsbrunner, R. Seidel, and M. Sharir. On the zone theorem for hyperplane arrangements. SIAM J. Comput., 22(2):418-429, 1993. (ref: p. 151)
[EVW97] H. Edelsbrunner, P. Valtr, and E. Welzl. Cutting dense point sets in half. Discrete Comput. Geom., 17(3):243-255, 1997. (refs: pp. 270, 273)
[EW85] H. Edelsbrunner and E. Welzl. On the number of line separations of a finite set in the plane. Journal of Combinatorial Theory Ser. A, 38:15-29, 1985. (ref: p. 269)
[EW86] H. Edelsbrunner and E. Welzl. Constructing belts in two-dimensional arrangements with applications. SIAM J. Comput., 15:271-284, 1986. (ref: p. 75)
[Far94] G. Farkas. Applications of Fourier's mechanical principle (in Hungarian). Math. Termes. Ertesito, 12:457-472, 1893/94. German translation in Math. Nachr. Ungarn 12:1-27, 1895. (ref: p. 8)
[Fei00] U. Feige. Approximating the bandwidth via volume respecting embeddings. J. Comput. Syst. Sci., 60:510-539, 2000. (ref: p. 396)
[Fel97] S. Felsner. On the number of arrangements of pseudolines. Discrete Comput. Geom., 18:257-267, 1997. (ref: p. 139)
[FH92] Z. Füredi and P. Hajnal. Davenport-Schinzel theory of matrices. Discrete Math., 103:233-251, 1992. (ref: p. 177)
[Fie73] M. Fiedler. Algebraic connectivity of graphs. Czechosl. Math. J., 23(98):298-305, 1973. (ref: p. 381)
[FKS89] J. Friedman, J. Kahn, and E. Szemeredi. On the second eigenvalue of random regular graphs. In Proceedings of the Twenty First Annual ACM Symposium on Theory of Computing, pages 587-598, 1989. (ref: p. 381)
[FLM77] T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. Acta Math., 139:53-94, 1977. (refs: pp.
336, 346, 348, 352, 353)
[FR01] P. Frankl and V. Rödl. Extremal problems on set systems. Random Structures and Algorithms, 2001. In press. (refs: pp. 226, 227)
[Fre73] G. A. Freiman. Foundations of a Structural Theory of Set Addition. Translations of Mathematical Monographs, Vol. 37. American Mathematical Society, Providence, RI, 1973. (ref: p. 47)
[Fre76] M. L. Fredman. How good is the information theory bound in sorting? Theor. Comput. Sci., 1:355-361, 1976. (ref: p. 308)
[Fri91] J. Friedman. On the second eigenvalue and random walks in random d-regular graphs. Combinatorica, 11:331-362, 1991. (ref: p. 381)
[FS01] U. Feige and G. Schechtman. On the optimality of the random hyperplane rounding technique for MAXCUT. In Proc. 33rd Annual ACM Symposium on Theory of Computing, 2001. (ref: p. 384)
[Ful70] D. R. Fulkerson. The perfect graph conjecture and pluperfect graph theorem. In R. C. Bose et al., editors, Proc. of the Second Chapel Hill Conference on Combinatorial Mathematics and Its Applications, pages 171-175. Univ. of North Carolina, Chapel Hill, North Carolina, 1970. (ref: p. 293)
[Für96] Z. Füredi. New asymptotics for bipartite Turán numbers. J. Combin. Theory Ser. A, 75:141-144, 1996. (ref: p. 68)
[Gal56] D. Gale. Neighboring vertices on a convex polyhedron. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems, Annals of Math. Studies 38, pages 255-263. Princeton University Press, Princeton, 1956. (ref: p. 114)
[Gal63] D. Gale. Neighborly and cyclic polytopes. In V. Klee, editor, Convexity, volume 7 of Proc. Symp. Pure Math., pages 225-232. American Mathematical Society, 1963. (ref: p. 98)
[GGL95] R. L. Graham, M. Grotschel, and L. Lovasz, editors. Handbook of Combinatorics. North-Holland, Amsterdam, 1995. (refs: pp. viii, 85)
[GJ00] E. Gawrilow and M. Joswig. polymake: a framework for analyzing convex polytopes. In G. Kalai and G. M. Ziegler, editors, Polytopes-Combinatorics and Computation, pages 43-74. Birkhauser, Basel, 2000. Software available at http://www.math.tu-berlin.de/diskregeom/polymake/. (ref: p. 85)
[GKS99] R. J. Gardner, A. Koldobsky, and T. Schlumprecht. An analytic solution to the Busemann-Petty problem on sections of convex bodies. Annals of Math., 149:691-703, 1999. (ref: p. 314)
[GKV94] L. Gargano, J. Korner, and U. Vaccaro. Capacities: From information theory to extremal set theory. J. Combin. Theory, Ser. A, 68(2):296-316, 1994. (ref: p. 309)
[GL87] P. M. Gruber and C. G. Lekkerkerker. Geometry of Numbers. North-Holland, Amsterdam, 2nd edition, 1987. (ref: p. 20)
[GLS88] M. Grotschel, L. Lovasz, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer-Verlag, Berlin etc., 1988. 2nd edition 1993. (refs: pp. 24, 26, 293, 321, 322, 327, 381)
[Glu89] E. D. Gluskin. Extremal properties of orthogonal parallelepipeds and their applications to the geometry of Banach spaces. Math. USSR Sbornik, 64(1):85-96, 1989. (ref: p. 320)
[GM90] H. Gazit and G. L. Miller. Planar separators and the Euclidean norm. In Proc. 1st Annu. SIGAL Internat. Sympos. Algorithms. Information Processing Society of Japan, Springer-Verlag, August 1990. (ref: p. 57)
[GNRS99] A. Gupta, I. Newman, Yu. Rabinovich, and A. Sinclair. Cuts, trees and l1-embeddings of graphs. In Proc. 40th IEEE Symposium on Foundations of Computer Science, pages 399-409, 1999. Also submitted to Combinatorica. (ref: p. 396)
[GO97] J. E. Goodman and J. O'Rourke, editors. Handbook of Discrete and Computational Geometry. CRC Press LLC, Boca Raton, FL, 1997. (refs: pp. viii, 85)
[Goo97] J. E. Goodman. Pseudoline arrangements. In J. E. Goodman and J. O'Rourke, editors, Handbook of Discrete and Computational Geometry, pages 83-110. CRC Press LLC, Boca Raton, FL, 1997. (refs: pp. 136, 139)
[Gor88] Y. Gordon. Gaussian processes and almost spherical sections of convex bodies. Ann. Probab., 16:180-188, 1988. (ref: p. 353)
[Gow98] W. T. Gowers. A new proof of Szemeredi's theorem for arithmetic progressions of length four. Geom. Funct. Anal., 8(3):529-551, 1998. (refs: pp. 48, 227)
[GP84] J. E. Goodman and R. Pollack. On the number of k-subsets of a set of n points in the plane. J. Combin. Theory Ser. A, 36:101-104, 1984. (ref: p. 145)
[GP86] J. E. Goodman and R. Pollack. Upper bounds for configurations and polytopes in Rd. Discrete Comput. Geom., 1:219-227, 1986. (ref: p. 140)
[GP93] J. E. Goodman and R. Pollack. Allowable sequences and order types in discrete and computational geometry. In J. Pach, editor, New Trends in Discrete and Computational Geometry, volume 10 of Algorithms and Combinatorics, pages 103-134. Springer, Berlin etc., 1993. (ref: p. 220)
[GPS90] J. E. Goodman, R. Pollack, and B. Sturmfels. The intrinsic spread of a configuration in Rd. J. Amer. Math. Soc., 3:639-651, 1990. (ref: p. 138)
[GPW93] J. E. Goodman, R. Pollack, and R. Wenger. Geometric transversal theory. In J. Pach, editor, New Trends in Discrete and Computational Geometry, volume 10 of Algorithms and Combinatorics, pages 163-198. Springer, Berlin etc., 1993. (ref: p. 262)
[GPW96] J. E. Goodman, R. Pollack, and R. Wenger. Bounding the number of geometric permutations induced by k-transversals. J. Combin. Theory Ser. A, 75:187-197, 1996. (ref: p. 220)
[GPWZ94] J. E. Goodman, R. Pollack, R. Wenger, and T. Zamfirescu. Arrangements and topological planes. Amer. Math. Monthly, 101(10):866-878, 1994. (ref: p. 136)
[Gro56] A. Grothendieck. Sur certaines classes de suites dans les espaces de Banach et le theoreme de Dvoretzky-Rogers. Bol. Soc. Math. Sao Paulo, 8:81-110, 1956. (ref: p. 352)
[Gro98] M. Gromov. Metric Structures for Riemannian and non-Riemannian Spaces. Birkhauser, Basel, 1998. (ref: p. 336)
[GRS97] Y. Gordon, S. Reisner, and C. Schütt. Umbrellas and polytopal approximation of the Euclidean ball. J. Approximation Theory, 90(1):9-22, 1997. Erratum ibid. 95:331, 1998. (ref: p. 321)
[Grü60] B. Grünbaum. Partitions of mass-distributions and of convex bodies by hyperplanes. Pac. J. Math., 10:1257-1267, 1960. (ref: p. 308)
[Grü67] B. Grünbaum. Convex Polytopes. John Wiley & Sons, New York, NY, 1967. (refs: pp. 85, 114)
[Grü72] B. Grünbaum. Arrangements and Spreads. Regional Conf. Ser. Math. American Mathematical Society, Providence, RI, 1972. (ref: p. 128)
[Gru93] P. M. Gruber. Geometry of numbers. In P. M. Gruber and J. M. Wills, editors, Handbook of Convex Geometry (Vol. B), pages 739-763. North-Holland, Amsterdam, 1993. (ref: p. 20)
[Gup00] A. Gupta. Embedding tree metrics into low dimensional Euclidean spaces. Discrete Comput. Geom., 24:105-116, 2000. (ref: p. 393)
[GW93] P. M. Gruber and J. M. Wills, editors. Handbook of Convex Geometry (volumes A and B). North-Holland, Amsterdam, 1993. (refs: pp. viii, 8, 85, 314, 320)
[GW95] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42:1115-1145, 1995. (ref: p. 384)
[Har66] L. H. Harper. Optimal numberings and isoperimetric problems on graphs. J. Combin. Theory, 1:385-393, 1966. (ref: p. 336)
[Har79] H. Harborth. Konvexe Fünfecke in ebenen Punktmengen. Elem. Math., 33:116-118, 1979. (ref: p. 37)
[Has97] J. Håstad. Some optimal inapproximability results. In Proc. 29th Annual ACM Symposium on Theory of Computing, pages 1-10, 1997. (ref: p. 384)
[Hoc96] D. Hochbaum, editor. Approximation Algorithms for NP-hard Problems. PWS Publ. Co., Florence, Kentucky, 1996. (ref: p. 236)
[Hor83] J. D. Horton. Sets with no empty convex 7-gons. Canad. Math. Bull., 26:482-484, 1983. (ref: p. 37)
[HS86] S. Hart and M. Sharir. Nonlinearity of Davenport-Schinzel sequences and of generalized path compression schemes. Combinatorica, 6:151-177, 1986. (refs: pp. 173, 175)
[HS94] D. Halperin and M. Sharir. New bounds for lower envelopes in three dimensions, with applications to visibility in terrains. Discrete Comput. Geom., 12:313-326, 1994. (refs: pp. 189, 192)
[HS95] D. Halperin and M. Sharir. Almost tight upper bounds for the single cell and zone problems in three dimensions. Discrete Comput. Geom., 14:385-410, 1995. (ref: p. 193)
[HW87] D. Haussler and E. Welzl. Epsilon-nets and simplex range queries. Discrete Comput. Geom., 2:127-151, 1987. (refs: pp. 68, 242, 254)
[IM98] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proc. 30th Annual ACM Symposium on Theory of Computing, pages 604-613, 1998. (ref: p. 361)
[Ind01] P. Indyk. Algorithmic applications of low-distortion embeddings. In Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001. (refs: pp. 357, 361, 398)
[JL84] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189-206, 1984. (ref: p. 361)
[JM94] S. Jadhav and A. Mukhopadhyay. Computing a centerpoint of a finite planar set of points in linear time. Discrete Comput. Geom., 12:291-312, 1994. (ref: p. 16)
[Joh48] F. John. Extremum problems with inequalities as subsidiary conditions. In Studies and Essays, presented to R. Courant on his 60th birthday, January 8, 1948, pages 187-204. Interscience Publishers, Inc., New York, N. Y., 1948. Reprinted in: J. Moser (editor): Fritz John, Collected Papers, Volume 2, Birkhauser, Boston, Massachusetts, 1985, pages 543-560. (ref: p. 327)
[JSV01] M. Jerrum, A. Sinclair, and E. Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with non-negative entries. In Proc. 33rd Annu. ACM Symposium on Theory of Computing, pages 712-721, 2001. Also available in Electronic Colloquium on Computational Complexity, Report TR00-079, http://eccc.uni-trier.de/eccc/. (ref: p. 322)
[Kai97] T. Kaiser. Transversals of d-intervals. Discrete Comput. Geom., 18:195-203, 1997. (ref: p. 262)
[Kal84] G. Kalai. Intersection patterns of convex sets. Israel J. Math., 48:161-174, 1984. (ref: p. 197)
[Kal86] G. Kalai. Characterization of f-vectors of families of convex sets in Rd. II: Sufficiency of Eckhoff's conditions. J. Combin. Theory, Ser. A, 41:167-188, 1986. (ref: p. 197)
[Kal88] G. Kalai. A simple way to tell a simple polytope from its graph. J. Combin. Theory, Ser. A, 49(2):381-383, 1988. (ref: p. 93)
[Kal91] G. Kalai. The diameter of graphs of convex polytopes and f-vector theory. In Applied Geometry and Discrete Mathematics (The Victor Klee Festschrift), DIMACS Series in Discr. Math. and Theoret. Comput. Sci. Vol. 4, pages 387-411. Amer. Math. Soc., Providence, RI, 1991. (ref: p. 104)
[Kal92] G. Kalai. A subexponential randomized simplex algorithm. In Proc. 24th Annu. ACM Sympos. Theory Comput., pages 475-482, 1992. (ref: p. 93)
[Kal97] G. Kalai. Linear programming, the simplex algorithm and simple polytopes. Math. Program., 79B:217-233, 1997. (ref: p. 93)
[Kal01] G. Kalai. Combinatorics with a geometric flavor: Some examples. In Visions in Mathematics Towards 2000 (GAFA, special volume), part II, pages 742-792. Birkhauser, Basel, 2001. (ref: p. 204)
[Kan96] G. Kant. Drawing planar graphs using the canonical ordering. Algorithmica, 16:4-32, 1996. (ref: p. 94)
[Kar01] Gy. Karolyi. Ramsey-remainder for convex sets and the Erdos-Szekeres theorem. Discrete Applied Math., 109:163-175, 2001. (ref: p. 33)
[Kat78] M. Katchalski. A Helly type theorem for convex sets. Can. Math. Bull., 21:121-123, 1978. (ref: p. 13)
[KGT01] D. J. Kleitman, A. Gyarfas, and G. Toth. Convex sets in the plane with three of every four meeting. Combinatorica, 21(2):221-232, 2001. (ref: p. 258)
[Kha89] L. G. Khachiyan. Problems of optimal algorithms in convex programming, decomposition and sorting (in Russian). In Yu. I. Zhuravlev, editor, The Computer and Choice Problems, pages 161-205. Nauka, Moscow, 1989. (ref: p. 308)
[Kir03] P. Kirchberger. Über Tschebyschefsche Annäherungsmethoden. Math. Ann., 57:509-540, 1903. (ref: p. 13)
[Kis68] S. S. Kislitsyn. Finite partially ordered sets and their corresponding permutation sets (in Russian). Mat. Zametki, 4:511-518, 1968. English translation in Math. Notes 4:798-801, 1968. (ref: p. 308)
[KK95] J. Kahn and J.-H. Kim. Entropy and sorting. J. Assoc. Comput. Machin., 51:390-399, 1995. (ref: p. 309)
[KL79] M. Katchalski and A. Liu. A problem of geometry in Rn. Proc. Amer. Math. Soc., 75:284-288, 1979. (ref: p. 197)
[KL91] J. Kahn and N. Linial. Balancing extensions via Brunn-Minkowski. Combinatorica, 11(4):363-368, 1991. (ref: p. 308)
[Kla92] M. Klazar. A general upper bound in extremal theory of sequences. Comment. Math. Univ. Carol., 33:737-746, 1992. (ref: p. 176)
[Kla99] M. Klazar. On the maximum length of Davenport-Schinzel sequences. In R. Graham et al., editors, Contemporary Trends in Discrete Mathematics (DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 49), pages 169-178. Amer. Math. Soc., Providence, RI, 1999. (ref: p. 176)
[Kla00] M. Klazar. The Füredi-Hajnal conjecture implies the Stanley-Wilf conjecture. In D. Krob et al., editors, Formal Power Series and Algebraic Combinatorics (Proceedings of the 12th FPSAC conference, Moscow, June 25-30, 2000), pages 250-255. Springer, Berlin etc., 2000. (ref: p. 177)
[Kle53] V. Klee. The critical set of a convex body. Amer. J. Math., 75:178-188, 1953. (ref: p. 12)
[Kle64] V. Klee. On the number of vertices of a convex polytope. Canadian J. Math., 16:701-720, 1964. (refs: pp. 103, 105)
[Kle89] R. Klein. Concrete and Abstract Voronoi Diagrams, volume 400 of Lecture Notes Comput. Sci. Springer-Verlag, Berlin etc., 1989. (ref: p. 121)
[Kle97] J. Kleinberg.
Two algorithms for nearest-neighbor search in high dimension. In Proc. 29th Annu. ACM Sympos. Theory Cornput., pages 599-608, 1997. (ref: p. 361) [KLL88] R. Kannan, A. K. Lenstra, and L. Lovasz. Polynomial factor­ ization and nonrandornness of bits of algebraic and some tran­ scendental numbers. Math. Comput., 50(181):235-250, 1988. (ref: p. 26) [KLMR98] L. E. Kavraki, J .-C. Latombe, R. Motwani, and P. Raghavan. Randomized query processing in robot path planning. J. Com­ put. Syst. Sci., 57:50-60, 1998. (ref: p. 250) [KLPS86] K. Kedem, R. Livne, J. Pach, and M. Sharir. On the union of Jordan regions and collision-free translational motion amidst polygonal obstacles. Discrete Comput. Geom., 1:59-71, 1986. (ref: p. 194) [KLS97} R. Kannan, L. Lovasz, and M. Simonovits. Random walks and an O(n5) volume algorithm for convex bodies. Random Struc. Algo., 11:1--50, 1997. (ref: p. 322) [KM97a] G. Kalai and J. Matousek. Guarding galleries where every point sees a large area. Israel J. Math, 101:125-140, 1997. (refs: pp. 235, 250) [KM97b] M. Karpinski and A. Macintyre. Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. J. Syst. Comput. Sci., 54(1 ):169-176, 1997. (ref: p. 250) [Koc94] M. Kochol. Constructive approximation of a ball by polytopes. Math. Slovaca, 44(1):99-105, 1994. (ref: p. 324) [Kol01] V. Koltun. Almost tight upper bounds for vertical decomposi­ tions in four dimensions. In Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001. (ref: p. 162) [Kor73] J. Korner. Coding of an information source having ambigu­ ous alphabet and the entropy of graphs. In Inform. The­ ory, statist. Decision Funct., Random Processes; Transact. 6th Prague Conf. 1971, pages 411-425, 1973. (ref: p. 309) [KP01] V. Kaibel and M. E. Pfetsch. Computing the face lattice of a polytope from its vertex-facet incidences. Technical Report, Inst. fiir Mathematik, TU Berlin, 2001. (ref: p. 
105) [KPR93] [KPT01) [KPW92) [KRS96) [KS84) [KS96] [KST54] [KT99] [KV94) [KV01) [KZOO] (Lar72] [Lat91] Bibliography 441 P. Klein, S. Plotkin, and S. Rao. Excluded minors, network decomposition, and multicommodity flow. In Proc. 25th Annual ACM Symposium on the Theory of Computing, pages 682-690, 1993. (ref: p. 393) Gy. Karolyi, J. Pach, and G. T6th. A modular version of the Erdos-Szekeres theorem. Studia Mathem,atica Hungarica, 2001. In press. (ref: p. 38) J. Koml6s, J. Pach, and G. Woeginger. Almost tight bounds for €-nets. Discrete Comput. Geom., 7:163-173, 1992. (ref: p. 243) J. Kollar, L. R6nyai, and T. Szabo. Norm-graphs and bipartite Thran numbers. Combinatorica, 16(3):399-406, 1996. (ref: p. 68) J. Kahn and M. Saks. Balancing poset extensions. Order, 1:113-126, 1984. (ref: p. 308) J. Koml6s and M. Simonovits. Szemeredi's regularity lemma and its applications in graph theory. In D. Miklos et al. edi­ tors, Combinatorics, Paul Erdos Is Eighty., Vol. 2, pages 295-352. Janos Bolyai Mathematical Society, Budapest, 1996. (ref: p. 226) T. Kovari, V. S6s, and P. Thran. On a problem of k. zarankiewicz. Coll. Math., 3:50-57, 1954. (ref: p. 68) N. Katoh and T. Tokuyama. Lovasz's lemma for the three­ dimensional K -level of concave surfaces and its applications. In Proc. 40th IEEE Symposium on Foundations of Computer Science, pages 389-398, 1999. (ref: p. 271) M. Klazar and P. Valtr. Generalized Davenport-Schinzel se­ quences. Combinatorica, 14:463-476, 1994. (ref: p. 176) Gy. Karolyi and P. Valtr. Point configurations in d-space with­ out large subsets in convex position. Discrete Comput. Geom., 2001. To appear. (ref: p. 33) G. Kalai and G. M. Ziegler, editors. Polytopes-Combinatorics and Computation. DMV-seminar Oberwolfach, Germany, November 1997. Birkhauser, Basel, 2000. (ref: p. 85) D. G. Larman. On sets projectively equivalent to the vertices of a convex polytope. Bull. Lond. Math. Soc., 4:6-12, 1972. (ref: p. 206) J.-C. Latombe. Robot Motion Planning. 
Kluwer Academic Pub­ lishers, Boston, 1991. (ref: p. 122} 442 [LedOI] [Lee82] [Lee91] [Lei83] [Lei84] [Len83] [Lev26J [Lev 51] [Lin84] [Lin92] [LLL82] [LLR95] [LM75] Bibliography M. Ledoux. The Concentration of Measure Phenomenon, vol­ ume 89 of Mathematical Surveys and Monographs. An1er. Math. Soc., Providence, RI, 2001. (refs: pp. 336, 340) D. T. Lee. On k-nearest neighbor Voronoi diagrams in the plane. IEEE Trans. Comput., C-31:478--487, 1982. (ref: p. 122) C. W. Lee. Winding numbers and the generalized lower-bound conjecture. In J.E. Goodman, R. Pollack, and W. Steiger, edi­ tors, Computational Geometry: Papers from the DIMACS spe­ cial year, DIMACS Series in Discrete Mathematics and Theo­ retical Computer Science 6, pages 209-219. Amer. Math. Soc., 1991. (ref: p. 280) F. T. Leighton. Complexity issues in VLSI. Iv1IT Press, Canl­ bridge, MA, 1983. (ref: p. 57) F. T. Leighton. New lower bound techniques for VLSI. Math. Systems Theory, 17:47-70, 1984. (ref: p. 56) H. W. Lenstra. Integer programming with a fixed number of variables. Math. Oper. Res., 8:538-548, 1983. (ref: p. 24) F. Levi. Die Teilung der projektiven Ebene durch Gerade oder Pseudogerade. Ber. Math.-Phys. Kl. sachs. Akad. Wiss. Leipzig, 78:256--267, 1926. (ref: p. 136) P. Levy. Problemes concrets d 'analyse fonctionelle. Gauthier Villars, Paris, 1951. (ref: p. 340) N. Linial. The information-theoretic bound is good for merging. SIAM J. Comput., 13:795-801, 1984. (ref: p. 308) J. Lindenstrauss. Almost spherical sections; their existence and their applications. In J ahresbericht der D MV, Jubilaeumstag., 100 Jahre DMV, Bremen/Dtschl. 1990, pages 39-61, 1992. (refs: pp. 336, 352) A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz. Factoring polynomials with rational coefficients. Math. Ann., 261:514-534, 1982. (ref: p. 25) N. Linial, E. London, and Yu. Rabinovich. The geometry of graphs and some its algorithmic applications. Combinatorica, 15:215-245, 1995. (refs: pp. 379, 380, 392, 400) D. Larman and P. 
I\1ani. Almost ellipsoidal sections and projec­ tions of convex bodies. Math. Proc. Camb. Philos. Soc., 77:529-546, 1975. (ref: p. 353) [LM93] [LMOO] [LMN01] [LMS94] [Lov] [Lov71] [Lov72] [Lov74] [Lov86] [Lov93] [LP86] (LPS88] Bibliography 443 J. Lindenstrauss and V. D. Milman. The local theory of normed spaces and its applications to convexity. In P. M. Gruber and J. M. Wills, editors, Handbook of Convex Geometry, pages 1149-1220. North-Holland, Amsterdam, 1993. (refs: pp. 327, 336) N. Linial and A. Magen. Least-distortion Euclidean em beddings of graphs: Products of cycles and expanders. J. Combin. Theor·y Se'r. B, 79:157-171, 2000. (ref: p. 380) N. Linial, A. Magen, and N. Naor. Euclidean ernbeddings of regular graphs · ·· the girth lower bound. Geometric and Func­ tional Analysis, 2001. In press. (ref: p. 380) C.-Y. Lo, J. I\1atousek, and W. L. Steiger. Algorithms for ham­ sandwich cuts. Discrete Comput. Geom., 11:433, 1994. (ref: p. 16) J.-P. Laumond and l\1. H. Overmars, editors. Algorithms for Robotic Motion and ManipuJation. A. K. Peters, Wellesley, MA, 1996. (ref: p. 122) L. Lovasz. Semidefinite programs and combinatorial optimiza­ tion. In C. Linhares-Sales and B. Reed, editors, Recent Ad­ vances in Algorithmic Discrete Mathematics. Springer, Berlin etc. To appear. (refs: pp. 293, 381) L. Lovasz. On the number of halving lines. Annal. Univ. Scie. Budapest. de Rolando Eotvos Nominatae, Sectio Math., 14:107-108, 1971. (refs: pp. 269, 280) L. Lovasz. Normal hypergraphs and the perfect graph conjec­ ture. Discrete Math., 2:253-267, 1972. (ref: p. 293) L. Lovasz. Problem 206. Matematikai Lapok, 25:181, 1974. (ref: p. 198) L. Lova.sz. An Algorithmic Theory of Numbers, Graphs and Convexity. SIAM Regional Series in Applied Mathematics. SIAM, Philadelphia, 1986. (refs: pp. 24, 25) L. Lovasz. Cornbinatorial Problems and Exercises {2nd edition). Akademiai Kiad6, Budapest, 1993. (refs: pp. 235, 374) L. Lovasz and M. D. Plummer. Matching Theory, volume 29 of Ann. 
Discrete Math. North-Holland, 1986. (ref: p. 235) A. Lubotzky, R. Phillips, and P. Sarnak. Ramanujan graphs. Combinatorica, 8:261-277, 1988. (refs: pp. 367, 382) 444 [LR97] [LS02] [LUW95] [LUW96] (LW88] [Mac50] [Mar88] [Mat90a] [Mat90b] [Mat92] [Mat96a] [Mat96b] [Mat97) Bibliography M. Laczkovich and I. Ruzsa. The number of homothetic sub­ sets. In R. L. Graham and J. Nesetfil, editors, The Mathematics of Paul Erdos, Vol. II, volume 14 of Algorithms and Combina­ torics, pages 294-302. Springer, Berlin etc., 1997. (ref: p. 47) N. Linial and M. Saks. The Euclidean distortion of complete binary trees-An elementary proof. Discr. Comput. Geom., 2002. To appear. (ref: p. 393) F. Lazebnik, V. A. Ustimenko, and A. J. Woldar. A new series of dense graphs of high girth. Bull. Amer. Math. Soc., New Ser., 32(1):73-79, 1995. (ref: p. 367) F. Lazebnik, V. A. Ustimenko, and A. J. Woldar. A characteri­ zation of the components of the graphs D(k, q). Discrete Math., 157(1-3):271-283, 1996. (ref: p. 367) N. Linial L. Lovasz and A. Wigderson. Rubber bands, convex embeddings and graph connectivity. Combinatorica, 8:91-102, 1988. (ref: p. 92} A.M. Macbeath. A compactness theorem for affine equivalence­ classes of convex regions. Canad. J. Math, 3:54-61, 1950. (ref: p. 321) G. A. Margulis. Explicit group-theoretic constructions of com­ binatorial schemes and their application to the design of ex­ panders and concentrators (in Russian). Probl. Peredachi Inf., 24(1):51-60, 1988. English translation: Probl. lnf. Transm. 24, No.1, 39-46 (1988). (ref: p. 382) J. Matousek. Construction of €-nets. Discrete Comput. Geom., 5:427-448, 1990. (refs: pp. 68, 75) J. Matousek. Bi-Lipschitz embeddings into low-dimensional Eu­ clidean spaces. Comment. Math. Univ. Carolinae, 31:589-600, 1990. (ref: p. 368) J. Matousek. Efficient partition trees. Discrete Com put. Geom., 8:315-334, 1992. (ref: p. 69) J. Matousek. Note on the colored Tverberg theorem. J. Com­ bin. Theory Ser. B, 66:146-151, 1996. (ref: p. 
205) J. Matousek. On the distortion required for embedding finite metric spaces into normed spaces. Israel J. Math., 93:333-344, 1996. (refs: pp. 140, 367, 388) J. Matousek. On embedding expanders into fp spaces. Israel J. Math., 102:189-197, 1997. (ref: p. 379) [Mat98] [Mat99a] [Mat99b] [Mat01) [McM70] (McM93] [McM96] [Mic98] [Mil64] [Mil69] [Mil71] [Mil98] [Min96] [Mne89] Bibliography 445 J. Matousek. On constants for cuttings in the plane. Discrete Comput. Geom., 20:427-448, 1998. (ref: p. 75) J. Matousek. Geometric Discrepancy (An Illustrated Guide}. Springer-Verlag, Berlin, 1999. (ref: p. 243) J. Matousek. On embedding trees into uniforntly convex Banach spaces. Israel J. Math, 114:221-237, 1999. (ref: p. 393) J. Matousek. A lower bound for weak epsilon-nets in high di­ mension. Discrete Comput. Geom., 2001. In press. (ref: p. 254) P. McMullen. The maximal number of faces of a convex poly­ tope. Mathematika, 17:179-184, 1970. (ref: p. 103) P. McMullen. On simple polytopes. Invent. Math., 1 13:419-444, 1993. (ref: p. 105) P. McMullen. Weights on polytopes. Discrete Comput. Geom., 15:363-388, 1996. (ref: p. 105) D. Micciancio. The shortest vector in a lattice is hard to approx­ imate within some constants. In Proc. 39th IEEE Symposium on Foundations of Computer Science, pages 92-98, 1998. (ref: p. 25) J. W. Milnor. On the Betti numbers of real algebraic varieties. Proc. Amer. Math. Soc., 15:275-280, 1964. (ref: p. 135) V. D. Milman. Spectrum of continuous bounded functions on the unit sphere of a Banach space. Funct. Anal. Appl., 3:67-79, 1969. (refs: pp. 341, 348) V. D. Milman. New proof of the theorem of Dvoretzky on sec­ tions of convex bodies. Funct. Anal. Appl., 5:28-37, 1971. (refs: pp. 341, 348, 353) V. D. Milman. Surprising geometric phenomena in high­ dimensional convexity theory. In A. Balog et al., editors, Eu­ ropean Congress of Mathematics {ECM}, Budapest, Hungary, July 22-26, 1996. Volume II, pages 73-91. Birkhauser, Basel, 1998. (refs: pp. 
313, 321, 336) H. Minkowski. Geometrie der Zahlen. Teubner, Leipzig, 1896. Reprinted by Johnson, New York, NY 1968. (refs: pp. 20, 300) M. E. Mnev. The universality theorems on the classification problem of configuration v arieties and convex polytopes vari­ eties. In 0. Y. Viro, editor, Topology and Geometry-Rohlin Seminar, volume 1346 of Lecture Notes Math., pages 527-544. Springer, Berlin etc., 1989. (ref: p. 138) 446 Bibliography [Mor94] M. Morgenstern. Existence and explicit constructions of q + 1 regular Ramanujan graphs for every prime power q. J. Combin. Theory, Ser. B, 62(1):44-62, 1994. (ref: p. 382) [Mos52] L. Moser. On the different distances determined by n points. Amer. Math. Monthly, 59:85-91, 1952. (ref: p. 45) [MPS+94] J. Matousek, J. Pach, M. Sharir, S. Sifrony, and E. Welzl. Fat triangles determine linearly many holes. SIAM J. Comput., 23:154-169, 1994. (ref: p. 194) [MS71] P. McMullen and G. C. Shephard. Convex Polytopes and the Upper Bound Conjecture, volume 3 of Lecture Notes. Cambridge University Press, Can1bridge, England, 1971. (refs: pp. 85, 114) [l\IIS86] V. D. Milman and G. Schechtman. Asymptotic Theory of Finite Dimensional Normed Spaces. Lecture Notes in Math. 1200. Springer-Verlag, Berlin etc., 1986. (refs: pp. 300, 335, 336, 340, 346, 353, 361) [MSOO] W. Morris and V. Soltan. The Erdos-Szekeres problem on points in convex position-a survey. Bull. Amer. Math. Soc., New Ser., 37( 4):437-458, 2000. (ref: p. 32) [MSW96] J. Matousek, M. Sharir, and E. Welzl. A subexponential bound for linear programming. Algoritmica, 16:498-516, 1996. (refs: pp. 94, 327) [Mul93a] K. Mulmuley. Computational Geometry: An Introduction Through Randomized Algorithms. Prentice Hall, Englewood Cliffs, NJ, 1993. (refs: pp. 161, 162) [Mul93b] K. Mulmuley. Dehn-Sommerville relations, upper bound the­ orem, and levels in arrangements. In Proc. 9th Annu. ACM Sympos. Comput. Geom., pages 240-246, 1993. (ref: p. 280) [NarDO] W. Narkiewicz. 
The Development of Prim,e Number Theory. Springer, Berlin etc., 2000. (ref: p. 54) [NPPS01] E. Nevo, J. Pach, R. Pinchasi, and M. Sharir. Lenses in ar­ rangements of pseudocircles and their applications. Discrete Comput. Geom., 2001. In press. (ref: p. 271) [NR01] I. Newman and Yu. Rabinovich. A lower bound on the dis­ tortion of embedding planar metrics into Euclidean space. Manuscript, Computer Science Department, Univ. of Haifa; submitted to Discrete Comput. Geom., 2001. (ref: p. 372) [NykOO] H. Nyklova. Almost empty convex polygons. KAM-DIMATIA Series 498-2000 (technical report), Charles University, Prague, 2000. (ref: p. 39) [OBS92] [OP49] [OS94] [OT91] [OY85] [PA95] [Pac98] [Pac99J [Pin02] [Pis89] [P6r02] [PPOl] [PR93] Bibliography 447 A. Okabe, B. Boots, and K. Sugihara. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. John Wiley & Sons, Chichester, UK, 1992. (ref: p. 120) 0. A. Oleinik and I. B. Petrovskii. On the topology of of real algebraic surfaces (in Russian). Izv. Akad. Nauk SSSR, 13:389-402, 1949. (ref: p. 135) S. Onn and B. Sturmfels. A quantitative Steinitz' theorem. Beitriige zur· Algebr-a und Geometr-ie / Contr·ibutions to Algebr-a and Geometry, 35:125-129, 1994. (ref: p. 94) P. Orlik and H. Terao. Ar1nngements of Hyperplanes. Springer­ Verlag, Berlin etc., 1991. (ref: p. 129) , C. O'Dunlaing and C. K. Yap. A "retraction" method for plan-ning the motion of a disk. J. Algorithms, 6:104-111, 1985. (ref: p. 122) J. Pach and P. K. Agarwal. Combinatorial Geometry. John Wiley & Sons, New York, NY, 1995. (refs: pp. viii, 20, 24, 44, 45, 50, 53, 56, 57, 92, 243) J. Pach. A Tverberg-type result on multicolored simplices. Comput. Geom.: Theor. Appl., 10:71-76, 1998. (refs: pp. 220, 226, 229) J. Pach. Geometric graph theory. In J. D. Lamb et al., editors, Surveys in Combinatorics. Proceedings of the 1 7th British com­ binatorial conference, Univer·sity of Kent at Canter·bur-y, UK, 1999, Lond. Math. Soc. Lect. Note Ser. 
267, pages 167-200. Cambridge University Press, 1999. (ref: p. 56) R. Pinchasi. Gallai-Sylvester theorem for pairwise intersecting unit circles. Discrete Comput. Geom., 2002. To appear. (ref: p. 44) G. Pisier. The Volume of Convex Bodies and Banach Space Ge­ ometry. Cambridge University Press, Cambridge, 1989. (refs: pp. 315, 335, 336, 353, 361) A. P6r. A partitioned version of the Erdos-Szekeres theorem. Discrete Comput. Geom., 2002. To appear. (ref: p. 220) J. Pach and R. Pinchasi. On the number of balanced lines. Discrete Comput. Geom., 25:611-628, 2001. (ref: p. 280) R. Pollack and M.-F. Roy. On the number of cells defined by a set of polynomials. C. R. Acad. Sci. Paris, 316:573-577, 1993. (ref: p. 135) 448 [PS89] [PS92] [PS98a] [PS98b] (PSOl] [PSS88] [PSS92J [PSS96] [PSS01] [PSTOO] [PT97] [PT98] [PTOO] [Rad21] Bibliography J. Pach and M. Sharir. The upper envelope of piecewise lin­ ear functions and the boundary of a region enclosed by convex plates: combinatorial analysis. Discrete Com put. Geom., 4:291-309, 1989. (ref: p. 186} J. Pach and M. Sharir. Repeated angles in the plane and related problems. J. Combin. Theory Ser. A, 59:12-22, 1992. (refs: pp. 46, 49, 50} J. Pach and M. Sharir. On the number of incidences between points and curves. Combinatorics, Probability, and Computing, 7:121-127, 1998. (refs: pp. 46, 49, 64) J. Pach and J. Solymosi. Canonical theorems for convex sets. Discrete Comput. Geom., 19:427-435, 1998. (ref: p. 220) J. Pach and J. Solymosi. Crossing patterns of segments. J. Combin. Theory Ser. A, 96:316-325, 2001. (refs: pp. 223, 227) R. Pollack, M. Sharir, and S. Sifrony. Separating two sim­ ple polygons by a sequence of translations. Discrete Comput. Geom., 3:123-136, 1988. (ref: p. 176) J. Pach, W. Steiger, and E. Szemeredi. An upper bound on the number of planar k-sets. Discrete Comput. Geom., 7:109-123, 1992. (ref: p. 269) J. Pach, F. Shahrokhi, and M. Szegedy. Applications of the crossing number. Algorithmica, 16:111-117, 1996. 
(ref: p. 57) J. Pach, I. Safruti, and M. Sharir. The union of cubes in three dimensions. In Proc. 1 7th Annu. ACM Sympos. Com­ put. Geom., pages 19-28, 2001. (ref: p. 194) J. Pach, J. Spencer, and G. T6th. New bounds for crossing numbers,. Discrete Comput. Geom., 24:623--644, 2000. (refs: pp. 57, 58) J. Pach and G. T6th. Graphs drawn with few crossings per edge. Combinatorica, 17:427-439, 1997. (ref: p. 56) J. Pach and G. Toth. A generalization of the Erdos-Szekeres theorem to disjoint convex sets. Discrete Comput. Geom., 19(3):437-445, 1998. (ref: p. 33} J. Pach and G. T6th. Which crossing number is it anyway? J. Combin. Theory Ser. B, 80:225-246, 2000. (ref: p. 58) J. Radon. Mengen konvexer Korper, die einen gemeinsamen Punkt enthalten. Math. Ann., 83:1 13-115, 1921. (ref: p. 12} [Rad47] [Rao99] [RBG01] [Rea68] [RG97] [RG99] [Rou01a] [Rou01b] [RS64] [Rud91] (Rnz94] [RVWOO] Bibliography 449 R. Rado. A theorem on general measure. J. London Math. Soc., 21:291-300, 1947. (ref: p. 16) S. Rao. Small distortion and volume respecting embeddings for planar and Euclidean metrics. In Proc. 15th Annual ACM Symposium on Comput. Geometry, pages 300-306, 1999. (refs: pp. 393, 398) L. R6nyai, L. Babai, and M. K. Ganapathy. On the number of zero-patterns of a sequence of polynontials. J. A mer·. Math. Soc., 14(3):717-735 (electronic), 2001. (ref: p. 136) J. R. Reay. An extension of Radon's theorem. Illinois J. Math˂ 12:184-189, 1968. (ref: p. 204) J. Richter-Gebert. Realization Spaces of Polytopes. Lecture Notes in Mathematics 1643. Springer, Berlin, 1997. (refs: pp. 92, 94, 139) J. Richter-Gebert. The universality theorems for oriented ma­ troids and polytopes. In B. Chazelle et al., editors, Advances in Discrete and Computational Geometry, Contemp. Math. 223, pages 269-292. Amer. Math. Soc., Providence, RI, 1999. (refs: pp. 94, 138, 139) J.-P. Roudneff. Partitions of points into simplices with k­ dimensional intersection. Part I: The conic Tverberg's theorem. European J. 
Combinatorics, 22:733-743, 2001. (ref: p. 204) J.-P. Roudneff. Partitions of points into simplices with k­ dimensional intersection. Part II: Proof of Reay's conjecture in dimensions 4 and 5. European J. Combinatorics, 22:745-765, 2001. (ref: p. 204) .. A. Renyi and R. Sulanke. Uber die konvexe Hiille von n zufallig gewahlten Punkten II. Z. Wahrsch. V erw. Gebiete, 3:138-147, 1964. (ref: p. 328) W. Rudin. Functional Analysis {2nd edition). McGraw-Hill, New York, 1991. (ref: p. 8) I. Z. R,uzsa. Generalized arithmetical progressions and sumsets. Acta Math. Hung., 65(4):379-388, 1994. (ref: p. 47) 0. Reingold, S. P. Vadhan, and A. Wigderson. Entropy waves, the zig-zag graph product, and new constant-degree expanders and extractors. In Proc. 41st IEEE Symposium on Foundations of Computer Science, pages 3-13, 2000. (refs: pp. 381, 382) 450 [SA95] [Sal75] [Sar91] [Sar92] [Sau72] [SchOl] [Sch11] [Sch38] [Sch48] [Sch86] [Sch87] [Sch90] [Sch93] Bibliography M. Sharir and P. K. Agarwal. Davenport-Schinzel Sequences and Their Geometric Applications. Cambridge University Press, Cambridge, 1995. (refs: pp. 168, 172, 173, 176, 181, 191) G.T. Sallee. A Helly-type theorem for widths. In Geom. Metric Lin. Spaces, Proc. Conf. East Lansing 1974, Lect. Notes Math. 490, pages 227-232. Springer, Berlin etc., 1975. (ref: p. 13) K. Sarkaria. A generalized van Kampen-Flores theorem. Proc. Amer·. Math. Soc., 11:559-565, 1991. (ref: p. 368) K. Sarkaria. Tverberg's theorem via number fields. Israel J. Math., 79:317, 1992. (ref: p. 204) N. Sauer. On the density of families of sets. Journal of Combi­ natorial Theory Ser. A, 13:145-14 7, 1972. (ref: p. 242) L. Schlafli. Theorie der vielfachen Kontinuitat. Denkschriften der· Schweizer·ichen naturforschender Gesellschaft, 38:1-237, 1901. Written in 1850-51. Reprinted in Ludwig Schlafii, 1814-1895, Gesammelte mathematische Abhandlungen, Birkhauser, Basel 1950. (ref: p. 85) P. H. Schoute. 
Analytic treatment of the polytopes regularly de­ rived from the regular polytopes. Verhandelingen der Koning­ lijke Akademie van Wetenschappen te Amsterdam, 11(3), 1911. (ref: p. 85) I. J. Schoenberg. Metric spaces and positive definite functions. Trans. Amer. Math. Soc., 44:522-53, 1938. (ref: p. 357) E. Schmidt. Die Brunn-Minkowski Ungleichung. Math. Nachrichten, 1:81-157, 1948. (ref: p. 336) A. Schrijver. Theory of Linear and Integer Programming. Wiley-lnterscience, New York, NY, 1986. (refs: pp. 8, 24, 25, 85) C. P. Schnorr. A hierarchy of polynomial time lattice basis re­ duction algorithms. Theor. Comput. Sci., 53:201-224, 1987. (ref: p. 25) W. Schnyder. Embedding planar graphs on the grid. In Proc. 1st ACM-SIAM Sympos. Discrete Algorithms, pages 138-148, 1990. (ref: p. 94) R. Schneider. Convex Bodies: The Brunn-Minkowski Theory, volume 44 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge, 1993. (ref: p. 301) [Sei91] [Sei95] [Sei97] [Sha94] [Sha01] [She72] [Sho91] [Sib81] [Sie89] [SST84] [SST01] Bibliography 451 R. Seidel. Small-dimensional linear programming and convex hulls made easy. Discrete Comput. Geom., 6:423-434, 1991. (ref: p. 105) R. Seidel. The upper bound theorem for polytopes: an easy proof of its asymptotic version. Comput. Geom. Theory Appl., 5:115-116, 1995. (ref: p. 104) R. Seidel. Convex hull computations. In J. E. Goodman and J. O'Rourke, editors, Handbook of Discr·ete and Computational Geometry, chapter 19, pages 361-376. CRC Press LLC, Boca Raton, FL, 1997. (ref: p. 105) M. Sharir. Almost tight upper bounds for lower envelopes in higher dimensions. Discrete Comput. Geom., 12:327-345, 1994. (ref: p. 192) M. Sharir. The Clarkson-Shor technique revisited and ex­ tended. In Proc. 1 7th A nnu. A CM Sympos. Com put. Geom., pages 252-256, 2001. (refs: pp. 145, 146) S. Shelah. A combinatorial problem, stability and order for models and theories in infinitary languages. Pacific J. Math., 41:247-261, 1972. 
(ref: p. 242) P. W. Shor. Stretchability of pseudolines is NP-hard. In P. Gritzrnan and B. Sturrnfels, editors, Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift, volume 4 of DIMACS Series in Discrete Mathematics and Theoretical Com­ puter Science, pages 531-554. AMS Press, 1991. (ref: p. 138) R. Sibson. A brief description of natural neighbour interpola­ tion. In V. Barnet, editor, Interpreting Multivariate Data, pages 21-36. John Wiley & Sons, Chichester, 1981. (ref: p. 122) C. L. Siegel. Lectures on the Geometry of Numbers. Notes by B. Friedman. Rewritten by K omaravolu Chandrasekharan with the assistance of Rudolf Suter·. Springer-Verlag, Berlin etc., 1989. (ref: p. 20) J. Spencer, E. Szemeredi, and W. T. Trotter. Unit distances in the Euclidean plane. In B. Bollobas, editor, Graph Theory and Combinatorics, pages 293-303. Academic Press, New York, NY, 1984. (ref: p. 45) M. Sharir, S. Smorodinsky, and G. Tardos. An improved bound for k-sets in three dimensions. Discrete Comput. Geom., 26:195-204, 2001. (refs: pp. 270, 286) 452 [ST74] [ST83] [ST01J [Sta75] [Sta80] [Sta86] (Ste26] (Stel6] [Ste22] [Ste85] [SUOO] [SV94] [SW01] Bibliography V. N. Sudakov and B. S. Tsirel'son. Extremal properties of half­ spaces for spherically invariant measures (in Russian). Zap. Naucn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI}, 41:14-24, 1974. Translation in J. Soviet. Math. 9:9-18, 1978. (ref: p. 336) E. Szemeredi and W. Trotter, Jr. A combinatorial distinction between Euclidean and projective planes. European J. Combin., 4:385-394, 1983. (ref: p. 44) J. Solymosi and Cs. T6th. Distinct distances in the plane. Dis­ crete Comput. Geom., 25:629-634, 2001. (refs: pp. 45, 61) R. Stanley. The upper-bound conjecture and Cohen-Macaulay rings. Stud. Appl. Math., 54:135-142, 1975. (ref: p. 104) R. Stanley. The number of faces of a simplical convex polytope. Adv. Math., 35:236-238, 1980. (ref: p. 105) R. P. Stanley. Two poset polytopes. Discrete Comput. 
Geom., 1:9-23, 1986. (ref: p. 309) J. Steiner. Einige Gesetze iiber die Theilung der Ebene und des Raumes. J. Reine Angew. Math., 1:349-364, 1826. (ref: p. 128) E. Steinitz. Bedingt konvergente Reihen und konvexe Systeme I; II; III. J. Reine Angew. Math, 143; 144; 146:128-175; 1-40; 1-52, 1913; 1914; 1916. (ref: p. 8) E. Steinitz. Polyeder und Raumeinteilungen. Enzykl. Math. Wiss., 3:1-139, 1922. Part 3AB12. (ref: p. 92) H. Steinlein. Borsuk's antipodal theorem and its generaliza­ tions and applications: a survey. In A. Granas, editor, Methodes topologiques en analyse nonlineaire, pages 166-235. Colloq. Semin. Math. Super., Semin. Sci. OTAN (NATO Advanced Study Institute) 95, U niv. de Montreal Press, Montreal, 1985. (ref: p. 16) J.-R. Sack and J. Urrutia, editors. Handbook of Computational Geometry. North-Holland, Amsterdam, 2000. (refs: pp. viii, 162) 0. Sykora and I. Vrt'o. On VLSI layouts of the star graph and related networks. Integration, The V LSI Journal, 17(1):83-93, 1994. (ref: p. 57) M. Sharir and E. Welzl. Balanced lines, halving triangles, and the generalized lower bound theorem. In Prnc. 1 7th Annu. ACM Sympos. Comput. Geom., pages 315----318, 2001. (refs: pp. 280, 281) [SY93] [Syl93] [Sze74] [Sze78] [Sze97] [Tag96J [Tal93] [Tal95] [Tam88] [Tan84J (Tar75] [Tar95] [Tar01] [Tho65] Bibliography 453 J. R. Sangwine-Yager. Mixed volumes. In P. M. Gruber and J. M. Wills, editors, Handbook of Convex Geometry (Vol. A), pages 43-71. North-Holland, Amsterdam, 1993. (refs: pp. 300, 301) J. J. Sylvester. Mathematical question 11851. Educational Times, 59:98, 1893. (ref: p. 44) E. Szemeredi. On a problem of Davenport and Schinzel. Acta Arithmetica, 25:213-224, 1974. (ref: p. 175) E. Szemeredi. Regular partitions of graphs. In Problemes combi­ natoires et theorie des graphes, Orsay 1976, Colloq. int. CNRS No.260, pages 399-401. CNRS, Paris, 1978. (ref: p. 226) L. Szekely. Crossing numbers and hard Erdos problems in dis­ crete geometry. 
Combinatorics, Probability, and Computing, 6:353-358, 1997. (refs: pp. 44, 45, 56, 61) B. Tagansky. A new technique for analyzing substructures in arrangements of piecewise linear surfaces. Discrete Comput. Geom., 16:455-4 79, 1996. (ref: p. 186) G. Talenti. The standard isoperimetric theorem. In P. M. Gru­ ber and J. M. Wills, editors, Handbook of Convex Geometry (Vol. A), pages 73-123. North-Holland, Amsterdam, 1993. (ref: p. 336) M. Talagrand. Concentration of measure and isoperimetric in­ equalities in product spaces. Publ. Math. I.H.E.S., 81:73-205, 1995. (ref: p. 336) A. Tamir. Improved complexity bounds for center location problems on networks by using dynan1ic data structures. SIAM J. Discr. Math., 1:377-396, 1988. (ref: p. 169) M. R. Tanner. Explicit concentrators from generalized n-gons. SIAM J. Alg. Discr. Methods, 5(3):287-293, 1984. (ref: p. 381) R. E. Tarjan. Efficiency of a good but not linear set union algorithm. J. ACM, 22:215-225, 1975. (ref: p. 175) G. Tardos. Transversals of 2-intervals, a topological approach. Combinatorica, 15:123-134, 1995. (ref: p. 262) G. Tardos. On distinct sums and distinct distances. Manuscript, Renyi Institute, Budapest, 2001. (refs: pp. 45, 61, 63) R. Thorn. On the homology of real algebraic varieties (in French). In S.S. Cairns, editor, Differential and Combinato­ rial Topology. Princeton Univ. Press, 1965. (ref: p. 135) 454 [Tit59] [TJ89] [Tot65] [Tot01a] [T6t01b] [Tro92] [Tro95] [TT98] [Tut60] [TV93] [TV98] [Tve66] [Tve81] [UrrOO] Bibliography J. Tits. Sur la trialite et certains groupes qui s'en deduisent. Publ. Math. I. H. E. S., 2:13-60, 1959. (ref: p. 367) N. Tomczak-Jaegermann. Banach-Mazur Distances and Finite­ Dimensional Operator Ideals. Pitman Monographs and Surveys in Pure and Applied Mathematics 38. J. Wiley, New York, 1989. (refs: pp. 327, 353) L. Fejes T6th. Regular Figures (in German}. Akademiai Kiad6 Budapest, 1965. (ref: p. 322) Cs. T6th. The Szemen§di-Trotter theorem in the complex plane. 
Combinatorica, 2001. To appear. (ref: p. 44)

[Tot01b] G. Tóth. Point sets with many k-sets. Discrete Comput. Geom., 26:187-194, 2001. (refs: pp. 269, 276)

[Tro92] W. T. Trotter. Combinatorics and Partially Ordered Sets: Dimension Theory. Johns Hopkins Series in the Mathematical Sciences. The Johns Hopkins University Press, 1992. (ref: p. 308)

[Tro95] W. T. Trotter. Partially ordered sets. In R. L. Graham, M. Grötschel, and L. Lovász, editors, Handbook of Combinatorics, pages 433-480. North-Holland, Amsterdam, 1995. (ref: p. 308)

[TT98] H. Tamaki and T. Tokuyama. How to cut pseudo-parabolas into segments. Discrete Comput. Geom., 19:265-290, 1998. (refs: pp. 70, 270)

[Tut60] W. T. Tutte. Convex representations of graphs. Proc. London Math. Soc., 10(38):304-320, 1960. (ref: p. 92)

[TV93] H. Tverberg and S. Vrećica. On generalizations of Radon's theorem and the ham sandwich theorem. European J. Combin., 14:259-264, 1993. (ref: p. 204)

[TV98] G. Tóth and P. Valtr. Note on the Erdős-Szekeres theorem. Discrete Comput. Geom., 19(3):457-459, 1998. (ref: p. 33)

[Tve66] H. Tverberg. A generalization of Radon's theorem. J. London Math. Soc., 41:123-128, 1966. (ref: p. 203)

[Tve81] H. Tverberg. A generalization of Radon's theorem II. Bull. Aust. Math. Soc., 24:321-325, 1981. (ref: p. 204)

[Urr00] J. Urrutia. Art gallery and illumination problems. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 973-1027. North-Holland, 2000. (ref: p. 250)

[Val92a] P. Valtr. Convex independent sets and 7-holes in restricted planar point sets. Discrete Comput. Geom., 7:135-152, 1992. (refs: pp. 33, 37)

[Val92b] P. Valtr. Sets in R^d with no large empty convex subsets. Discrete Appl. Math., 108:115-124, 1992. (ref: p. 37)

[Val94] P. Valtr. Planar point sets with bounded ratios of distances. Doctoral Thesis, Mathematik, FU Berlin, 1994. (ref: p. 34)

[Val98] P. Valtr. Guarding galleries where no point sees a small area. Israel J.
Math., 104:1-16, 1998. (ref: p. 250)

[Val99a] P. Valtr. Generalizations of Davenport-Schinzel sequences. In R. Graham et al., editors, Contemporary Trends in Discrete Mathematics, volume 49 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 349-389. Amer. Math. Soc., Providence, RI, 1999. (refs: pp. 176, 177)

[Val99b] P. Valtr. On galleries with no bad points. Discrete and Computational Geometry, 21:193-200, 1999. (ref: p. 250)

[Val01] P. Valtr. A sufficient condition for the existence of large empty convex polygons. Discrete Comput. Geom., 2001. To appear. (ref: p. 38)

[VC71] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl., 16:264-280, 1971. (refs: pp. 242, 243)

[Vem98] S. Vempala. Random projection: a new approach to VLSI layout. In Proc. 39th IEEE Symposium on Foundations of Computer Science, pages 389-395, 1998. (ref: p. 397)

[Vin39] P. Vincensini. Sur une extension d'un théorème de M. J. Radon sur les ensembles de corps convexes. Bull. Soc. Math. France, 67:115-119, 1939. (ref: p. 12)

[Vor08] G. M. Voronoi. Nouvelles applications des paramètres continus à la théorie des formes quadratiques. Deuxième mémoire: Recherches sur les paralléloèdres primitifs. J. Reine Angew. Math., 134:198-287, 1908. (ref: p. 120)

[VZ93] A. Vućić and R. Živaljević. Note on a conjecture of Sierksma. Discrete Comput. Geom., 9:339-349, 1993. (ref: p. 205)

[Wag01] U. Wagner. On the number of corner cuts. Adv. Appl. Math., 2001. In press. (ref: p. 271)

[War68] H. E. Warren. Lower bound for approximation by nonlinear manifolds. Trans. Amer. Math. Soc., 133:167-178, 1968. (ref: p. 135)

[Weg75] G. Wegner. d-collapsing and nerves of families of convex sets. Arch. Math., 26:317-321, 1975. (ref: p. 197)

[Wel86] E. Welzl. More on k-sets of finite sets in the plane. Discrete Comput. Geom., 1:95-100, 1986. (ref: p. 270)

[Wel88] E. Welzl.
Partition trees for triangle counting and other range searching problems. In Proc. 4th Annu. ACM Sympos. Comput. Geom., pages 23-33, 1988. (ref: p. 242)

[Wel01] E. Welzl. Entering and leaving j-facets. Discrete Comput. Geom., 25:351-364, 2001. (refs: pp. 104, 145, 280, 282)

[Wil99] A. J. Wilkie. A theorem of the complement and some new o-minimal structures. Sel. Math., New Ser., 5(4):397-421, 1999. (ref: p. 250)

[Wol97] T. Wolff. A Kakeya-type problem for circles. Amer. J. Math., 119(5):985-1026, 1997. (ref: p. 44)

[WS88] A. Wiernik and M. Sharir. Planar realizations of nonlinear Davenport-Schinzel sequences by segments. Discrete Comput. Geom., 3:15-47, 1988. (refs: pp. 173, 176)

[WW93] W. Weil and J. A. Wieacker. Stochastic geometry. In P. M. Gruber and J. M. Wills, editors, Handbook of Convex Geometry (Vol. B), pages 1391-1438. North-Holland, Amsterdam, 1993. (ref: p. 99)

[WW01] U. Wagner and E. Welzl. A continuous analogue of the upper bound theorem. Discrete Comput. Geom., 26:205-219, 2001. (ref: p. 114)

[Zas75] T. Zaslavsky. Facing up to Arrangements: Face-Count Formulas for Partitions of Space by Hyperplanes, volume 154 of Memoirs Amer. Math. Soc. American Mathematical Society, Providence, RI, 1975. (ref: p. 128)

[Zie94] G. M. Ziegler. Lectures on Polytopes, volume 152 of Graduate Texts in Mathematics. Springer-Verlag, Heidelberg, 1994. Corrected and revised printing 1998. (refs: pp. viii, 78, 85, 86, 89, 90, 92, 93, 103, 105, 114, 129, 137)

[Ziv97] R. T. Živaljević. Topological methods. In J. E. Goodman and J. O'Rourke, editors, Handbook of Discrete and Computational Geometry, chapter 11, pages 209-224. CRC Press LLC, Boca Raton, FL, 1997. (ref: p. 368)

[Ziv98] R. T. Živaljević. User's guide to equivariant methods in combinatorics II. Publ. Inst. Math. (Beograd) (N.S.), 64(78):107-132, 1998. (ref: p. 205)

[ZV90] R. T. Živaljević and S. T. Vrećica. An extension of the ham sandwich theorem. Bull. London Math. Soc., 22:183-186, 1990. (ref: p. 16)

[ZV92] R. Živaljević and S. Vrećica.
The colored Tverberg's problem and complexes of injective functions. J. Combin. Theory Ser. A, 61:309-318, 1992. (ref: p. 205)

Index

The index starts with notation composed of special symbols, and Greek letters are listed next. Terms consisting of more than one word mostly appear in several variants; for example, both "convex set" and "set, convex." An entry like "armadillo, 19(8.4.1), 22(Ex. 4)" means that the term is located in theorem (or definition, etc.) 8.4.1 on page 19 and in Exercise 4 on page 22. For many terms, only the page with the term's definition is shown. Names or notation used only within a single proof or remark are usually not indexed at all. For important theorems, the index also points to the pages where they are applied.

⌊x⌋ (floor function), xv
⌈x⌉ (ceiling function), xv
|X| (cardinality), xv
‖x‖ (Euclidean norm), xv
‖x‖_1 (ℓ1-norm), 84
‖x‖_p (ℓp-norm), 357
‖x‖_∞ (maximum norm), 83, 357
‖f‖_Lip (Lipschitz norm), 356
‖x‖_Z (general norm), 344
‖x‖_K (norm induced by K), 344
Ḡ (graph complement), 290
\binom{X}{k} (unordered k-tuples), xvi
F|_Y (restriction of a set system), 238
∂A (boundary), xv
X* (dual set), 80(5.1.3)
⟨x, y⟩ (scalar product), xv
A + B (Minkowski sum), 297
Γ(x) (gamma function), 312
Ω(·) (asymptotically at least), xv
Φ(G) (edge expansion), 373
Φ_d(n), 127(6.1)
Θ(·) (both O(·) and Ω(·)), xv
α(G) (independence number), 290
α(n) (inverse Ackermann), 173
χ(G) (chromatic number), 290
χ(G, w) (weighted chromatic number), 292
ε-approximation, 242
ε-net, 237(10.2.1), 237(10.2.2)
- size, 239(10.2.4)
- weak, 261(10.6.3)
- - for convex sets, 253(10.4.1)
ε-pushing, 102
η-dense set, 313
η-net, 314
- application, 323, 340, 343, 365, 368
η-separated set, 314
φ(d) (Euler's function), 53
λ_s(n) (maximum length of DS sequence), 167
ν(F) (packing number), 232
ν*(F) (fractional packing number), 233
ν_k(F) (simple k-packing number), 236(Ex.
4)
ω(G) (clique number), 290
ω(G, w) (weighted clique number), 291
π_F(·) (shatter function), 239
ψ(m, n) (m-decomposable DS-sequence, length), 178
ρ(Y_1, ..., Y_k) (hypergraph density), 223
σ(n) (lower envelope of segments, complexity), 166
τ(F) (transversal number), 232
τ*(F) (fractional transversal number), 232
A_k(n) (kth function in the Ackermann hierarchy), 173
A(n) (Ackermann function), 173
Ackermann function, 173
AffDep(a), 109
affine combination, 1
affine dependence, 2
affine Gale diagram, 112
affine hull, 1
affine mapping, 3
affine subspace, 1
affinely isomorphic arrangements, 133
AffVal(a), 109
Alexandrov-Fenchel inequality, 301
algebraic geometry, 131
algebraic number, 20(Ex. 4)
algebraic surface patches
- lower envelope, 189
- single cell, 191(7.7.2)
algebraic surfaces, arrangement, 130
- decomposition problem, 162
algorithm
- convex hull, 86, 105
- for ℓ2-embedding, 378
- for centerpoint, 16
- for ham sandwich, 16
- for volume approximation, 315, 321
- Goemans-Williamson for MAXCUT, 384(Ex. 8)
- greedy, 235, 236(Ex. 4)
- LLL, 25
- simplex, 93
- sparsest cut, approximation, 391
almost convex set, 38, 39(Ex. 5)
almost orthogonal vectors, 362(Ex. 3)
t-almost spherical body, 341
almost spherical projection, 353
almost spherical section
- of a convex body, 345(14.4.5), 348(14.6.1)
- of a crosspolytope, 346, 353(Ex. 2)
- of a cube, 343
- of an ellipsoid, 342(14.4.1)
antichain, 295(Ex. 4)
approximation
- by a fraction, 19(2.1.3), 20(Ex. 4), 21(Ex. 5)
- of a sparsest cut, 391
- of edge expansion, 391
- of volume, 321
- - hardness, 315
ε-approximation, 242
arc, 54
arithmetic progression
- generalized, 47
- primes in, 53(4.2.4)
- Szemerédi's theorem, 227
arrangement
- affine isomorphism, 133
- central, 129
- isomorphism, 133
- many cells, 43, 46, 58(Ex. 3), 152(Ex. 3)
- of arbitrary sets, 130
- of hyperplanes, 126
- - number of cells, 127(6.1.1)
- - unbounded cells, 129(Ex.
2) - of lines, 42 - of pseudolines, 132, 136 -· of pseudosegments, 270 - of segments, 130 - realization space, 138 - sin1ple, 127 - stretchable, 134, 137 -·- triangulation, 72(Ex. 2), 160 art gallery, 246, 250 atomic lattice, 89 Bn (unit ball in Rn), xv B(x, r) (r-ball centered at x), xv balanced line, 280 Balinski's theorem, 88 ball - £1 , see crosspolytope - random point in, 312 -- smallest enclosing, 13(Ex. 5) - - uniqueness, 328(Ex. 4) - volume, 311 Banach spaces, local theory, 329, 336 Banach-Mazur distance, 346 bandwidth, 397 basis (lattice), 21 - reduced, 25 Bezdek's conjecture, 44 hi-Lipschitz mapping, 356 binomial distribution, 240 bipartite graph, xvi bisection width, 57 bisector, 121 Blaschke-Santalo inequality, 320 body, convex ·- almost spherical, 341 - almost spherical section, 345(14.4.5), 348(14.6.1) - approximation by ellipsoids, 325(13.4.1) - lattice points in, 17-28 - volume approximation, 315, 321 Index 461 Borsuk-Ulam theorem, application, 15, 205 bottom-vertex triangulation, 160, 161 brick set, 298 Brunn's inequality, 297(12.2.1) - application, 306 Brunn-Minkowski inequality, 297(12.2.2) - application, 331, 333 - dimension-free form, 301(Ex. 5) Busemann-Petty problem, 313 Cn (Hamming cube), 335 cage representation, 93 canonical triangulation, see bottom-vertex triangulation cap, 31 - spherical (volume), 333 Caratheodory's theorem, 6(1.2.3), 8 - application, 199, 200, 208, 319 - colorful, 199(8.2.1) - - application, 202 Cauchy-Schwarz inequality, xvi cell - complexity - - in R2 176 ' - - in higher dimensions, 191, 193 - of an arrangement, 43, 126, 130 24-cell, 95(Ex. 4) center transversal theorem, 15(1.4.4) centerpoint, 14(1.4.1), 210 centerpoint theorem, 14(1.4.2), 205 central arrangement, 129 chain, 295(Ex. 4) chain polytope, 309 Chebyshev's inequality, 240 chirotope, 216 chromatic number, 290 462 circles - cutting lemrna, 72 - incidences, 45, 63(Ex. 1), 63(Ex. 2), 69, 70(Ex. 2), 73(Ex. 4) - - application, 50(Ex. 
8)
- touching (and planar graphs), 92
- unit
- - incidences, 42, 49(Ex. 1), 52(4.2.2), 58(Ex. 2), 70(Ex. 1)
- - Sylvester-like result, 44
circumradius, 317(13.2.2)
- approximation, 322
Clarkson's theorem on levels, 141(6.3.1)
clique number, 290
closed from above (or from below), 36
closest pair, computation, 122
coatomic lattice, 89
d-collapsible simplicial complex, 197
colored Helly theorem, 198(Ex. 2)
colored Tverberg theorem, 203(8.3.3)
- application, 213
- for r = 2, 205
colorful Carathéodory theorem, 199(8.2.1)
- application, 202
combination
- affine, 1
- convex, 6
combinatorially equivalent polytopes, 89(5.3.4)
combinatorics, polyhedral, 289
compact set, xvi
comparability graph, 294(Ex. 4), 309
complete graph, xvi
complex plane, point-line incidences, 44
complex, simplicial
- d-Leray, 197
- d-collapsible, 197
- d-representable, 197
- Van Kampen-Flores, 368
compression, path, 175
concentration
- for a Hamming cube, 335(14.2.3)
- for a sphere, 331(14.1.1)
- for an expander, 384(Ex. 7)
- for product spaces, 340
- Gaussian, 334(14.2.2)
- of projection, 359(15.2.2)
(p, q)-condition, 255
conductance, see edge expansion
cone
- convex, 9(Ex. 6), 201
- metric, 106, 377
- of squared Euclidean metrics, 377
cone(X), 201
conjecture
- 1/3-2/3, 308
- d-step, 93
- Bezdek's, 44
- Dirac-Motzkin, 50
- Füredi-Hajnal, 177
- Grünbaum-Motzkin, 261
- Hirsch, 93
- Kalai's, 204
- perfect graph, strong, 291
- perfect graph, weak, 291
- Purdy's, 48
- Reay's, 204
- Ryser's, 235
- Sierksma's, 205
- Stanley-Wilf, 177
connected graph, xvi
constant, lattice, 23
continuous motion argument, 284
continuous upper bound theorem, 114
conv(X) (convex hull), 5
convex body
- almost spherical, 341
- almost spherical section, 345(14.4.5), 348(14.6.1)
- approximation by ellipsoids, 325(13.4.1)
- lattice points in, 17-28
- volume approximation, 315, 321
convex combination, 6
convex cone, 9(Ex.
6), 201 convex function, xvi convex hull, 5 - algorithm, 86, 105 - of random points, 99, 324 convex independent set, 30( 3.1.1) - in a grid, 34(Ex. 2) -- in higher dimension, 33 - size, 32 convex polygons, union complexity, 194 convex polyhedron, 83 convex polytope, 83 - almost spherical, number of facets, 343(14.4.2) - integral, 295(Ex. 5) - number of, 139(Ex. 3) -- realization, 139 - symmetric, number of facets, 347(14.4.2) - volume - - lower bound, 322 - - upper bound, 315(13.2.1) convex polytopes, union complexity, 194 convex position, 30 convex set, 5(1.2.1) convex sets - in general position, 33 - intersection patterns, 197 - transversal, 256(10.5.1) - upper bound theorem, 198 - VC-dimension, 238 Index copies, similar (counting), 47, 51(Ex. 10) cr( G) (crossing number), 55 cr(X) (crossing number of the halving-edge graph), 283 criterion, Gale's, 97(5.4.4) cross-ratio, 47 463 crossing (in a graph drawing), 54 crossing edges, pairwise, 176 crossing number, 54 - and forbidden subgraphs, 57 - odd, 58 - pairwise, 58 crossing number theorem, 55( 4.3.1) - application, 56, 61, 70, 283 - for multigraphs, 60( 4.4.2) crosspolytope, 83 - almost spherical section, 346, 353(Ex. 2) - faces, 88 - projection, 86(Ex. 2) cryptography, 26 cube, 83 - almost spherical section, 343 - faces, 88 - Hamming, 335 - - embedding into €2, 369 - - measure concentration, 335 ( 14.2 .3) cubes, union complexity, 194 cup, 30 curve, moment, 97(5.4.1) curves - cutting into pseudosegments, 70, 271, 272(Ex. 5), 272(Ex. 6) - incidences, 46 - lower envelope, 166, 187(7.6.1) - single cell, 176 cut pseudometric, 383(Ex. 3), 391 cut, sparsest, approximation, 391 cutting, 66 - on the average, 68 cutting lemma, 66( 4.5.3), 68 464 - application, 66, 261 - for circles, 72 - higher-dimensional, 160(6.5.3) - lower bound, 71 - proof, 71, 74, 153, 162, 251(Ex. 4) cutwidth, 57 cyclic polytope, 97(5.4.3) - universality, 99(Ex. 
3) cylinders, union complexity, 194 V (duality), 81 Do (duality), 78(5.1.1) D ( il) (defining set), 15 7 D-embedding, 356(15.1.1) d-intervals, transversal, 262, 262(Ex. 2) d-step conjecture, 93 Davenport-Schinzel sequence, 167 - asymptotics, 174 - decomposable, 178 - generalized, 174, 176 --.. - realization by curves, 168(Ex. 1) decomposition problem, 162 decomposition, vertical, 72(Ex. 3), 156 deep below, 35 defining set, 158 deg(x) (degree in halving-edge graph), 283 degree, xvi Dehn-Sommerville relations, 103 Delaunay triangulation, 117, 120, 123(Ex. 5) Delone, see Delaunay dense set, 33 ry-dense set, 313 density - of a graph, local, 397 - of a hypergraph, 223 dependence, affine, 2 det A, 21 determinant - and affine dependence, 3 Index - and orientation, 216 - and volume, 26(Ex. 1) - of a lattice, 21 diagram - Gale, 112 - power, 121 - Voronoi, 115 - - abstract, 121 - -- · complexity, 119(5. 7.4), 122(Ex. 2), 123(Ex. 3), 192 - - farthest-point, 120 - - higher-order, 122 - wiring, 133 diameter - and smallest enclosing ball, 13(Ex. 5) - approximation, 322 - in f 1 , computation, 388(Ex. 1) Dilworth's theorem, 294(Ex. 4) dim(:F) (VC-dimension), 238(10.2.3) dimension - of a polytope, 83 - Vapnik-Chervonenkis, see VC-dimension - VC-dimension, 238(10.2.3) Dirac-Motzkin conjecture, 50 Dirichlet tessellation, see Voronoi diagram Dirichlet's theorem, 53 disk, largest empty, computation, 122 disks - transversal, 231, 262(Ex. 1) - union complexity, 124(Ex. 10), 193 distance, Banach-Mazur, 346 distances - distinct, 42, 59( 4.4.1) - - bounds, 45 - unit, 42 - - and incidences, 49(Ex. 1) - - for convex position, 45 - - in R2, 45 3 -·-- - in R 45 ' - - in R4, 45, 49(Ex. 2) . . . ____ - lower bound, 52( 4.2.2) -- - on a 2-sphere, 45 -- - upper bound, 58(Ex. 
2)
distortion, 356(15.1.1)
distribution
- binomial, 240
- normal, 334, 352
divisible point, 204
domains of action, 120
dominated (pseudo)metric, 389
double-description method, 86
drawing (of graph), 54
- on a grid, 94
- rubber-band, 92
dual polytope, 90
dual set, 80(5.1.3)
dual set system, 245
dual shatter function, 242
duality
- of linear programming, 233(10.1.2)
- of planar graphs, 80
- transform, 78(5.1.1), 81(5.1.4)
Dvoretzky's theorem, 348(14.6.1), 352
Dvoretzky-Rogers lemma, 349(14.6.2), 352
E[·] (expectation), xv
E(≺) (linear extensions), 303
e(≺) = |E(≺)|, 303
E(G) (edge set), xvi
e(Y_1, ..., Y_k) (number of edges on the Y_i), 223
edge
- k-edge, 266
- halving, 266
- of a polytope, 87
- of an arrangement, 43, 130
edge expansion, 373
- approximation, 391
edges
- pairwise crossing, 176
- parallel, 176
Edmonds' matching polytope theorem, 294
efficient comparison theorem, 303(12.3.1)
eigenvalue, second, 374, 381
Elekes-Rónyai theorem, 48
elimination, Fourier-Motzkin, 86
ellipsoid
- almost spherical section, 342(14.4.1)
- definition, 325
- Löwner-John, 327
- smallest enclosing
- - computation, 327
- - uniqueness, 328(Ex. 3)
ellipsoid method, 381
embedding
- distortion and dimension, 368
- - lower bound, 364(15.3.3)
- into ℓ1, 378, 379, 396
- into ℓ2, 399(Ex. 5), 400(Ex. 6), 400(Ex. 7)
- - algorithm, 378
- - dimension reduction, 358(15.2.1), 362(Ex. 3), 369(Ex. 4)
- - lower bound, 366, 370(15.4.1), 375(15.5.1), 380
- - testability, 376(15.5.2)
- - upper bound, 388(Ex. 3), 389(15.7.1)
- into ℓ∞(X)
- - isometric, 385(15.6.1)
- - upper bound, 386(15.6.2)
- into ℓp, 379, 391, 398(Ex. 2), 398(Ex. 1)
- - isometric, 383(Ex. 5), 383(Ex. 2)
- into arbitrary normed space, 367
- isometric, 356
- of planar-graph metrics, 393
- of tree metrics, 392, 399(Ex. 5), 400(Ex. 6), 400(Ex. 7), 400(Ex.
9) - volume-respecting, 396 D-embedding, 356(15.1.1) entropy (graph), 309 envelope, lower - of curves, 166, 187(7.6.1) - of segments, 165 - - lower bound, 169(7.2.1) - of simplices, 186 - of triangles, 183(7.5.1), 186 - superimposed projections, 192 epsilon net theorem, 239(10.2.4) - application, 24 7, 251 (Ex. 4) - if and only if form, 252 equivalent polytopes, combinatorially, 89(5.3.4) equivalent radius, 297 Erdos-Sachs construction, 368(Ex. 1) Erdos-Simonovits theorem, 213(9.2.2) Erdos-Szekeres lemma, 295(Ex. 4) Erdos-Szekeres theorem, 30(3.1.3) - another proof, 32 - application, 35 - generalizations, 33 - positive-fraction, 220(9.3.3), 222(Ex. 4) - quantitative bounds, 32 Euler function, 53 excess, 154 Excl (H) (excluded minor class), 393 excluded minor, and metric, 393 expander, 373, 381 -382 - measure concentration, 384(Ex. 7) exposed point, 95(Ex. 9) extension - linear, 302 -· of Lipschitz mapping, 361 Index extremal point, 87, 95(Ex. 9), 95(Ex. 10) extreme (in arrangement), 145(Ex. 1) fk(P) (nu1nber of k-faces), 96 !-vector, 96 - of a representable complex, 197 face - of a polytope, 86( 5.3.1) - of an arrangement, 126, 130 - popular, 151 face lattice, 88 facet - k-facet, 265 - halving, 266 - - interleaving lemma, 2 77 ( 11.3 .1) --· interleaving lemma, application, 279, 284, 287 - of a polytope, 87 - of an arrangement, 126 factorization, of polynomial, 26 Fano plane, 44 Farkas lemma, 7(1.2.5), 8, 9(Ex. 7) farthest-point Voronoi diagram, 120 fat objects, union complexity, 194 fat-lattice polytope, 107(Ex. 1) finite projective plane, 44, 66 first selection lemma, 208 ( 9 .1.1) - application, 253 - proofs, 210 flag, 105, 129(Ex. 6) flat, 3 flattening lemma, 358(15.2.1) - application, 366 - lower bound, 362(Ex. 3), 369(Ex. 
4) fii pping ( Delaunay triangulation), 120 forbidden - permutation, 177 - short cycles, 362 - subgraph, 64 - - and crossing number, 57 - subhypergraph, 213(9.2.2) - submatrix, 177 - subsequence, 17 4 forest, regular, 18(2.1.2) form, linear, 27(Ex. 4) four-square theorem, Lagrange's, 28(Ex. 1) Fourier-Motzkin elimination, 86 fraction, approximation by, 19(2.1.3), 20(Ex. 4), 21(Ex. 5) fractional Helly theorem, 195(8.1.1) - application, 209, 211, 258 - for line transversals, 260(10.6.2) fractional packing, 233 fractional transversal, 232 - bound, 256(10.5.2) - for infinite systems, 235 Freiman's theorem, 47 Frechet's embedding, 385 function - Ackermann, 173 . - convex, xv1 -- dual shatter, 242 -- Euler's, 53 - Lipschitz, 337 - -- concentration, 337-341 -- primitive recursive, 17 4 - rational, on Cartesian productժ 48 - shatter, 239 functional, Laplace, 340 Fiiredi-Hajnal conjecture, 177 g( n) (number of distinct distances), 42 g-theorem, 104 g-vector, 104 Gale diagram, 112 Gale transform, 107 - application, 210, 282(Ex. 6) Index Gale's criterion, 97(5.4.4) Gallai-type problem, 231 gallery, art, 246, 250 Gaussian distribution, 352 Gaussian integers, 52 Gaussian measure, 334 - concentration, 334(14.2.2) general position, 3 - of convex sets, 33 generalized arithmetic progression, 4 7 467 generalized Davenport-Schinzel sequence, 174, 176 generalized lower bound theorem, 105 - application, 280 generalized triangle, 66( 4.5.3) genus, and VC-dimension, 251(Ex. 6) geometric graph, 56, 176 geometry - of numbers, 17, 20 - real algebraic, 131 Geronimus polynomial, 380 girth, 362 - and £2-embeddings, 380 Goemans-Williamson algorithm for MAXCUT, 384(Ex. 8) graded lattice, 89 graph, xvi - bipartite, xvi - comparability, 294(Ex. 4), 309 - complete, xvi - connected, xvi - determines a simple polytope, 93 - entropy, 309 - geometric, 56, 176 - intersection, 139(Ex. 2) - isomorphism, xvi - Kr,8-free, 65, 68 - Moore, 367 - of a polytope, 87 - - connectivity, 88, 95(Ex. 
8) 468 - perfect, 290-295 - regular, xvi - shattering, 251(Ex. 5) - without short cycles, 362 graph drawing, 54 - on a grid, 94 - rubber-band, 92 Grassmannian, 339 greedy algorithm, 235, 236(Ex. 4) growth function, see shatter function Griinbaum-Motzkin conjecture, 261 h( a) (height in poset), 305 H-polyhedron, 82(5.2.1) H-polytope, 82(5.2.1) h-vector, 102 Hadwiger's transversal theorem, 262 Hadwiger-Deb runner (p, q)-problem, 255 Hahn-Banach theorem, 8 half-space, 3 half-spaces, VC-dimension, 244( 10.3.1) Hall's marriage theorem, 235 halving edge, 266 halving facet, 266 - interleaving lemma, 277(11.3.1) - - application, 279, 284, 287 ham-sandwich theorem, 15(1.4.3) - application, 218 Hammer polytope, 348(Ex. 1) Hamming cube, 335 - embedding into f2, 369 Harper's inequality, 335 HDd(P, q) ((p, q)-theorem), 256(10.5.1) height, 304 Helly number, 12 Helly order, 263(Ex. 4) Belly's theorem, 10(1.3.2) Index - application, 12(Ex. 2), 13 (Ex. 5) , 14 ( 1. 4.1) , 8 2 (Ex. 9), 196(8.1.2)' 200 - colored, 198(Ex. 2) - fractional, 195(8.1.1) - - application, 209, 211, 258 - - for line transversals, 260(10.6.2) Belly-type theorem, 261, 263(Ex. 4) - for containing a ray, 13(Ex. 7) - for lattice points, 295(Ex. 7) - for line transversals, 82(Ex. 9) - for separation, 13(Ex. 10) - for visibility, 13(Ex. 8) - for width, 12(Ex. 
4) HFACd(n) (number of halving facets) , 26 7 hierarchically well-separated tree, 398 high above, 35 higher-order Voronoi diagram, 122 Hilbert space, 357 Hirsch conjecture, 93 k-hole, 34 - modulo q, 38 Horton set, 36 - in Rd, 38 hull - affine 1 ' - convex, 5 - - algorithm, 86, 105 hypergraph, 211 hyperplane, 3 - linear, 109 hyperplane transversal, 259(10.6.1), 262 hyperplanes, arrangement, 126 /(µ) (intersecting objects), 154 I ( m, n) (number of point-line incidences), 41 I ( P, L) (point-line incidences), 41 Index Jlcirc(m, n) (number of point-unit circle incidences), 42 Icirc(m, n) (number of point-circle incidences), 45 incidence matrix, 234 incidences, 41 - point-circle, 45, 63(Ex. 1), 63(Ex. 2), 69, 70(Ex. 2), 73(Ex. 4) - - application, 50(Ex. 8) - point-curve, 46 - point-line, 41 ( 4.1.1) - - in the complex plane, 44 - - lower bound, 51(4.2.1) - point-plane, 46 - point-unit circle, 42, 49(Ex. 1), 52( 4.2.2), 70(Ex. 1) - - upper bound, 58(Ex. 2) independence number, 290 independent set, 290 induced subgraph, 290 inequality - Alexandrov-Fenchel, 301 - Blaschke-Santal6, 320 - Brunn's, 297(12.2.1) - - application, 306 ؝ Brunn-Minkowski, 297(12.2.2) - - application, 331, 333 - - dimension-free form, 301(Ex. 5) - Cauchy-Schwarz, xvi - Chebyshev's, 240 - Harper's, 335 - isoperimetric, 333-337 - - reverse, 337 - Jensen's, xvi - Prekopa-Leindler, 300, 302(Ex. 7) - Sobolev, logarithmic, 337 inradius, 317(13.2.2) - approximation, 322 integer programming, 25 k-interior point, 9 interpolation, 117 intersection graph, 139(Ex. 2) d-intervals, transversal, 262, 262(Ex. 2) inverse Blaschke-Santal6 inequality, 320 isometric embedding, 356 isomorphism - of arrangements, 133 - - affine, 133 - of graphs, xvi - of hypegraphs, 211 469 isoperimetric inequality, 333ࣞ337 - reverse, 337 Jensen's inequality, xvi John's lemma, 325(13.4.1) - application, 347, 350 Johnson-Lindenstrauss flattening lemma, 358( 15.2.1) - application, 366 - lower bound, 362(Ex. 3), 369(Ex. 
4) join, 89 Kn (complete graph), xvi Kr,s (complete bipartite graph), 64 Kr,s-free graph, 65, 68 Kk ( t) (complete k-partite hypergraph), 212 /C2 (planar convex sets), 238 K(m, n) (number of edges of m cells), 43 k-edge, 266 k-facet, 265 k-fiat, 3 k-hole, 34 - modulo q, 38 k-interior point, 9 k-partite hypergraph, 211 k-set, 265 - polytope, 273(Ex. 7) k-uniform hypergraph, 211 Kovari-Sos-Turan theorem, 65( 4.5.2) 470 Kakeya problem, 44 Kalai's conjecture, 204 kernel, 13 (Ex. 8) KFACd(n, k) (maxin1um number of k-facets), 266 KFAC(X, k) (number of k-facets), 266 Kirchberger's theorem, 13(Ex. 10) knapsack problem, 26 Koebe's representation theorem, 92 Konig's edge-covering theorem, 235, 294(Ex. 3) Krasnosel'skii's theorem, 13(Ex. 8) Krein-Milman theorem, in Rd, 96(Ex. 10) Kruskal-Hoffman theorem, 295(Ex. 6) £2 (squared Euclidean metrics), 377 .eP (countable sequences with fp-norm), 357 fp-norm, 35 7 £ԅ (Rd with fp-norm), 357 £1-ball, see crosspolytope Lagrange's four-square theorem, 28(Ex. 1) Laplace functional, 340 Laplacian matrix, 37 4 largest empty disk, computation, 122 lattice - face, 88 - general definition, 22 - given by a basis, 21 - shortest vector, 25 lattice basis theorem, 22(2.2.2) lattice constant, 23 lattice packing, 23 lattice point, 17 - computation, 24 - Helly-type theorem, 295(Ex. 7) Lawrence's representation theorem, 137 Index lemma - cutting, 66( 4.5.3), 68 - - application, 66, 261 - - for circles, 72 - - higher-dimensional, 160(6.5.3) - - lower bound, 71 - - proof, 71, 7 4, 153, 162, 251(Ex. 4) - Dvoretzky-Rogers, 349(14.6.2), 352 - Erdos-Szekeres, 295(Ex. 4) - Farkas, 7(1.2.5), 8, 9(Ex. 7) - first selection, 208(9.1.1) - - application, 253 - - proofs, 210 - halving-facet interleaving, 277(11.3.1) - - application, 279, 284, 287 - John's, 325(13.4.1) - - application, 347, 350 - Johnson-Lindenstrauss flattening, 358(15.2.1) - - application, 366 - - lower bound, 362(Ex. 3), 369(Ex. 
4) - Levy's, 338(14.3.2), 340 - - application, 340, 359 - Lovasz, 278(11.3.2) - - exact, 280, 281(Ex. 5) ---- - planar, 280(Ex. 1) - positive-fraction selection, 228(9.5.1) - Radon's, 9(1.3.1), 12 - - application, 11, 12(Ex. 1), 222(Ex. 3), 244 - --- positive-fraction, 220 - regularity - - for hypergraphs, 226 - - for hypergraphs, weak, 223(9.4.1) - - for hypergraphs, weak, application, 227(Ex. 2), 229 - - Szemeredi's, 223, 226 -- same-type, 217(9.3.1) - - application, 220, 229 - - partition version, 220 - second selection, 211(9.2.1) - --- application, 228, 279 · · · · ---- lower bounds, 215(Ex. 2) - - one-dimensional, 215(Ex. 1) - shatter function, 239(10.2.5) - - application, 245, 248 lens (in arrangement), 272(Ex. 5), 272(Ex. 6) d-Leray simplicial complex, 197 level, 73, 141 -·- and k-sets 266 ' - and higher-order Voronoi diagrams, 122 - at most k, complexity, 141(6.3.1) - for segments, 186(Ex. 2) - for triangles, 183 - simplification, 7 4 Levy's lemma, 338(14.3.2), 340 - application, 340, 359 LinDep(a), 109 line pseudometric, 383(Ex. 2), 389 line transversal, 82(Ex. 9), 259(10.6.1), 262 line, balanced, 280 linear extension, 302 linear form, 27(Ex. 4) linear hyperplane, 109 linear ordering, 302 linear programming, 7 - algorithm, 93 - duality, 233(10.1.2) linear subspace, 1 linearization, 244 lines, arrangement, 42 Lin Va I (a) , 1 09 Lipschitz function, concentration, 337-341 Lipschitz mapping, 337 - extension, 361 Lipschitz norm, 356 Index 471 Lipton-Tarjan separator theorem, 57 LLL algorithm, 25 local density, 397 local theory of Banach spaces, 329, 336 location, in planar subdivision, 116 log x (iterated logarithm), xv Lovasz lemma, 278(11.3.2) - exact, 280, 281 (Ex. 5) - planar, 280(Ex. 
1)
lower bound theorem, generalized, 105
- application, 280
lower envelope
- of curves, 166, 187(7.6.1)
- of segments, 165
- - lower bound, 169(7.2.1)
- of simplices, 186
- of triangles, 183(7.5.1), 186
- superimposed projections, 192
Löwner-John ellipsoid, 327
m(ℓ, n) (maximum number of edges for girth > ℓ), 362
Manhattan distance, see ℓ1-norm
many cells, complexity, 43, 46, 58(Ex. 3), 152(Ex. 3)
mapping
- affine, 3
- bi-Lipschitz, 356
- Lipschitz, 337
- - extension, 361
- Veronese, 244
marriage theorem, Hall's, 235
matching, 232
matching number, see packing number
matching polytope, 289, 294
matrix
- forbidden pattern, 177
- incidence, 234
- Laplacian, 374
- rank and signs, 140(Ex. 4)
matroid, oriented, 137
MAXCUT problem, 384(Ex. 8)
maximum norm, see ℓ∞-norm
measure
- Gaussian, 334
- on k-dimensional subspaces, 339
- on S^{n-1}, uniform, 330
- on SO(n) (Haar), 339
- uniform, 237
measure concentration
- for a Hamming cube, 335(14.2.3)
- for a sphere, 331(14.1.1)
- for an expander, 384(Ex. 7)
- for product spaces, 340
- Gaussian, 334(14.2.2)
med(f) (median of f), 337
medial axis transform, 120
median, 14, 337
meet, 89
method
- double-description, 86
- ellipsoid, 381
metric
- cut, 383(Ex. 3), 391
- line, 383(Ex. 2), 389
- of negative type, 379
- planar-graph, 393
- shortest-path, 392
- squared Euclidean, cone, 377
- tree, 392, 398, 399(Ex. 5), 400(Ex. 6), 400(Ex. 7), 400(Ex. 9)
metric cone, 106, 377
metric polytope, 106
metric space, 355
Milnor-Thom theorem, 131, 135
minimum spanning tree, 123(Ex. 6)
minimum, successive, 24
Minkowski sum, 297
Minkowski's second theorem, 24
Minkowski's theorem, 17(2.1.1)
- for general lattices, 22(2.2.1)
Minkowski-Hlawka theorem, 23
minor, excluded, and metric, 393
mixed volume, 301
molecular modeling, 122
moment curve, 97(5.4.1)
x-monotone (curve), 73
monotone subsequence, 295(Ex.
4) Moore graph, 367 motion planning, 116, 122, 193 multigraph, xvi multiset, xv nearest neighbor searching, 116 neighborhood, orthogonal, 318 nerve, 197 17-net, 314 - application, 323, 340, 343, 365, 368 e-net, 237(10.2.1), 237(10.2.2) - size, 239(10.2.4) - weak, 261 (10.6.3) - - for convex sets, 253(10.4.1) nonrepetitive segment, 178 norm, 344 - focn 357 - fp, 357 - Lipschitz, 356 - maximum, see RCX)-norm normal distribution, 334, 352 number - algebraic, 20(Ex. 4) - chromatic, 290 - clique, 290 - crossing, 54 - - and forbidden subgraphs, 57 - -- odd 58 ' - - pairwise, 58 --- fractional packing, 233 - fractional transversal, 232 - Helly, 12 -- independence, 290 - matching, see packing number - packing, 232 Index - piercing, see transversal nutnber --- transversal, 232 -- - bound using r , 236, 242(10.2. 7) 0( ·) (asymptotically at most), xv o( ·) (asymptotically smaller), xv octahedron, generalized, see crosspolytope odd crossing number, 58 odd-er( G) (odd crossing number), 58 w- conjecture, 308 oracle (for convex body), 316, 321 order polytope, 303(12.3.2) order type, 216, 221(Ex. 1) order, Helly, 263(Ex. 4) ordering, 302 - linear, 302 orientation, 216 oriented matroid, 137 orthogonal neighborhood, 318 P( -<) (order polytope), 303 p [ . ] (uniform measure on sn-l)' 330 P d,D (sets definable by polynomials), 244(10.3.2) packing, 232 - fractional, 233 -- lattice, 23 packing number, 232 pair, closest, computation, 122 pair-cr( G) (pairwise crossing number), 58 pairwise crossing edges, 176 pairwise crossing number, 58 Pappus theorem, 134 paraboloid, unit, 118 parallel edges, 176 partially ordered set, 302 k-partite hypergraph, 211 partition - Radon, 10 - Tverberg, 200 partition theorem, 69 patches, algebraic surface - lower envelope, 189 - single cell, 191(7.7.2) path compression, 175 473 pattern, sign, of polynomials, 131 - on a variety, 135 pencil, 132 pentagon, similar copies, 51(Ex. 
10) perfect graph, 290-295 permanent, approximation, 322 permutahedron, 78, 85 - faces, 95(Ex. 3) permutation, forbidden pattern, 177 perturbation argument, 5, 101 planar-graph metric, 393 plane, 3 - Fano, 44 - projective, 2 - topological, 136 planes, incidences, 46 point - r-divisible, 204 - exposed, 95(Ex. 9) - extremal, 87, 95(Ex. 9), 95(Ex. 10) - k-interior, 9 - lattice, 17 - - computation, 24 - - Helly-type theorem, 295(Ex. 7) - Radon, 10, 13(Ex. 9) - random, in a ball, 312 - Tverberg, 200 point location, 116 point-line incidences, 41 ( 4.1.1) - in the complex plane, 44 points, random, convex hull, 99, 324 polarity, see duality 474 polygons, convex, union complexity, 194 polyhedral combinatorics, 289 polyhedron - convex, 83 - H-polyhedron, 82(5.2.1) polymake, 85 polynomial - factorization, 26 - Geronimus, 380 - on Cartesian products, 48 polytope (convex), 83 - almost spherical, number of facets, 343(14.4.2) - chain, 309 - combinatorial equivalence, 89(5.3.4) - cyclic, 97(5.4.3) - - universality, 99(Ex. 3) - dual, 90 - fat-lattice, 107(Ex. 1) - graph, 87 - - connectivity, 88, 95(Ex. 8) - Hammer, 348(Ex. 1) · - H-polytope, 82(5.2. 1) - integral, 295(Ex. 5) - k-set, 273(Ex. 7) - matching, 289, 294 - metric, 106 - number of, 139(Ex. 3) - order, 303(12.3.2) - product, 107(Ex. 1) - realization, 94, 1 13, 139 - simple, 90(5.3.6) - - determined by graph, 93 - simplicial, 90(5.3.6) - spherical, 124(Ex. 11) - stable set, 293 - symmetric, number of facets, 347(14.4.2) ···-- traveling salesman, 289 - union complexity, 194 - volume - - lower bound, 322 Index - - upper bound, 315(13.2.1) - V-polytope, 82(5.2.1) popular face, 151 poset, 302 position - convex, 30 - general, 3 positive-fraction - Erdos-Szekeres theorem, 220(9.3.3), 222(Ex. 4) - Radon's lemma, 220 - selection lemma, 228 ( 9. 
5.1) - Tverberg's theorem, 220 post-office problem, 116 power diagram, 121 (p, q )-condition, 255 (p, q)-theorem, 256(10.5.1) - for hyperplane transversals, 259(10.6.1) Pn§kopa-Leindler inequality, 300, 302(Ex. 7) pr1me - in a ring, 52 - in arithmetic progressions, 53( 4.2.4) prime number theorem, 52 primitive recursive function, 17 4 Prob[·] (probability), xv probabilistic method, application, 55, 61, 71, 142, 148, 153, 184, 240, 268, 281(Ex. 5), 340, 352, 359, 364, 386-391 problem - art gallery, 246, 250 -- Busemann-Petty, 313 - decomposition, for algebraic surfaces, 162 - Gallai-type, 231 - Hadwiger-Debrunner, (p, q), 255 - k-set, 265 - Kakeya, 44 - knapsack, 26 - post-office, 116 -- set cover, 235 -- subset sum, 26 - Sylvester's, 44 - UNION-FIND, 175 - Zarankiewicz, 68 product space, measure concentration, 340 product, of polytopes, 107(Ex. 1) . programming --- integer, 25 - linear, 7 - - algorithm, 93 - -- duality, 233(10.1.2) - semidefinite, 378, 380 projection - almost spherical, 353 - concentration of length, 359(15.2.2) - polytopes obtained by, 86(Ex. 2) projective plane, 2 -- finite, 44, 66 pseudocircles, 271 pseudodisk, 193 pseudolattice, pentagonal, 51 (Ex. 10) pseudolines, 132, 136 pseudometric, line, 383(Ex. 2), 389 pseudoparabolas, 272(Ex. 5), 272(Ex. 6) pseudosegments - cutting curves into, 70, 271, 272(Ex. 5), 272(Ex. 6) - extendible, 140(Ex. 5) -········ level in arrangement, 270 Purdy's conjecture, 48 c-pushing, 102 QSTAB(G), 293 quadratic residue, 27 quasi-isometry, 358 Rd 1 ' r-divisible point, 204 Index radius, equivalent, 297 Radon point, 10, 13(Ex. 9) Radon's lemma, 9(1.3.1 ), 12 - application, 11, 12(Ex. 1), 222(Ex. 3), 244 - positive-fraction, 220 rainbow simplex, 199 Ramsey's theorem, 29 475 - application, 30, 32, 34(Ex. 3), 39(Ex. 6), 99(Ex. 3), 373(Ex. 3) random point in a ball, 312 random points, convex hull, 99, 324 random rotation, 339 random subspace, 339 rank, and signs, 140(Ex. 
4) rational function on Cartesian product, 48 ray, Helly-type theorem, 13(Ex. 7) real algebraic geometry, 131 realization - of a polytope, 94, 113 - of an arrangement, 138 Reay's conjecture, 204 reduced basis, 25 Reg, 157 reg(p) (Voronoi region), 115 regular forest, 18(2.1 .2) regular graph, xvi regular simplex, 84 - volume, 319 regularity lemma - for hypergraphs, 226 - - weak, 223(9.4.1) - for hypergraphs, weak - - application, 227(Ex. 2) - for hypergraphs, weak, application, 229 - Szemeredi 's, 223, 226 relation, Dehn-Sommerville, 103 d-representable simplicial complex, 197 residue, quadratic, 27 restriction (of a set system), 238 476 reverse isoperimetric inequality, 337 reverse search, 106 ridge, 87 robot motion planning, 116, 122, 193 rotation, random, 339 Ryser's conjecture, 235 sn (unit sphere in Rn+l ), 313 same-type lemma, 217(9.3.1) - application, 220, 222(Ex. 5), 229 - partition version, 220 same-type transversals, 217 searching - nearest neighbor, 116 - reverse, 106 second eigenvalue, 37 4, 381 second selection lemma, 211(9.2.1) - application, 228, 279 - lower bounds, 215(Ex. 2) - one-dimensional, 215(Ex. 1) section, almost spherical - of a convex body, 345(14.4.5), 348( 14.6.1) - of a crosspolytope, 346, 353(Ex. 2) - of a cube, 343 - of an ellipsoid, 342( 14.4.1) segments - arrangement, 130 - intersection graph, 139(Ex. 2) - level in arrangement, 186(Ex. 2) - lower envelope, 165 - - lower bound, 169(7.2.1) - Ramsey-type result, 222(Ex. 5), 227(Ex. 2) - single cell, 176 - zone, 150 selection lemma - first, 208(9.1.1) - - application, 253 Index - - proofs, 210 - positive-fraction, 228(9.5.1) - second, 211(9.2.1) - - application, 228, 279 - - lower bounds, 215(Ex. 2) - - one-dimensional, 215(Ex. 1) semialgebraic set, 189 - and VC-dimension, 245 semidefinite programming, 378, 380 ry-separated set, 314 separation theorem, 6(1.2.4) - application, 8, 80, 323, 377 separation, Helly-type theorem, 13(Ex. 
10) separator theorem, 57 sequence, Davenportࣟ·Schinzel, 167 - asymptotics, 174 - decomposable, 178 - generalized, 174, 176 - realization by curves, 168(Ex. 1) set - almost convex, 38, 39(Ex. 5) - brick, 298 - convex, 5(1.2.1) - convex independent, 30(3.1.1) - - in a grid, 34(Ex. 2) - - in higher dimension, 33 - - size, 32 - defining, 158 - dense, 33 - dual, 80(5.1.3) - Horton, 36 - - in Rd 38 ' - independent, 290 - partially ordered, 302 - polar, see d. ual set - semialgebraic, 189 - - a-nd VC࣠4hnension, 245 - k-set, 265 - - polytope, 273(Ex. 7) - shattered, 238(10.2.3) · - stable, see independent set set cover problem, 235 set system, dual, 245 sets, convex - intersection patterns, 197 - transversal, 256(10.5.1) - upper bound theorem, 198 - VC-dimension, 238 seven-hole theorem, 35(3.2.2) shatter function, 239 - dual, 242 shatter function lemma, 239(10.2.5) - application, 245, 248 shattered set, 238(10.2.3) shattering graph, 251 (Ex. 5) shelling, 104 shortest vector (lattice), 25 shortest-path metric, 392 Sierksma's conjecture, 205 sign matrix, and rank, 140(Ex. 4) sign pattern, of polynomials, 131 - on a variety, 135 sign vector (of a face), 126 similar copies (counting), 4 7, 51(Ex. 10) simple arrangement, 127 simple k-packing, 236(Ex. 4) simple polytope, 90( 5.3.6) - determined by graph, 93 simplex, 84(5.2.3) - circumradius and inradius, 317(13.2.2) - faces, 88 - projection, 86(Ex. 2) - rainbow, 199 - regular, 84 - X -simplex, 208 - volume, 319 simplex algorithm, 93 simplices - lower envelope, 186 - single cell, 193 simplicial complex Index - d-Leray, 197 - d-collapsible, 197 - d-representable, 197 simplicial polytope, 90( 5.3.6) simplicial sphere, 103 simplification (of a level), 7 4 single cell - in R2, 176 477 - in higher dimensions, 191, 193 site (in a Voronoi diagram), 1 15 smallest enclosing ball, 13(Ex. 5) - uniqueness, 328(Ex. 4) smallest enclosing ellipsoid - computation, 327 - uniqueness, 328(Ex. 
3) SO(n), 339 - measure concentration, 335 Sobolev inequalities, logarithmic, 337 sorting with partial information, 302-309 space - Hilbert, 357 - Rp, 357 - metric, 355 - realization, 138 spanner, 369(Ex. 2} spanning tree, minimum, 123{Ex. 6) sparsest cut, approximation, 391 sphere - measure concentration, 331 ( 14.1.1) - simplicial, 103 spherical cap, 333 spherical polytope, 124(Ex. 11) STAB(G) (stable set polytope), 293 stable set, see independent set stable set polytope, 293 Stanley-Wilf conjecture, 177 star-shaped, 13(Ex. 8) Steinitz theorem, 88(5.3.3), 92 - quantitative, 94 478 d-step conjecture, 93 stretchability, 134, 137 strong perfect graph conjecture, 291 strong upper bound theorem, 104 subgraph, xvi - forbidden, 64 - induced, 290 subgraphs, transversal, 262 subhypergraph, 211 subsequence, monotone, 295(Ex. 4) subset sum problem, 26 subspace - affine, 1 - linear, 1 - random, 339 successive minimum, 24 sum - Minkowski, 297 - of squared cell complexities, 152(Ex. 1) sums and products, 50(Ex. 9) superimposed projections of lower envelopes, 192 surface patches, algebraic - lower envelope, 189 - single cell, 191 (7. 7.2) surfaces, algebraic, arrangement, 130 - decomposition problem࣡ 162 Sy Ivester's problem, 44 Szemeredi regularity lemma, 223, 226 Szemeredi-Trotter theorem, 41(4.1.1) - application, 49(Ex. 5), 50(Ex. 6), 50(Ex. 7), 50(Ex. 9), 60, 63(Ex. 1) - in the complex plane, 44 - proof, 56, 66, 69 T(d, r) (Tverberg number), 200 Tcol ( d, r) (colored Tverberg number), 203 Index tessellation, Dirichlet, see Voronoi diagram theorem - Balinski's, 88 - Borsuk-Ulam, application, 15, 205 - Caratheodory's, 6(1.2.3), 8 - - application, 199, 200, 208, 319 - center transversal, 15(1.4.4) - centerpoint, 14(1.4.2), 205 - Clarkson's, on levels, 141(6.3.1) - colored Helly, 198(Ex. 
2) - colored Tverberg, 203(8.3.3) - - application, 213 - - for r = 2, 205 - colorful Caratheodory, 199(8.2.1) - - application, 202 - crossing number, 55(4.3.1) - - application, 56, 61, 70, 283 - - for multigraphs, 60( 4.4.2) - Dilworth's, 294(Ex. 4) ----- Dirichlet's, 53 - Dvoretzky's, 348(14.6.1), 352 - Edmonds', matching polytope, 294 - efficient comparison, 303(12.3.1) -· Elekes--R6nyai, 48 -- epsilon net, 239(10.2.4) - - application, 247, 251(Ex. 4) - - if and only if form, 252 - Erdos-Simonovits, 213(9.2.2) - Erdos-Szekeres, 30(3.1.3) - - another proof, 32 - - application, 35 - - generalizations, 33 - - positive-fraction, 220(9.3.3), 222(Ex. 4) - - quantitative bounds, 32 - fractional Helly, 195(8.1.1) - - application, 209, 211, 258 - for line transversals, 260(10.6.2) Freiman's, 4 7 g-theorem, 104 Hadwiger's transversal, 262 Hahn-Banach, 8 Hall's, marriage, 235 ham-sandwich, 15(1.4.3) - application, 218 Belly's, 10(1.3.2) - application, 12(Ex. 2), 13(Ex. 5), 14(1.4.1), 82(Ex. 9), 196(8.1.2), 200 Helly-type, 261, 263 (Ex. 4) - for containing a ray, 13(Ex. 7) Index - for lattice points, 295 (Ex. 7) - for line transversals, 82(Ex. 9) - for separation, 13(Ex. 10) - for visibility, 13(Ex. 8) - for width, 12(Ex. 4) Kovari-S6s-Thran, 65( 4.5.2) Kirchberger's, 13(Ex. 10) Koebe's, 92 Konig's, edge-covering, 235, 294(Ex. 3) Krasnosel'skii's, 13(Ex. 8) Krein-Milman, in Rd, 96(Ex. 10) Kruskal-Hoffman, 295(Ex. 6) Lagrange's, four-square, 28(Ex. 1) lattice basis, 22(2.2.2) Lawrence's, representation, 137 lower bound, generalized, 105 · -· · application, 280 Ivlilnor-Thom, 131, 135 Minkowski's, 17(2.1.1) - for general lattices, 22(2.2.1) - second, 24 Ivlinkowski-Hlawka, 23 479 - Pappus, 134 - (p, q), 256(10.5.1) - - for hyperplane transversals, 259(10.6.1) - prime number, 52 - Ramsey's, 29 - - application, 30, 32, 34(Ex. 3), 39(Ex. 6), 99(Ex. 3), 373(Ex. 
3) - separation, 6(1.2.4) - - application, 8, 80, 323, 377 - separator, Lipton-Tarjan, 57 - seven-hole, 35(3.2.2) - Steinitz, 88(5.3.3), 92 - - quantitative, 94 - Szemeredi-Trotter, 41 ( 4.1.1) - - application, 49(Ex. 5), 50(Ex. 6), 50( Ex. 7), 50(Ex. 9), 60, 63(Ex. 1) - - in the complex plane, 44 - - proof, 56, 66, 69 - Tverberg's, 200(8.3.1) - - application, 208 - - positive-fraction, 220 - - proofs, 203 - two-square, 27(2.3.1) - upper bound, 100(5.5.1), 103 - - and k-facets, 280 - - application, 119 ----- - continuous analogue, 114 - - for convex sets, 198 - - formulation with h-vector, 103 - - proof, 282(Ex. 6) - - strong, 104 - weak epsilon net, 253(10.4.2) - - another proof, 254(Ex. 1) - zone, 146(6.4.1) - - planar, 168(Ex. 5) Thiessen polygons, 120 topological plane, 136 torus, n-dimensional, measure concentration, 335 480 total unimodularity, 294, 295(Ex. 6) trace (of a set system), 238 transform - duality, 78(5.1.1), 81(5.1.4) - Gale, 107 - - application, 210, 282(Ex. 6) - medial axis, 120 transversal, 82(Ex. 9), 231 - criterion of existence, 218(9.3.2) - fractional, 232 - - bound, 256(10.5.2) - - for infinite systems, 235 - hyperplane, 259(10.6.1), 262 - line, 262 - of convex sets, 256(10.5.1) - of disks, 231, 262(Ex. 1) - of d-intervals, 262, 262(Ex. 2) - of subgraphs, 262 - same-type, 217 transversal number, 232 - bound using r, 236, 242(10.2. 7) transversal theorem, Hadwiger's, 262 traveling salesman polytope, 289 tree - hierarchically well-separated, 398 . . . - spanning, minimum, 123(Ex. 6) tree metric, 392, 398, 399(Ex. 5), 400(Ex. 6), 400(Ex. 7), 400(Ex. 9) tree volume, 396 tree-width, 262 triangle, generalized, 66( 4.5.3) triangles - fat, union complexity, 194 - level in arrangement, 183 - lower envelope, 183(7.5.1), 186 - VC-dimension, 250(Ex. 1) triangulation Index - bottom-vertex, 160, 161 - canonical, see bottom-vertex triangulation - Delaunay, 117, 120, 123(Ex. 5) - of an arrangement, 72(Ex. 
2) Tverberg partition, 200 Tverberg point, 200 Tverberg's theorem, 200(8.3.1) - application, 208 - colored, 203(8.3.3) - - application, 213 - - for r = 2, 205 - positive-fraction, 220 - proofs, 203 24-cell, 95(Ex. 4) two-square theorem, 27(2.3.1) type, order, 216, 221(Ex. 1) U ( n) (number of unit distances), 42 unbounded cells, number of, 129(Ex. 2) k-uniform hypergraph, 211 uniform measure, 237 unimodularity, total, 294, 295(Ex. 6) union, complexity, 193-194 - for disks, 124(Ex. 10) UNION-FIND problem, 175 unit paraboloid, 118 unit circles - incidences, 42, 49(Ex. 1), 52(4.2.2), 58(Ex. 2), 70(Ex. 1) - Sylvester-like result, 44 unit distances, 42 - and incidences, 49 (Ex. 1) - for convex position, 45 - in R2, 45 - in R3, 45 - in R4, 45, 49(Ex. 2) - lower bound, 52( 4.2.2) - on a 2-sphere, 45 - upper bound, 58(Ex. 2) universality of cyclic polytope, 99(Ex. 3) up-set, 304 upper bound theorem, 100(5.5.1), 103 - and k-facets, 280 ----- application, 119 - continuous analogue, 114 - for convex sets, 198 - formulation with h-vector, 103 - proof, 282(Ex. 6) - strong, 104 V (G) (vertex set) , xvi Vr, (volume of the unit n-ball), 311 V(x) (visibility region), 247 V-polytope, 82(5.2.1) Van Kampen-Flores simplicial co1nplex, 368 Vapnik-Chervonenkis dimension, see VC-dimension VC-dimension, 238( 10.2.3) - bounds, 244(10.3.2), 245(10.3.3) - for half-spaces, 244( 10.3.1) --- for triangles, 250(Ex. 1) vector - /-vector, 96 - - of a representable complex, 197 -- g-vector, 104 -- h-vector 102 ' ---- shortest (lattice), 25 - sign (of a face), 126 vectors, almost orthogonal, 362(Ex. 3) Veronese mapping, 244 vertex - of a polytope, 87 - of an arrangement, 43, 130 vertical decomposition, 72(Ex. 3), 156 visi hili ty, 246 - Helly-type theorem, 13(Ex. 
8) vol(·), xv volume - approximation, 321 - - hardness, 315 - mixed, 301 - of a ball, 311 - of a polytope - - lower bound, 322 - - upper bound, 315(13.2.1) - of a regular simplex, 319 - tree, 396 volume-respecting embedding, 396 Voronoi diagram, 115 - abstract, 121 - complexity, 119(5.7.4), 122(Ex. 2), 123(Ex. 3), 192 - farthest-point, 120 - higher-order, 122 weak ε-net, 261(10.6.3) - for convex sets, 253(10.4.1) weak epsilon net theorem, 253(10.4.2) - another proof, 254(Ex. 1) weak perfect graph conjecture, 291 weak regularity lemma, 223(9.4.1) - application, 227(Ex. 2), 229 width, 12(Ex. 4) - approximation, 322, 322(Ex. 4) - bisection, 57 Wigner-Seitz zones, 120 wiring diagram, 133 X-simplex, 208 x-monotone (curve), 73 Zarankiewicz problem, 68 zone - (≤k)-zone, 152(Ex. 2) - in a segment arrangement, 150 - of a hyperplane, 146 - of a surface, 150, 151 - of an algebraic variety, 150, 151 zone theorem, 146(6.4.1) - planar, 168(Ex. 5)
190606
https://www.brightinthemiddle.com/4-ways-to-teach-distance-time-graphs/?srsltid=AfmBOor3lj77tRBQ1GtH7V5v1fqPK7Dh6zpXfSLibpPvCkVGFtxKuqan
4 Ways to Teach Distance-Time Graphs

Are you looking for fun ways to teach distance-time graphs? One of my favorite topics to teach my students is distance-time graphs! But what I have found over the years is that my students need multiple exposures to distance-time graphs, not just one distance-time graph worksheet. They tend to get so confused about them, so I love having multiple ways to teach and review distance-time graphs! Here are some of my favorite ways to teach distance-time graphs to my students, and I hope they'll bring some new or fresh ideas to your science classroom!

Teach Distance-Time Graphs with Interactive Digital Lessons

This is the year for digital lessons, am I right? I originally created digital lessons for face-to-face teaching, and they incorporated a lot of interaction! But when we shifted to distance learning, having digital lessons and notes for students both at home and in school has been a lifesaver! Some of my favorite digital lessons to use for teaching distance-time graphs are linked here!

Story-Match

Story matching activities are a great way to challenge students! These activities push my students to apply what they have learned about distance-time graphs! Students are given real-life situations and stories about distance and time. It's their job to match the story to the graph. It's important for students to understand distance-time graphs and how motion is graphed. This helps them distinguish how the two are graphed together in a super fun and engaging activity!
I also love using these as an engaging activity during the holidays! Check out this Christmas theme and Valentine's Day theme! If you're looking for a review or assessment, this story-match activity can be used in both print and digital formats.

Distance-Time Graphs Task Cards

Task cards are great for differentiation, and my students love them! Task cards are cards that have tasks for students to complete. Students can record their answers or complete the tasks in their notebooks, on a pre-created answer sheet, on whiteboards, and more. The possibilities are truly endless, and that's why I love using them in my classroom! You can set up task cards in so many ways to keep it fresh and exciting for your students. Some of my favorite ways to have my students complete task card activities are "around the room," centers, early finishers, or even as an assessment! Task cards are so versatile and a great way to incorporate movement in the classroom! Here are some of my favorite task cards to use when teaching distance-time graphs: Distance-Time Task Cards and Describing Functions Task Cards.

Write Your Own

There's no better way to assess whether my students have mastered distance-time graphs than having them write their own scenarios! You can provide pictures of graphs and have your students create the written part of the graph. My students love using their creativity to write up their own scenarios, and it shows me whether they are able to apply what they have learned. These are great for my early finishers or students who like a challenge! You can create your own graphs, or you can use this resource to get started!

Have you ever wondered how to bring the WOW Factor to your science classroom?

December 3, 2020
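The story-match idea above can also be sketched in code: a written story becomes a function from time to distance, and the points it produces are exactly what a distance-time graph plots. Here is a minimal Python sketch — the scenario, speeds, and function name are all made up for illustration, not taken from any of the linked resources. Note the flat segment while the walker is standing still, which is the detail students most often miss.

```python
# Hypothetical story: a student walks to a friend's house, stops to chat,
# then jogs back home. A distance-time graph of this story plots position
# (distance from home) against time.

def position_at(minute):
    """Distance from home, in meters, at a given minute of the story."""
    if minute <= 4:            # minutes 0-4: walk away at 50 m/min
        return 50 * minute
    elif minute <= 7:          # minutes 4-7: stop and chat (flat segment)
        return 200
    else:                      # minutes 7-9: jog home at 100 m/min
        return max(0, 200 - 100 * (minute - 7))

# The (time, distance) pairs a student would plot or match to the story:
graph_points = [(t, position_at(t)) for t in range(10)]
print(graph_points)
```

Students can be given the point list (or its plot) and asked to recover the story: rising segment, flat segment, steeper falling segment.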
190607
https://www.omnicalculator.com/statistics/normal-approximation
Normal Approximation Calculator

The normal approximation calculator (more precisely, the normal approximation to binomial distribution calculator) helps you perform a normal approximation for a binomial distribution.

What is normal approximation to binomial distribution?

The normal approximation of the binomial distribution is a process where we apply the normal distribution curve to estimate the shape of the binomial distribution. The fundamental basis of the normal approximation method is that the distribution of the outcomes of many experiments is at least approximately normally distributed. If you are not familiar with that typically bell-shaped curve, check our normal distribution calculator.

How to calculate normal approximation — normal approximation calculator with steps

You need to set the following variables to run the normal approximation to binomial calculator. Before using the tool, however, you may want to refresh your knowledge of the concept of probability with our probability calculator.

1. Problem setup

First, tell the normal approximation calculator about the probabilistic problem:

- the number of occurrences or trials (N);
- the probability of success (p) or the probability of failure (q = 1 − p);
- the number of successes (n); and
- the probability you would like to approximate, selected as the event restatement:
  - P(x = n) — the probability of exactly n successes;
  - P(x > n) — the probability of more than n successes;
  - P(x ≤ n) — the probability of at most n successes;
  - P(x < n) — the probability of fewer than n successes; or
  - P(x ≥ n) — the probability of at least n successes.

2. Results

After specifying the problem, you can immediately read both the final and partial results:

- the mean (μ);
- the variance (σ²);
- the standard deviation (σ);
- the problem statement;
- the continuity correction;
- the Z-score;
- the Z-value; and
- the approximated probability.
How do I calculate normal approximation to binomial distribution?

If you want to compute the normal approximation to the binomial distribution by hand, follow the steps below.

1. Find the sample size (the number of occurrences or trials, N) and the probabilities p and q — the probability of success (p) and the probability of failure (q = 1 − p).
2. Check whether you can apply the normal approximation to the binomial. If N×p and N×q are both larger than 5, you can use the approximation without worry.
3. Find the mean (μ) by multiplying N with p: μ = N×p.
4. Compute the variance (σ²) by multiplying N, p, and q: σ² = N×p×q.
5. Determine the standard deviation (σ) by taking the square root of the variance: σ = √(N×p×q).
6. State the problem (the number of successes, n) using the continuity correction factor according to the table below. (You can learn more about this concept with our continuity correction calculator.) This is necessary because the normal distribution is a continuous probability distribution while the binomial distribution is a discrete one.

| Problem statement | Continuity correction |
| --- | --- |
| x = n | n − 0.5 < x < n + 0.5 |
| x ≤ n | x < n + 0.5 |
| x < n | x < n − 0.5 |
| x ≥ n | x > n − 0.5 |
| x > n | x > n + 0.5 |

7. Find the Z-score with z = (x − μ)/σ.
8. Look up the Z-value in the Z-table.
9. Determine the probability associated with the Z-value according to the table below; this is the normal approximation of the binomial probability.

| Problem statement | Probability |
| --- | --- |
| x = n | difference of Z-values for n + 0.5 and n − 0.5 |
| x ≤ n | Z-value |
| x < n | Z-value |
| x ≥ n | 1 − Z-value |
| x > n | 1 − Z-value |

Normal approximation to the binomial — Example #1

Assume you have a fair coin and want to know the probability that you would get at most 40 heads after tossing the coin 100 times.

1. Gather information from the above problem.
- N = 100 (number of occurrences or trials);
- n = 40 (number of successes); and
- p = 0.5 (probability of success on a given trial).

2. Verify that the sample size is large enough to use the normal approximation: N×p = 50 ≥ 5 and N×(1−p) = 50 ≥ 5. We're good!

3. State the problem using the continuity correction factor: x ≤ 40 ∴ P(x < 40.5).

4. Find the mean (μ) and standard deviation (σ) of the binomial distribution:
μ = N×p = 100×0.5 = 50
σ² = N×p×(1−p) = 100×0.5×(1−0.5) = 25
σ = √25 = 5

5. Find the Z-score using the mean and standard deviation:
z = (x − μ)/σ = (40.5 − 50)/5 = −9.5/5 = −1.9

6. Find the Z-value and determine the probability. The Z-table value for 1.9 is 0.4713, so P(x < 40.5) ≈ 0.5 − 0.4713 = 0.0287.

Thus, the probability that a coin lands on heads at most 40 times during 100 flips is 0.0287 or 2.8717%.

Normal approximation to the binomial — Example #2

Assume you have reliable data stating that 60% of the working people in a given city commute to work by public transport. If a random sample of 30 working persons is selected, what is the probability that precisely 10 of them travel by public transport?

1. Gather information from the above statement.
- N = 30 (number of occurrences or trials);
- n = 10 (number of successes); and
- p = 0.6 (probability of success on a given trial).

2. Verify that the sample size is large enough to use the normal approximation: N×p = 18 ≥ 5 and N×(1−p) = 12 ≥ 5. As these numbers are nice and large, we're good to go!

3. State the problem using the continuity correction factor: x = 10 ∴ P(9.5 < x < 10.5).

4. Find the mean (μ) and standard deviation (σ) of the binomial distribution:
μ = N×p = 30×0.6 = 18
σ² = N×p×(1−p) = 30×0.6×(1−0.6) = 7.2
σ = √7.2 = 2.6833

5. Find the two Z-scores using the mean and standard deviation:
z₁ = (9.5 − 18)/2.6833 = −8.5/2.6833 = −3.168
z₂ = (10.5 − 18)/2.6833 = −7.5/2.6833 = −2.795

6. Find the Z-values and determine the probability:
P = (Z-value for z₂) − (Z-value for z₁) = 0.0026 − 0.0008 = 0.0018

Thus, the probability that precisely 10 of the 30 randomly chosen people travel by public transport is 0.0018 or 0.18%.
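Both worked examples follow the same recipe — continuity correction, Z-score, then the normal CDF. The recipe can be sketched in a few lines of Python. This is an illustrative implementation, not the calculator's actual code; `math.erf` stands in for the Z-table, and the function names are mine.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, replacing the Z-table lookup."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_approx(N, p, n, event):
    """Normal approximation to a binomial probability with continuity correction.

    event is the problem statement: one of '=', '<=', '<', '>=', '>'.
    """
    mu = N * p                          # mean
    sigma = math.sqrt(N * p * (1 - p))  # standard deviation
    if event == '=':                    # P(n-0.5 < x < n+0.5)
        z1 = (n - 0.5 - mu) / sigma
        z2 = (n + 0.5 - mu) / sigma
        return normal_cdf(z2) - normal_cdf(z1)
    # Continuity corrections from the table above:
    corrected = {'<=': n + 0.5, '<': n - 0.5, '>=': n - 0.5, '>': n + 0.5}[event]
    z = (corrected - mu) / sigma
    if event in ('<=', '<'):
        return normal_cdf(z)
    return 1.0 - normal_cdf(z)          # '>=' and '>' use 1 - Z-value

# Example #1: P(x <= 40) for a fair coin tossed 100 times
print(round(normal_approx(100, 0.5, 40, '<='), 4))   # 0.0287
# Example #2: P(x = 10) with N = 30, p = 0.6
print(round(normal_approx(30, 0.6, 10, '='), 4))     # 0.0018
```

Using `erf` avoids the small rounding differences that come from reading a four-decimal Z-table, which is why the last digits may differ slightly from hand computations.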
FAQs

Can I use normal approximation if the product of the trials and the probability of the event is less than five?

No. The number of trials (or occurrences, N) relative to its probabilities (p and 1 − p) must be sufficiently large (N×p ≥ 5 and N×(1−p) ≥ 5) for the normal distribution to approximate the probabilities related to the binomial distribution.

What is normal approximation to binomial distribution?

The normal approximation to the binomial distribution is a process by which we approximate the probabilities related to the binomial distribution.

What is the z-score of 60.5 occurrences when the mean is 50 and the standard deviation is 5?

The z-score is 2.1 for the event of 60.5 occurrences (x = 60.5) with a mean of 50 (μ = 50) and a standard deviation of 5 (σ = 5). The computation takes the form z = (x − μ) / σ = (60.5 − 50) / 5 = 10.5 / 5 = 2.1.

What are the main steps for the normal approximation to binomial distribution?

You should take the following steps to proceed with the normal approximation to binomial distribution.

1. Find the number of occurrences or trials (N) with its probability (p).
2. Check that the number of trials is sufficiently high (N×p ≥ 5 and N×(1−p) ≥ 5).
3. Apply a continuity correction by adding or subtracting 0.5 from the discrete x-value.
4. Find the mean (μ) and standard deviation (σ).
5. Find the z-score: z = (x − μ) / σ.
6. Find the probability associated with the z-score.
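The N×p ≥ 5 rule of thumb from the FAQ can be checked directly by comparing the approximation against the exact binomial probability. A quick sketch under my own function names — the exact CDF comes straight from the binomial formula via `math.comb`:

```python
import math

def binom_cdf_exact(N, p, n):
    """Exact P(x <= n) summed from the binomial probability mass function."""
    return sum(math.comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n + 1))

def binom_cdf_approx(N, p, n):
    """Normal approximation of P(x <= n) with continuity correction."""
    mu, sigma = N * p, math.sqrt(N * p * (1 - p))
    z = (n + 0.5 - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# For N = 100, p = 0.5 (so N*p = 50 >= 5), the two agree to about three decimals:
print(binom_cdf_exact(100, 0.5, 40), binom_cdf_approx(100, 0.5, 40))
```

For small N×p the gap widens, which is exactly why the FAQ's first answer refuses the approximation in that regime.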
190608
https://chemistry.stackexchange.com/questions/98137/nomenclature-of-common-naming-of-alkynes-vs-alkenes-both-ene
Nomenclature of common naming of alkynes vs alkenes; both -ene? - Chemistry Stack Exchange
Got it!Go to help center to learn more Nomenclature of common naming of alkynes vs alkenes; both -ene? Ask Question Asked 7 years, 3 months ago Modified7 years, 2 months ago Viewed 286 times This question shows research effort; it is useful and clear 2 Save this question. Show activity on this post. Is there a difference between common naming of alkenes vs alkynes? Why is Ethylene the common name for an alkene and acetylene for an alkyne? Isn’t acet- the common form of naming ethyl groups anyway so why is ethyl used in a common name? nomenclature Share Share a link to this question Copy linkCC BY-SA 4.0 Cite Follow Follow this question to receive notifications asked Jun 10, 2018 at 19:07 LaurenLauren 81 1 1 silver badge 2 2 bronze badges 1 Related Q: Naming system -“ylene”mykhal –mykhal 2018-11-10 09:16:36 +00:00 Commented Nov 10, 2018 at 9:16 Add a comment| 1 Answer 1 Sorted by: Reset to default This answer is useful -3 Save this answer. Show activity on this post. The official IUPAC name for ethyl with a double bond is ethene, but this is a relatively recent change. Before the official name (as an exception) was ethylene. I’m not sure about propene/propylene but am fairly confident the official IUPAC name for propyl double bond is propene and not propylene. However, these compounds are often referred to with the unofficial name as per tradition. For alkynes, the IUPAC name is, for ethyl, the name is ethyne. Don’t remember to specify the location of the double/triple bond (ex: 2 - ethene) For alkenes, the formula is C X n H X 2 X n+2 C X n H X 2 X n+2 and for alkynes is C X n H X 2 n C X n H X 2 n. Share Share a link to this answer Copy linkCC BY-SA 4.0 Cite Follow Follow this answer to receive notifications edited Jun 19, 2018 at 19:31 user7951 answered Jun 11, 2018 at 19:40 H. KhanH. 
Comments:

1. To be precise, the non-systematic name "ethylene" was still retained in the 1979 IUPAC recommendations but was no longer recommended in the 1993 recommendations. – user7951, Jun 19, 2018 at 19:29

2. Is there a 2-ethene? For acyclic cmpds: alkanes, CnH2n+2; alkenes, CnH2n; alkynes, CnH2n-2. – user55119, Jul 20, 2018 at 12:31
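The general formulas quoted in the answer and corrected in the comments can be sketched as a quick arithmetic check. This is a sketch of my own; the function name `hydrogen_count` is hypothetical, and the formulas follow the comment's correction (alkanes CnH2n+2, alkenes CnH2n, alkynes CnH2n-2):

```python
def hydrogen_count(n_carbons, family):
    """Hydrogen count for a simple acyclic hydrocarbon,
    per the general formulas in the comment above."""
    formulas = {
        "alkane": 2 * n_carbons + 2,  # CnH2n+2
        "alkene": 2 * n_carbons,      # CnH2n (one double bond)
        "alkyne": 2 * n_carbons - 2,  # CnH2n-2 (one triple bond)
    }
    return formulas[family]

# ethane C2H6, ethene (ethylene) C2H4, ethyne (acetylene) C2H2
assert hydrogen_count(2, "alkane") == 6
assert hydrogen_count(2, "alkene") == 4
assert hydrogen_count(2, "alkyne") == 2
```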
190609
https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:poly-graphs
190610
https://www.youtube.com/watch?v=DfriBc6Mod4
OpenStax: Algebra and Trigonometry - Chapter 1, Section 3 | Radicals and Rational Exponents
Scalar Learning · 117000 subscribers · 51 likes · 3300 views · Posted: 7 Apr 2022

Description
Welcome to Huzefa's explanation video of the OpenStax Algebra and Trigonometry textbook. This is a full walkthrough of Chapter 1, Introduction to Prerequisites, Section 3, Radicals and Rational Exponents. Watch Huzefa as he reviews exercises 1-73 odd.

To skip to a particular question, use the chapters below:
00:00 Introduction
00:15 Exercise 1
01:01 Exercise 3
01:31 Exercise 5
01:47 Exercise 7
02:10 Exercise 9
02:23 Exercise 11
03:11 Exercise 13
04:04 Exercise 15
04:25 Exercise 17
05:49 Exercise 19
06:33 Exercise 21
07:15 Exercise 23
08:07 Exercise 25
08:31 Exercise 27
09:11 Exercise 29
10:59 Exercise 31
12:02 Exercise 33
13:12 Exercise 35
13:37 Exercise 37
13:50 Exercise 39
14:50 Exercise 41
15:14 Exercise 43
15:43 Exercise 45
16:18 Exercise 47
17:15 Exercise 49
18:57 Exercise 51
20:36 Exercise 53
23:08 Exercise 55
23:47 Exercise 57
24:22 Exercise 59
24:52 Exercise 61
26:25 Exercise 63
27:55 Exercise 65
28:52 Exercise 67
31:45 Exercise 69
33:49 Exercise 71
35:32 Exercise 73
Transcript:

Introduction
what's up everybody and welcome to openstax algebra and trigonometry chapter 1 section 3 radicals and rational exponents let's do it

Exercise 1
what does it mean when a radical does not have an index is the expression equal to the radicand explain so we're talking about the index of the radical we're talking about the little number that goes up here so when there is no index we assume that it's a 2. as such if i had a radical without an index and 16 for example this expression would not simply equal 16. this expression will be asking us to take the square root of 16 which really asks the question what times itself equals 16 that is of course 4. the only way this expression would ever be equal to itself is if i took a radical with the 16 and the index were 1. this really never happens so this is the one exception but pretty much assume no index and it's a 2.

Exercise 3
every number will have two square roots what is the principal square root so we're talking about a square root let's say square root of 25. so when they're talking about it will have two square roots check it out the first square root that we think of is 5 because 5 times 5 is 25 the other root that also exists is negative 5. that's because negative 5 times negative 5 is still 25 that's because a negative times a negative is a positive when we talk about the principal square root it is the positive version

Exercise 5
for this problem we're asked to take the square root of 256 meaning what number times itself equals 256.
now lucky for us 256 is a perfect square and the square root of 256 is simply 16 boom done

Exercise 7
but with a problem like this first we're going to simplify the inside following order of operation so 9 plus 16 is of course 25 and then we have the square root of 4 times 25 so what is 4 times 25 that is 100 last but not least we're going to take the square root of 100 again it's a perfect square what times itself equals 100 that is 10 boom done

Exercise 9
here again we're taking the square root of a perfect square 196 is a perfect square so what number times itself equals 196 it is 14 so the square root of 196 is 14 done

Exercise 11
here we're taking the square root of a number that's not a perfect square but it does have perfect squares within it so i'm going to show you how to simplify this so we're going to do a prime factorization tree 98 is simply 2 times 49 and 49 is 7 times 7. so now i can rewrite this as the square root of 2 times 7 times 7. and when i'm looking to simplify i'm looking for perfect squares aka twins because again a perfect square is simply a number times itself to get that number well check this out i got a pair of sevens so i can take the square root of seven times seven again which is 49 and that is just a single seven so it's like those twins come out as a single number so it comes out like that so my final answer simplified is 7 times the square root of 2 boom done

Exercise 13
in this case i've got a radical of a fraction now i can rewrite this as follows i can rewrite this as square root of 81 over square root of 5. now once i do that this is pretty cool because i can see that the square root of 81 is 9 because 9 times 9 is 81 and then i have a square root of 5 on the bottom now this is pretty much simplified but generally we don't want to have a radical in the denominator to get rid of that radical what i'm going to do is i'm going to multiply by the square root of 5 over the square root of 5.
this is allowed because this is simply equal to one so we're not changing the value of the fraction on top we're going to get nine rad five on the bottom we're going to get just five because the square root times itself is just the number the other way to think about it is square root of five times square root of five is square root of 25 which of course is 5. this is the final answer done

Exercise 15
for this one we're going to try to simplify both of these first now 169 is actually a perfect square so the square root of 169 is 13 because 13 times 13 is 169. likewise 144 is a perfect square that is 12 times 12 and then we add these together 13 plus 12 is 25 boom done

Exercise 17
in this one 162 is not a perfect square so we're gonna try and simplify it first and then remove the radical from the denominator so 18 over now for 162 we're going to make another prime factorization tree and we've got 2 times 81 gives us 162. that's a prime number and then 81 is 9 times 9. now i can keep going but i can already see i got a pair of numbers technically i should keep going to do the prime factorization tree correctly 3 3 3 3 but i don't need to because this is really nice so check this out i can now say it's 2 times 9 times 9. there's a pair of numbers that comes out as a single 9. now i've got 18 over 9 rad 2.
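The "twins" procedure from Exercises 11-17 — factor the radicand and pull each pair out of the square root — can be sketched as a small helper. This is one way to code the idea; the function name `simplify_sqrt` is my own:

```python
def simplify_sqrt(n):
    """Return (outside, inside) with sqrt(n) == outside * sqrt(inside),
    pulling every perfect-square factor (each "pair of twins") out of n."""
    outside, inside = 1, n
    f = 2
    while f * f <= inside:
        while inside % (f * f) == 0:  # a pair of f's comes out as a single f
            inside //= f * f
            outside *= f
        f += 1
    return outside, inside

assert simplify_sqrt(98) == (7, 2)   # sqrt(98) = 7*sqrt(2), Exercise 11
assert simplify_sqrt(162) == (9, 2)  # sqrt(162) = 9*sqrt(2), Exercise 17
```

So 18/sqrt(162) = 18/(9*sqrt(2)) = 2/sqrt(2) = sqrt(2), matching the transcript's answer.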
now i can simplify the 18 over nine and divide both by nine so i've got two and one so now i've got two over one times rad two which is just rad two last but not least i need to get that radical out of the denominator so i'm gonna multiply by rad two over rad two and on top i get two rad two on the bottom i get rad two times rad two so again a radical times itself just becomes the number without the radical the other way to think about it is square root of two times square root of two is square root of four and square root of four is of course two last but not least i can simplify by dividing top and bottom by two boom boom and i'm left with square root of two over one or simply square root of two boom done

Exercise 19
so in this one we can't combine these because i often think of radicals like variables when they're different it's just like different variables they're not like terms but what i can do is simplify 24 and recognize that 24 is simply 4 times 6 and we can even stop here because 4 is a perfect square then i can rewrite everything like so and then i can take the square root of 4 and that comes out the 6 stays trapped inside and that's 2 and that comes out here and multiplies the number in front the 6. so now i got 14 rad six minus 12 rad six now we've got the same radical these are now like terms so now i can simply subtract the coefficients 14 minus 12 is two rad six boom done

Exercise 21
since 150 is not a perfect square i'm going to break it up into prime factors so first i have 2 and 75 2 is prime then i have 3 and 25.
now again i could go to 5 and 5 and complete the prime factorization but i kind of don't need to instead i'm going to stop at this point because i recognize that 25 is a perfect square so check this out i'm going to rewrite this as 2 times 3 times 25 then i'm going to take the actual square root of 25 which is 5 and that comes out there now i'm left with 5 and since 2 and 3 there's no more perfect squares left i can recombine them and we've got 5 times rad 6 boom done

Exercise 23
when we have a radical times a radical the cool thing is the insides can actually multiply against each other so square root of 42 times square root of 30 equals square root of 42 times 30 like so now to simplify this i could multiply these together and get a giant number or instead what i can do is i can start to break these up into factors so for example i can take 42 and know that it's 7 times 6. i can also look at 30 and know that it's 6 times 5. now look at that again we have twins we have a perfect square as in 6 times 6. so we take the square root of that and that's just a single 6 which we bring out here since seven and five are both primes there's no more perfect squares within them so i'm left with six times and let's recombine seven times five which is the square root of 35 boom done

Exercise 25
now we have the square root of four over 225 both of which are perfect squares so what i'm going to do is i'm going to separate the radicals which i'm allowed to do in these fractions and write it as the square root of 4 over the square root of 225 and then i'm going to take the individual square roots so what times itself is 4 that's 2 what times itself is 225 that's 15 there we go boom done

Exercise 27
again i'm going to split this up as the square root of 360 over the square root of 361. this is pretty nice because the denominator is actually a perfect square 361 is 19 times 19.
as such we get 19 on the bottom but the square root of 360 we're gonna have to break that up a little more so again i could do a prime factorization tree but right off the bat i'm noticing something cool 360 is 10 times 36. the reason why i like that is because 36 is a perfect square so i can take the square root of 36 which is of course 6 and now i'm left with 6 rad 10 over 19 done

Exercise 29
in this case all i want to do is get the radical out of the denominator so i need to figure out how to do that but since it's not there by itself meaning it's not just 8 over square root of 17 it's 8 over 1 minus square root of 17. i can't just multiply by a single radical instead i have to multiply by something called the conjugate so the conjugate is the exact same thing it's 1 and square root of 17 but since there's a minus here i flip that to be a plus and since i can't just multiply by 1 plus square root of 17 on the denominator i have to multiply that by that on the numerator as well because then i have something over itself the same expression over the same expression since that equals 1 this is feasible because this will not actually change the value of this expression now i'm going to multiply and distribute and i get 8 plus 8 square root of 17 on top on the bottom we're going to foil i've got 1 times 1 which is 1. 1 times square root of 17 which is square root of 17. negative square root of 17 times 1. again that minus sticks to it so that gives us a negative rad 17. and last but not least i got negative rad 17 times positive rad 17 again the radicals will cancel out when it's the same number under those radicals and i'm left with negative 17. these two are opposites therefore they just cancel out like that and then i got 1 minus 17 which is negative 16. so i've got 8 plus 8 rad 17 over negative 16.
last but not least i'm going to divide everything by negative 8 and i've got negative 1 minus rad 17 over two or alternatively i can write this by pulling that negative out of the entire numerator and just throwing it in front like so leaving me with one plus rad 17 over two both answers are acceptable done

Exercise 31
in this one we've got an actual index of three meaning we're taking the cube root of these numbers so instead of looking for twins now i'm looking for triplets so first let's break 128 up into whatever we can possibly so 128 would be 2 times 64. 64 is 8 times 8 8 is simply 2 times 4 likewise here as well and 4 breaks up into 2 times 2. so what i'm left with for 128 is the cube root of one two three four five six seven twos so once again since i'm taking the cube root i'm looking for triplets so i see three twos there that comes out i see another three twos and that comes out and as they come out they multiply each other so then i get four cube root of two plus three cube root of two again now since we have the same radicals we can add the 4 and the 3 just like if we're adding 4x plus 3x that gives us 7 cube root of 2 for the win done

Exercise 33
so here we have 15 times the fourth root of 125 over the fourth root of 5. so the first thing i'm going to do is i'm going to combine these into one fraction so this becomes 15 times the fourth root of 125 over 5. this is pretty cool because 5 actually goes into 125 so i can say it's now 15 times the fourth root of 25.
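Exercise 29's conjugate rationalization can be sanity-checked numerically; this float comparison is my own addition, not part of the video:

```python
import math

original = 8 / (1 - math.sqrt(17))       # radical still in the denominator
rationalized = -(1 + math.sqrt(17)) / 2  # after multiplying by the conjugate and dividing by -8
assert math.isclose(original, rationalized)
```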
now you might think well there's nothing that is to the fourth power within 25 so you might say okay we're done but in fact you can go a step further so check this out i can rewrite this as 15 times the fourth root of five squared then i can actually simplify this radical and the way i can do it is if there is a common factor between the power and the index i can actually divide by that greatest common factor and simplify this so since it's four and two and they're both divisible by two i can rewrite it as 15 to the second radical five to the first so i've simplified this and by the way i don't even need to write the two because it's the square root so that's implied i don't need to put the one so the final simplified form is 15 square root of five boom done

for the following exercises simplify each expression

Exercise 35
so we're taking the square root of 400 x to the fourth so we'll look at these two and take the square root of each of them independently so first the square root of 400 that's 20 that comes out like that and guess what x to the fourth has a perfect square root as well it is x squared because x squared times x squared is x to the fourth so our final answer is 20x squared boom done

Exercise 37
here we've got 49 which is a perfect square and p which is not so i can take the square root of 49 which is 7 but p is going to stay trapped inside the square root prison so our final answer is 7 square root of p

Exercise 39
so in this case we've got a rational exponent and i'm going to explain how to convert this into radical form so the numerator represents the power that m is being raised to the denominator represents the index of the radical so this is the same as the square root and again i don't need to put a 2 here because when there's nothing it's implied that it's the square root and then inside it's m to the fifth times square root of 289 now i could merge these but the thing is 289 is a perfect square so i'm going to take the square root of that
which is 17. i'm going to place that out there then i'm going to look at m to the fifth and see if there's any perfect squares in that so of course m to the fifth is m to the fourth times m to the first i'll just say m and m to the fourth anything with an even exponent is a perfect square so the square root of m to the fourth is m squared so that can come outside so i'll write it in our final format with the coefficient first 17 m squared with a m trapped inside the square root boom done

Exercise 41
in this first expression we can take out the b squared because b squared is a perfect square so the square root of b squared is of course just b so now we have 3b times the square root of a minus b times the square root of a since they're both the same radicals here we can subtract these coefficients so 3b minus 1b is simply 2 b square root of a done

Exercise 43
so here the first thing i can do is i can actually simplify the x cubed over the x because it's basically 3 minus 1 that's what we do with division we subtract the exponents and that's 2 so i'm left with simply x squared on the top and the x is gone on the bottom now i can go through and start taking the square root of all of these because guess what they're all perfect squares so that's really nice so the square root of 225 is 15. square root of x squared is x and the square root of 49 is 7 boom done

Exercise 45
so again i want to try and find the perfect squares within 50. i already know y to the 8th is a perfect square because it's y to an even exponent so i'm going to rewrite 50 as 2 times 25.
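Exercise 39's rule — the denominator of the rational exponent is the radical index, the numerator is the power — can be spot-checked on a sample value. This assumes from the worked steps that the expression is (289 m^5)^(1/2); m = 7 is an arbitrary choice of mine:

```python
import math

m = 7.0
# (289 * m**5)**(1/2) should equal the simplified radical form 17 * m**2 * sqrt(m)
assert math.isclose((289 * m ** 5) ** 0.5, 17 * m ** 2 * math.sqrt(m))
```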
i like that because i know 25 is a perfect square so i'm not going to try and break it down any further and y to the 8th is cool so guess what the square root of 25 is 5 and the square root of y to the 8th is y to the fourth because y to the fourth times y to the fourth would give us y to the eighth so i'll write our final answer as five y to the fourth square root of two boom done

Exercise 47
so first things first here i'm going to simplify this fraction by dividing 32 and 14 by 2 and we get 16 over 7d so i'm going to split this up as square root of 16 over square root of 7d and we can always split it up like that that is not a violation of any rule so then i'm going to say the square root of 16 is 4 and on bottom we have square root of 7d once again we need to get rid of the radical in the denominator so since it's a nice square root i'm going to multiply by itself so i'm going to multiply top and bottom by square root of 7d and again this is allowed because a square root of 7d over square root of 7d it's the same numerator as denominator so that's like 1.
you can always multiply by 1 doesn't change the value of this fraction here these guys multiply and when you multiply a radical times its exact self it just becomes what's under the radical which is 7d and on top we get 4 rad 7d for the win done

Exercise 49
in this case this is a little interesting because we have a part rational part irrational so what we have to do is we have to multiply by the conjugate to get rid of the radical in the denominator we can also simplify the numerator which i'm going to save till the end so the conjugate in this case would be the same as this except you switch the minus with a plus so it's times 1 plus rad 3x and we have to do the same on the top as well so first we multiply this one over like so and we get rad 8 plus rad 8 times 3x which is 24x over and then we got a foil so 1 times 1 which is 1 1 times rad 3x which is rad 3x then negative rad 3x times 1 which is minus rad 3x and then negative rad 3x times positive rad 3x is just going to be 3x but since it's a negative and a positive it's going to be negative now these radicals you notice they conveniently cancel out and that's what we wanted and then on the bottom we're left with 1 minus 3x the top we're pretty much good but we can extract more from these radicals so for example we know that 8 is 2 times 4 4 is a perfect square so we can take the square root of 4 and that's just a 2 that comes out and this 2 remains inside the radical likewise with 24 that is the same as 4 times 6.
4 is a perfect square the square root of 4 is 2 and that's 6 and the x stays trapped inside so we got 2 times rad 6x this is the final answer done

Exercise 51
first i'm going to rewrite these rational exponents in radical form so again the denominator represents the radical index the numerator represents the power so this is the same as the square root of w cubed and this is the same as the square root of w cubed as well so now recognizing that w cubed is the same as w squared times w in both cases i can take the square root of the w squared which is of course a perfect square the square root of w squared is w and what's left inside is a w likewise 32 is 2 times 16. that's pretty nice because i know that 16 is a perfect square so the square root of 16 is 4 and that 2 remains inside over here we're going to do the same thing we're going to pull out a w and leave a w inside and i recognize that 50 is also 2 times 25. now remember if you don't see these perfect squares right there you can just do a prime factorization tree and look for the doubles look for the 5 and the 5 because it would be 2 times 5 times 5. oh i got a 25 that's a perfect square take that out but here i'm just going to stop because i know 25 is a perfect square the square root of 25 is of course 5 and that 2 remains trapped inside now i'm going to simplify by multiplying these together remember the outsides multiply each other and the insides multiply each other so i have 4w times the square root of 2w minus 5w times the square root of 2w now since they share the same radical i'm just going to do 4w minus 5w which is negative w square root of 2w boom done

Exercise 53
in this case i'm actually going to simplify that 12 first because i know that 12 is the same as 4 times 3 and 4 is a perfect square so i can take the square root of 4 and rewrite the numerator as 2 square root of the 3 stays trapped inside and that x is going to stay there with it and on the denominator we've got 2 plus 2 rad 3.
now the main objective here is to get rid of this radical in the denominator and again since i have a rational plus this irrational portion i have to multiply by the conjugate the conjugate is the same as this but it's a minus instead of a plus once we get to this point we're going to simplify the numerator first by distributing boom and boom and we get four because the outsides multiply each other four rad three x minus two times two is four rad three x times three the insides multiply each other that's nine x over and now we gotta foil that denominator so two times two is four two times negative two root three is negative four root three two root three times two is positive four root 3 and 2 root 3 times negative 2 root 3 is negative 4 times 3 because the square root of 3 times square root of 3 is just 3. so and the 2 and the 2 make negative 4 because that minus so we get negative 4 times 3 which is negative 12. once again these middle terms cancel out and that's the point of multiplying by the conjugate so we have no radical and then we get 4 minus 12 which is negative 8. now on the numerator i can simplify this part because i have a perfect square in that nine the square root of nine is three so i can take a three out and remember it's going to multiply that four so three times four is twelve red x and over here we cannot simplify that so that's 4 rad 3 x last but not least if you recognize what all the coefficients they're divisible by 4. 
So, dividing them all by 4, I get: square root of 3x, minus 12 divided by 4, which is 3 rad x, over negative 8 divided by 4, which is negative 2. And just for fun, if I want to get rid of that negative in the denominator, I can take that negative out, apply it to the entire numerator, and flip it, just so we have a positive denominator. It doesn't really matter one way or the other, but just to show you: the 2 becomes positive, this then becomes positive, we'll throw that in the front, and this becomes negative, for the final answer. Done.

Exercise 55

Here we're just trying to extract as much as we can from the square root. I'm going to rewrite this: 125 is the same as 5 times 25, and that's nice, because 25 is a perfect square. And n to the 10th, well, it's an even exponent, so it must be a perfect square. The square root of 25 is 5, and that comes outside; the other 5 is going to stay trapped inside. And the square root of n to the 10th, what is that? It's kind of like dividing the exponent by 2, because n to the 5th times n to the 5th is n to the 10th. So that's a perfect square we can remove, and then of course the 5 stays trapped inside. Boom, done.

Exercise 57

In this case I could simplify the m over m squared to just leave an m in the denominator, but I'm not going to do that, because then I would remove the ability to completely take everything out of the radical on the denominator. So I'm just going to leave it as is, and I'm going to separate it into a radical on top of a radical. And the nice thing about 361 is that it's a perfect square: the square root of 361 is 19.
The square root of m squared is m, so it's completely gone on top. We have the square root of 81, that's another perfect square, which is 9, and then the m stays trapped inside. For the win, done.

Exercise 59

Here everything is a perfect square. That's super nice, so I'm just going to separate it out again; you don't even really need to do this, but I will, just for fun. The square root of 144 is 12, the square root of 324 is 18, and the square root of d squared is simply d. Now I can simplify: 12 and 18 are both divisible by 6, so I'm going to divide them both by 6. 12 divided by 6 is 2, and 18 divided by 6 is 3. There's our final answer. Done.

Exercise 61

In this case I could divide the 162 and the 16 by 2, but I'm not going to do that, because 16 is a perfect fourth power. But I will do it with the x to the 6th and the x to the 4th, because that's going to remove the x completely from the denominator. Remember, when I have x to the 6th over x to the 4th, I subtract the exponents, which gives me 6 minus 4, which is 2. That 162 I'll leave as is for now, and this is all over the fourth root of 16. And by the way, if I had simplified earlier, it wouldn't change the outcome; I'm just trying to show you the path of least resistance. So the fourth root of 16 is simply 2, because 16 is 2 times 2 times 2 times 2. Now we have a 2 on the bottom. Then on top I'm going to rewrite 162 as 2 times 81, and the x squared is simply x squared. The reason why I got the 81 is, again, you need to know these perfect fourths, and you may not know them. To recognize that 81 is special, we can do a prime factorization tree, like so: 81 is 3 times 27, 27 is 3 times 9, and 9 is 3 times 3. So guess what, 81 is 3 times 3 times 3 times 3.
So it is a perfect fourth power, and the fourth root of 81 is 3. Now we're almost done: I can take the fourth root of 81, which is of course 3, so the final answer is 3 times the fourth root of 2x squared, over 2. For the win, done.

Exercise 63

Now we've got cube roots here and here, so I'm going to show you the perfect cube within 128 with the prime factorization tree. 128 is 2 times 64, and 64 is a perfect cube; I'll show you why. 64 can be broken down into 4 times 16, and 16 is of course 4 times 4. Guess what, I got a 4, a 4, and a 4; that means 64 is 4 cubed. There's your perfect cube. In negative 16 it's going to be a little bit easier, because negative 2 times negative 2 times negative 2 gives me negative 8; that's the perfect cube I can have there, and then I can pull out a 2. First I'm going to rewrite it, so I've got the cube root of 2 times 64 z cubed (and of course, since this is to the third power, it's a perfect cube), minus the cube root of 2 times negative 8 times z cubed. Now I'm going to extract the perfect cubes. The cube root of 64 is 4; I'm going to take the cube root of z cubed, which is just z; and then I've got the cube root of 2 left inside. Minus the cube root of negative 8, which is negative 2, and the cube root of z cubed, which is z, and inside I've got the cube root of 2.
This is awesome, because now I've got the same cube root here and here, so I can combine these. Minus a negative becomes a plus, so 4z plus 2z is 6z, cube root of 2. Boom, done.

Exercise 65

A guy wire for a suspension bridge runs from the ground diagonally to the top of the closest pylon to make a triangle. We can use the Pythagorean theorem to find the length of the guy wire needed. The square of the distance between the wire and the pylon along the ground is ninety thousand (square) feet; the square of the height of the pylon is one hundred sixty thousand (square) feet. So the length of the guy wire can be found by evaluating the square root of ninety thousand plus one hundred sixty thousand. What is the length of the guy wire? What I'm going to do, just like in every other example, is simplify first by adding those values underneath the square root. Ninety thousand plus 160,000 is 250,000, and this just so happens to be a perfect square, because 250,000 is the same as 500 times 500. Since we have a pair of twins, 500 and 500, the square root of this is simply 500. For the win, done.

Exercise 67

First of all, I'm going to rewrite 2 to the one-half power as the square root of 2; again, anything to the one-half power is just the square root of that number. Then I'm going to simplify what's on the numerator as much as possible. Remember, 8 is simply 2 times 4, and that's nice because we can take the square root of 4, which is 2. So I have 2 times the remaining square root of 2, minus the square root of 16; 16 is a perfect square, and the square root of 16 is 4. Over: there's nothing we can do with this, it's 4 minus radical 2. Okay, now I see that I have a radical in the denominator. That's no good; I want to get rid of it. So what I'm going to do is multiply by the conjugate of 4 minus rad 2.
So I get 2 square root of 2 minus 4, over 4 minus square root of 2, times the conjugate: the same numbers, 4 and radical 2, but with a plus instead of the minus, on both the numerator and the denominator. Now remember, I've also got this minus rad 2 out front, but I'm not going to write it until the next step. On top I'm going to FOIL, so I get 8 rad 2 (the outside numbers interact; they do not interact with the inside number), plus 2 rad 2 times rad 2. Well, guess what: rad 2 times rad 2 just becomes a 2, a liberated 2, so now we have 2 times simply 2, which is 4. Then we've got negative 4 times 4, which is negative 16, and then negative 4 times rad 2, which is negative 4 rad 2. Over: 4 times 4, which is 16; 4 times rad 2, which is 4 rad 2; negative rad 2 times 4, which is minus 4 rad 2; and negative rad 2 times rad 2, where the rad 2s just turn into a 2, but the negative makes it minus 2. Like so. Now, on the denominator these radicals, 4 rad 2 and negative 4 rad 2, cancel. Then we have 16 minus 2, which is 14, on the denominator. On the top we can combine these guys: 8 rad 2 minus 4 rad 2 is simply 4 rad 2, and 4 minus 16 is negative 12. Then, to further simplify, I can divide everything by 2 and I get 2 rad 2 (when I say divide by 2, I mean the outside numbers, not the numbers inside the radical), and then minus 6, over 7. And we still have that minus square root of 2 over here. Last but not least, I'm going to consolidate this into a single expression. I can get a nice common denominator by putting this over 1 and then multiplying the top and bottom by 7, because I'm not changing anything, right? 7 over 7 is 1. And since 7 times 1 is 7, I'm just going to remove that 1. So now I've got 2 rad 2 minus 7 rad 2: same radical, so I can combine. 2 minus 7, subtracting the coefficients, is negative 5 rad 2, then minus 6, all over 7.
There's the answer. Boom, done.

Exercise 69

Here there's a lot of simplification that needs to happen. The first thing I'm going to do, with these negative exponents, is move them down, since when they're negative, dropping them to the denominator makes them positive. Since there's nothing left on the numerator, I'm going to place a 1 on top, and then I've got a to the 7th, n squared, times the square root of m squared c to the 4th. Now, I'm not too worried about this negative 3, because I'm going to multiply across first and then start extracting things from the radicals, and you'll see that I'll just be able to add this exponent to this one, because they're both under the radical, so they will interact. On top I have the square root of m n cubed, over a squared times a to the 7th; we add the exponents to get a to the 9th, and then we still have an n squared, times the square root of m squared, and then c to the negative 3 times c to the 4th; we add negative 3 plus 4, which is just c. Now I'm going to start to simplify. n cubed is simply n squared times n, so I can take the square root of that n squared, which is n; that comes out, and the m and the n are left inside, over a to the 9th n squared. That m squared is a perfect square, so I'm going to extract it as an m, with a square root of c left inside. Again, I don't mind the radical on the numerator; I just need to get rid of that rad c in the denominator. So I'm going to multiply by rad c over rad c; because we're talking about square roots, it multiplied by itself will turn into just c. Last but not least, I have n times the square root of m, n, and c (because they're under the radical in multiplication, they interact and combine), over a to the 9th, n squared, m, and then rad c times rad c, which is simply c. And because I have an n over n squared, those can interact; this cancels out with one of the n's. Our final form is rad mnc, over a to the 9th, n, m, c. Done.
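A quick way to sanity-check simplifications like these is to evaluate both sides at sample values. Here is a minimal sketch in Python; the input expressions are reconstructed from the spoken walkthrough, so treat them as my reading of each problem rather than the textbook's exact wording.

```python
import math

# Numeric spot-checks of the simplifications above at sample values.
w, x, z = 2.0, 5.0, 3.0
close = lambda a, b: abs(a - b) < 1e-9

# Exercise 51: sqrt(32 w^3) - sqrt(50 w^3) simplifies to -w sqrt(2w)
ok51 = close(math.sqrt(32 * w**3) - math.sqrt(50 * w**3), -w * math.sqrt(2 * w))

# Exercise 53: sqrt(12x) / (2 + 2 sqrt(3)) simplifies to (3 sqrt(x) - sqrt(3x)) / 2
ok53 = close(math.sqrt(12 * x) / (2 + 2 * math.sqrt(3)),
             (3 * math.sqrt(x) - math.sqrt(3 * x)) / 2)

# Exercise 61: fourth root of 162 x^6 / (16 x^4) simplifies to 3 * (2 x^2)^(1/4) / 2
ok61 = close((162 * x**6 / (16 * x**4)) ** 0.25, 3 * (2 * x**2) ** 0.25 / 2)

# Exercise 63: cbrt(128 z^3) - cbrt(-16 z^3) simplifies to 6 z cbrt(2)
cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)  # real cube root, handles negatives
ok63 = close(cbrt(128 * z**3) - cbrt(-16 * z**3), 6 * z * 2 ** (1 / 3))

# Exercise 65: sqrt(90000 + 160000) = sqrt(250000) = 500
ok65 = math.sqrt(90000 + 160000) == 500.0

# Exercise 67: (sqrt(8) - sqrt(16)) / (4 - sqrt(2)) - sqrt(2) = (-5 sqrt(2) - 6) / 7
ok67 = close((math.sqrt(8) - math.sqrt(16)) / (4 - math.sqrt(2)) - math.sqrt(2),
             (-5 * math.sqrt(2) - 6) / 7)

print(ok51, ok53, ok61, ok63, ok65, ok67)  # all True
```

A check like this only confirms the two expressions agree at the chosen sample values, but for radical simplifications that is usually enough to catch a dropped factor or a sign error.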
Exercise 71

Here on top we've got a perfect square in 64. The square root of 64 is 8, so I'm going to bring that out, and of course the y stays inside, plus 4 rad y. For the denominator, 128 is 2 times 64, and that's pretty nice: I can take the square root of 64, which is 8, so I pull an 8 outside of that radical, and the 2 is left inside with the y. Now I need to get rid of this rad 2y in the denominator, so I'm going to multiply by rad 2y over rad 2y. On top we'll have to distribute, distribute, and again the radicals will multiply each other, so I'll have 8x rad 2, and then y times y is y squared, plus 4 rad 2 y squared. Over: rad 2y times rad 2y is 2y, and then 2y times 8 is 16y. Now I've got perfect squares within these radicals: the square root of y squared is y, in both. So I'm going to rewrite this as 8xy times rad 2, plus 4y rad 2, over 16y. Even though they have the same radicals, they have different variables in these numerators, so I can't combine them, but I can simplify by dividing every coefficient by 4 and every term by y, because they all have a 4 and a y in them. I can divide this one and get simply 2x rad 2; I can divide this one by 4y and simply get rad 2; and on the bottom, 16y divided by 4y is simply 4. For the win, done.

Exercise 73

Here we've got a cube root, a 4th root, a square root, and a square root, all within another giant square root, so there's a lot of work to be done. First, 64 is a nice perfect cube, because it's 4 times 4 times 4, so I can take the cube root of 64 as 4. 256 is something to the fourth power: 4 times 4 is 16, times 4 is 64, times 4 is 256, so the fourth root of 256 is 4. Over the square root of 64; that's a perfect square, it's 8 times 8, so the square root of 64 is 8. And 256 is a perfect square as well; the square root of 256 is 16.
All of this is still under a square root. So 4 plus 4 is 8, and 8 plus 16 is 24. Now I'm going to simplify this fraction to become 1/3, right? Divide top and bottom by 8, and I get the square root of 1/3. Then I'm going to separate it out and make it the square root of 1 over the square root of 3. Well, guess what: the square root of 1 is just 1. And the square root of 3 is a problem; I don't want a square root in the denominator. So I'm going to multiply by rad 3 over rad 3. The bottom will just become 3, because a radical times itself just becomes the number underneath the radical, and 1 times the square root of 3 is the square root of 3. For the win, done.

I hope you enjoyed this video, and if you did, please click that like button. If you want to see more from the Scalar Learning channel, make sure to click subscribe. Thank you guys so much for joining, and I'll see you in the next video. Take it easy!
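The last two exercises can be spot-checked numerically the same way; again, the input expressions below are my reconstruction of the spoken problems, not the textbook's exact wording.

```python
import math

# Numeric spot-checks for exercises 71 and 73 at sample values.
x, y = 3.0, 5.0
close = lambda a, b: abs(a - b) < 1e-9

# Exercise 71: (x sqrt(64y) + 4 sqrt(y)) / sqrt(128y)
#              simplifies to (2x sqrt(2) + sqrt(2)) / 4
ok71 = close((x * math.sqrt(64 * y) + 4 * math.sqrt(y)) / math.sqrt(128 * y),
             (2 * x * math.sqrt(2) + math.sqrt(2)) / 4)

# Exercise 73: sqrt( (cbrt(64) + fourthroot(256)) / (sqrt(64) + sqrt(256)) )
#              = sqrt((4 + 4) / (8 + 16)) = sqrt(1/3) = sqrt(3)/3
ok73 = close(math.sqrt((64 ** (1 / 3) + 256 ** 0.25) /
                       (math.sqrt(64) + math.sqrt(256))),
             math.sqrt(3) / 3)

print(ok71, ok73)  # True True
```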
190611
https://www.vocabulary.com/dictionary/cozen
Other forms: cozened; cozens; cozening

To cozen is to mislead, defraud, or fool someone through lies. Cozen rhymes with dozen, and if you say you had two wrong answers on your math test, but you really had a dozen, you might be trying to cozen your parents.

While not related in roots, the first part of cozen sounds like the slang word "cuz." If someone asks why you lied, you might say "Cuz I didn't want you to know the truth." And to cozen is to keep the truth hidden and deceive or cheat. Using a trick to get something is one way to cozen, and if you tell a partial truth, there's still a part lie or an attempt to cozen and mislead.

Definitions of cozen

1. verb: be false to; be dishonest with
synonyms: deceive, delude, lead on
types: betray, sell (deliver to an enemy by treachery); cheat, chisel (engage in deceitful behavior; practice trickery or fraud); shill (act as a shill); flim-flam, fob, fox, play a joke on, play a trick on, play tricks, pull a fast one on, trick (deceive somebody); befool, fool, gull (make a fool or dupe of); betray, cheat, cheat on, cuckold, wander (be sexually unfaithful to one's partner in marriage); hoax, play a joke on, pull someone's leg (subject to a playful hoax or joke); ensnare, entrap, frame, set up (take or catch as if in a snare or trap); humbug (trick or deceive); double cross (betray by double-dealing); job (profit privately from public office and official business); shark (play the shark; act with trickery); rig, set up (arrange the outcome of by means of deceit); crib (use a crib, as in an exam); two-time (carry on a romantic relationship with two people at the same time); cook, fake, falsify, fudge, manipulate, misrepresent, wangle (tamper, with the purpose of deception); snooker (fool or dupe); fool around, play around (commit adultery)
type of: victimise, victimize (make a victim of)

2.
verb: cheat or trick
"He cozened the money out of the old man"
type of: acquire, gain, win (win something through one's efforts); cheat, chisel, rip off (deprive somebody of something by deceit)

3. verb: act with artful deceit
type of: cheat, chisel (engage in deceitful behavior; practice trickery or fraud)

Vocabulary lists containing cozen

The Vocabulary.com Top 1000: The top 1,000 vocabulary words have been carefully chosen to represent difficult but common words that appear in everyday academic and business writing. These words are also the most likely to appear on the SAT, ACT, GRE, and TOEFL. To create this list, we started with the words that give our users the most trouble and then ranked them by how frequently they appear in our corpus of billions of words from edited sources. If you only have time to study one list of words, this is the list.

Tricky Terms for April Fool's Day: You'd be a fool not to learn these words related to pranks, jokes, and deceit. For more on the history of these words, read The Cunning, Risible Holiday of April Fool's Day.

The Pearl, John Steinbeck: In this novella based on a Mexican folk tale, a poor diver rejoices when he finds an enormous pearl, but the treasure may not be the blessing it seems to be.
190612
https://www.sheppardsoftware.com/SAT%20vocab%20web/SATV_wordlist_hmls/SAT_wordlist_11.htm
SAT/GRE Vocabulary Prep. Common Vocabulary Words with definitions. Page 11 (501-550)

To implement means to carry out, to provide with the means of carrying out.
Prudent means careful, having foresight, sensible, planning ahead.
Depravity refers to extreme wickedness, viciousness, or corruption.
Generally visionary is used to refer to a dreamer, someone with impractical ideas about the future, someone who is very idealistic.
Innate means inborn, inherent, naturally a part of.
Judicious means sensible; wise; having, using, or showing good judgment.
Unconscionable means unscrupulous, not controlled by conscience.
A corollary is a natural consequence, something that follows, a logical extension.
Maudlin means overly sentimental, sentimental in a weak and silly way.
Laconic means concise, using few words.
A mentor is a special counselor or teacher, a more experienced person who helps and/or sponsors a less experienced person.
Ambiguous means having more than one meaning, or open to several interpretations, and therefore confusing, not clearly defined.
To exalt means to glorify or praise highly, to honor or elevate.
Figurative means not literal; based on figures of speech; metaphorical.
Husbandry relates to farming and agriculture.
Exacting means demanding of perfection, difficult, requiring great skill or precision, hard to please.
Philanthropy refers to charity, to a love of mankind expressed through good deeds and helpfulness.
Tentative means uncertain or experimental.
Peripatetic means wandering, moving about, travelling from place to place, nomadic.
Lugubrious means overly sad, sorrowful, exaggeratedly mournful, melancholy.
Proprietary refers to ownership, especially of property.
To advocate means to support, to speak or write in favor of, to recommend.
Latent means present, but inactive or concealed.
To mitigate means to make less severe, make more bearable.
Sordid means filthy or dirty.
Sensory means pertaining to the senses or sensation.
A euphemism is a gentler way of saying something.
A polemic is a controversial argument.
Indolent means lazy, disliking of work, idle.
Ignominy means deep disgrace, public shame, or the loss of one's good name.
Chaff refers to that which is worthless.
Overt means open to view, not hidden, apparent.
Impeccable means flawless, precise, perfectly executed.
An anachronism is something that is out of place in time, often referring to a throwback to an earlier time.
Cosmopolitan means international, belonging to all parts of the world, comfortable anyplace, not limited to just one place, and, therefore, when referring to a person, sophisticated.
A reprisal is a retaliation, an injury done in return for an injury, especially a military action taken in retaliation.
Innocuous means harmless, not hurtful or injurious.
A precursor is a forerunner; an early stage, which gives rise to a more important stage.
Garrulous means talkative, often about insignificant things; chatty, using too many words.
Mellifluous means sweetly or smoothly flowing; melodic; sweetened, as if with honey.
Balm refers to something that heals.
To loll is to lounge around, to recline or lean in a lazy manner.
Magnanimous means forgiving and generous, having a noble spirit, being free from pettiness.
A protagonist is a leading character in a play, story, or movie; a champion, or hero.
To rebut means to refute, to disprove, to argue against, to contradict with evidence.
Deleterious means harmful, injurious.
Sacrilege means disrespect of the sacred, an intentional insult or injury to that which is sacred.
To inaugurate means to formally begin, especially to install into office with a ceremony.
Loquacious means talkative.
Mercurial usually means changeable, especially with regard to mood, fickle.
190613
https://webbook.nist.gov/cgi/cbook.cgi?ID=B6000488&Mask=80
Sodium thiocyanate

National Institute of Standards and Technology, NIST Chemistry WebBook, SRD 69

Formula: CNNaS
Molecular weight: 81.072
IUPAC Standard InChI: InChI=1S/CHNS.Na/c2-1-3;/h3H;/q;+1/p-1
IUPAC Standard InChIKey: VGTPCRGMBIAPIM-UHFFFAOYSA-M
Chemical structure: also available as a 2d Mol file or as a computed 3d SD file.
Species with the same structure: sodium thiocyanate
Information on this page: IR Spectrum; References; Notes

IR Spectrum

Data compilation copyright by the U.S. Secretary of Commerce on behalf of the U.S.A. All rights reserved.
Data compiled by: Coblentz Society, Inc.

Condensed Phase Spectrum

[Infrared spectrum plot: "Sodium Thiocyanate Infrared Spectrum"; x-axis: Wavenumbers (cm-1), approximately 1000 to 4500; y-axis: Transmittance, approximately 0.50 to 0.95.]
Use or mention of technologies or programs in this web site is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that these items are necessarily the best available for the purpose.

Notice: Except where noted, spectra from this collection were measured on dispersive instruments, often in carefully selected solvents, and hence may differ in detail from measurements on FTIR instruments or in other chemical environments. More information on the manner in which spectra in this collection were collected can be found here.

Notice: Concentration information is not available for this spectrum and, therefore, molar absorptivity values cannot be derived.

Additional Data: scan of original (hardcopy) spectrum; image of digitized spectrum (can be printed in landscape orientation); spectrum image in SVG format; spectrum in JCAMP-DX format.

| Owner | COBLENTZ SOCIETY Collection (C) 2018 copyright by the U.S. Secretary of Commerce on behalf of the United States of America. All rights reserved. |
| Origin | MELLON INSTITUTE |
| Source reference | COBLENTZ NO. 488 |
| Date | Not specified, most likely prior to 1970 |
| State | SOLID (NUJOL AND FLUOROLUBE MULLS); PURITY UNKNOWN |
| Instrument | Not specified, most likely a prism, grating, or hybrid spectrometer. |
| Resolution | 4 |
| Sampling procedure | TRANSMISSION |
| Data processing | DIGITIZED BY NIST FROM HARD COPY |

This IR spectrum is from the Coblentz Society's evaluated infrared reference spectra collection.

References

Data compilation copyright by the U.S. Secretary of Commerce on behalf of the U.S.A. All rights reserved.
No reference data available.
Notes

Data from NIST Standard Reference Database 69: NIST Chemistry WebBook.

The National Institute of Standards and Technology (NIST) uses its best efforts to deliver a high quality copy of the Database and to verify that the data contained therein have been selected on the basis of sound scientific judgment. However, NIST makes no warranties to that effect, and NIST shall not be liable for any damage that may result from errors or omissions in the Database.

© 2025 by the U.S. Secretary of Commerce on behalf of the United States of America. All rights reserved. Copyright for NIST Standard Reference Data is governed by the Standard Reference Data Act.
190614
https://www.youtube.com/watch?v=RLoto_i_dec
Prob & Stat, Lec 6A: Discrete Uniform Distribution (PMF, CDF, Mean, Variance, MGF (roll a fair die))
Bill Kinney (35,000 subscribers). 389 views, 12 likes. Posted: 16 Sep 2023

Description: Roll a fair 6-sided die. Let X = the outcome (1, 2, 3, 4, 5, 6). Then X has a discrete uniform distribution with probability mass function (PMF) f(x) = 1/6 when x = 1, 2, 3, 4, 5, 6 and f(x) = 0 otherwise. We then find the cumulative distribution function (CDF), the mean (expected value) μ = E[X], the variance Var(X) = E[X^2] - (E[X])^2, and the moment generating function (MGF) m(t). The MGF generates the moments E[X^k]: compute its derivatives and plug in zero. The standard deviation σ = √Var(X) is interpreted using the two-standard-deviations-from-the-mean rule of thumb. (Probability and Statistics with Applications: A Problem Solving Text, by Asimow and Maxwell.) Calculus-based Probability and Statistics for Engineers and Scientists, Lecture 6, Part 1. Also for Data Scientists and Actuarial Science majors. #ProbabilityAndStatistics #UniformDistribution #DiscreteRandomVariable
Transcript: Our first example today, as we delve into chapter three, is the experiment of rolling one fair six-sided die once. Roll a fair six-sided die, and let our random variable capital X be whatever you get, the outcome: a one, a two, a three, a four, a five, or a six. Should we think of this in terms of sample spaces? Well, we could; we've done that before. Do we have to think of it in terms of sample spaces? No. We can think of it in terms of this new idea of a discrete random variable. This is a discrete random variable (I will always abbreviate "random variable" as RV) with a corresponding distribution. Our book calls this a PDF, for probability density function, but again, our book is 20 years old; if you look up more recent books on probability theory, they will more typically call this a probability mass function, or PMF, and I do think that's better terminology.

Here's why it's better terminology, in a nutshell. With discrete random variables, the PMF, the probability mass function, actually computes probabilities for you: probability masses, so to speak. In chapter 4, with continuous random variables, we will have a PDF (we'll still use little f of x for it), but when you plug numbers into the PDF you don't get probabilities, you get probability densities, which we will ultimately see means you have to integrate PDFs to find actual probabilities. It is quite analogous, in physics, to having a number line with individual point masses, for which you can, for example, find the center of mass, versus, say, a thin rod that's got a density function, a mass per unit length, that you have to integrate to find the mass or center of mass. That's Calc 2 kind of stuff; you should have learned about it, and this is very much analogous to that.

What's the probability mass function going to be in this case? Well, if it's a fair six-sided die, each side has the same likelihood of occurring: 1/6.
This is going to give the probability that capital X equals little x. Don't get confused by the two equals signs in here. Does f of x equal "P, parenthesis, capital X equals little x, end parenthesis"? These are two distinct quantities, f of x and P of capital X equals little x: the value of f of x at the input little x is the probability that the random variable equals that value little x. Capital X is kind of like an abstract thing, an idea, so to speak. (What is a random variable, really? If you want to be extra technical about it, grad-school-level stuff, it's actually a function, but I'm not going to get into that.) Little x is a number, but it's an arbitrary number, so in a sense it's also a variable. But what does this equal? It equals 1/6 if little x is one or two or three or four or five or six, and it equals zero elsewhere. Yes, for some technical reasons we do want this to be defined for all x: it's going to equal 1/6 for 6 values of x, namely if little x is one, two, three, four, five, or six, and it's going to equal zero elsewhere, for all other values of x. I like to abbreviate this "o.w." for "otherwise"; sometimes they say "elsewhere" or "for other x." So really we're defining this probability mass function, this PMF, for all values of x, for all real values of x.

If we were to graph this function (I was just talking about this in my Calc 1 class yesterday), it would be a graph that's got some removable discontinuities. With my calc class yesterday I was saying these are kind of like dumb removable discontinuities, because there are holes, there are points missing from the graph, but the points are elsewhere, so to speak. The graph would be mostly the horizontal axis, except at exactly one, exactly two, exactly three, four, five, and six, where the output at those numbers is 1/6.
But elsewhere it's zero. I taught Calc 1 yesterday and talked about these kind-of-dumb removable discontinuities; I was mentioning some applications, and I was thinking of physics applications, but here's an application of "dumb" discontinuities. Well, in this case they're actually not so dumb; they're important. This graph represents the probability mass function in this context, and yes, you've got six removable discontinuities, where the limit as x approaches any one of these numbers is zero, but the function value is 1/6. To tell you the truth, they're not dumb here; they're essential. This is what's going on; this is what the model is about.

I did forget last time, or I ran out of time, to talk about CDFs, cumulative distribution functions: the capital F that you encountered in the reading and the homework. Let's talk about that now. Capital F of little x: in both chapters three and four I will call this a cumulative distribution function, CDF for short. Some people shorten these names even further: they'll just call little f a probability function, and they'll call capital F a distribution function. Well, I think it is good to emphasize the word "mass" here, to distinguish it from a density in chapter four, and the word "cumulative," because it is accumulating probabilities in a sense: it's the probability that your random variable capital X is less than or equal to the given number little x. And we can also define this for all values of x as well. Word of warning, back up here: this equality holds because the random variable capital X is discrete. That equality right there is not going to work in chapter four with continuous random variables; again, I already mentioned that six minutes ago. The PDF, probability density function, in chapter four, when you plug in numbers, doesn't give you probabilities; you have to integrate it to get probabilities. Here, because X is discrete, little f of x is an actual probability. It is a positive
number, in this case when x is one of these six numbers, and it is never negative, it's always greater than or equal to zero. You can't get negative probabilities, and it can't be bigger than one. All the probability laws from chapter two are still going to hold. This equation is true no matter what, even in chapter four with continuous random variables: CDFs, capital F of x, always equal that probability, always. You might wonder, are discrete random variables and continuous random variables all there is for random variables? The answer is no. There are mixed random variables that are part discrete and part continuous. How is that possible? I'll probably find time to talk about one example sometime, but not today. What does this equal? Evidently I have to use the pmf, effectively add up its values. In the discrete case this equation is only going to work since X is discrete, and it's also only going to work because X takes on whole number values with positive probabilities; I'd have to be a little more careful if it were not the case that X takes on only whole number values, integer values, with positive probabilities. Okay, I could write what I'm about to write a little more generally, but for the sake of this example I'll just write what it is: it's a sum of values of little f. For this example I'm going to call the input of little f, for this summation, t, and I'm going to let t start at... should I start it at 1 or 0? I'll start it at one. And do I go up to x? Well, you saw it in the reading: technically we should go up to the greatest integer in x, which I write like that; I think the book writes it more like this. Okay, I could write this summation a little more generally: I could actually start t at zero, or even negative 1 or negative 2, or sort of think of this as being for all t less than or equal to that. I mean, it's a little funny with summations, because you're
implicitly assuming with the summation that t keeps going up by one each time, so it's a little funny, but it's not a real big deal; the key thing is we're adding up values of little f. What is the greatest integer in x? Well, if x is a whole number, the greatest integer in x is that whole number itself; if x is an integer, the greatest integer in x is x itself. If x is not an integer, like 4.1, the greatest integer in 4.1 is 4; that's the greatest integer less than 4.1. If x is 4.9, the greatest integer in 4.9 is also 4; that's the greatest integer that's still less than 4.9. Once you get to 5, the greatest integer in 5 is 5, the greatest integer, I should have said, less than or equal to 5. So effectively, in this example we would imagine mostly plugging in whole number values for x, like one, two, three, four, five, or six, but doing this allows us to think of the CDF as being defined, once again, for all x. It doesn't quite match t starting at 1, but effectively the graph of the CDF is defined for all values of x, and it has jump discontinuities, in this example, at these whole number values one through six. It's zero when x is less than one; at one it jumps up to 1/6, because the probability of capital X being less than or equal to 1 is the same as the probability of X equaling one for this example. It technically stays at 1/6 until you get to two, where it jumps up to 2/6, one third. What's the probability that when I roll a die I get less than or equal to 1.9? It's the same as the probability of getting less than or equal to one, which is the same as the probability of getting a 1, which is 1/6,
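As a quick sketch of what was just described (the names `f` and `F` here just mirror the lecture's little-f/big-F notation; this code is illustrative, not from the book): the fair-die pmf, and the CDF defined for all real x using the greatest-integer (floor) function.

```python
from fractions import Fraction
import math

# pmf of a fair six-sided die: 1/6 at x = 1,...,6, zero elsewhere
def f(x):
    return Fraction(1, 6) if x in {1, 2, 3, 4, 5, 6} else Fraction(0)

# CDF: F(x) = P(X <= x) = sum of f(t) over whole numbers t up to floor(x)
def F(x):
    return sum(f(t) for t in range(1, math.floor(x) + 1))

print(F(0.5))   # 0, to the left of the first jump
print(F(1))     # 1/6
print(F(1.9))   # still 1/6: you can't roll a 1.9
print(F(6))     # 1, and F stays at 1 for all larger x
```

The step-function shape falls right out: F only changes value when floor(x) crosses a whole number, which is exactly where the jump discontinuities sit.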
because I can't get 1.9. It turns out, for making the theory all work together nicest, to be best to think of this as defined for all x, and it keeps jumping up by 1/6 of a unit every time you get to the next whole number: here it would be at one half, at four it would be up to two-thirds, when we get to five it's jumped up to 5/6, and when we get to six it jumps up to one and stays at one thereafter, for all values of x bigger than 6. So technically speaking the CDF is defined for all x, and in the discrete case it's going to be a step function with jump discontinuities. In this example the jumps all have the same size, they go up 1/6, but in general they don't have to be the same size. This kind of distribution has a name: it's called the uniform distribution, because these outputs are all uniform, they're all the same number, 1/6, at these six discrete values of x. You can generalize this to a general uniform distribution that's got n discrete values at which the probabilities are 1 over n. Does that make sense? Catch what I said: n discrete values at which the probabilities are 1 over n. Like if n is 100, there are 100 discrete values at which the probability is one over a hundred. And obviously the sum of all the pmf values is one; we can write the sum of the little f of x values as one. I could write "all x" here in general, though in this example x really goes from one to six, so that's equivalent, for this example, to writing that the sum of the values of little f of x, as x varies from one to six, is one. Okay. All right, what else did we talk about the other day? How about the mean, the variance, and the standard deviation, still for this example. What would make sense for the mean? Hmm, how about something right in the middle, 3.5? Seven halves, that seems like it should be the mean, shouldn't it? It's like a center, it's like a center of mass. Yeah, the formula for the
mean can be thought of as the same as the formula for a center of mass when your total mass is one. When you add up all these point masses that have a mass of 1/6, you get a total mass of one. It's like physics: that's the balance point for your seesaw, or teeter-totter if you prefer. What's the mean, mu, also called the expected value of X? We use both notations; you should be completely comfortable with both notations and realize these are always the same thing: mu equals E of X, the mean equals the expected value of X, always, for the given random variable. Sometimes we might put a subscript on the mu to emphasize what the random variable is; should I do that? I won't bother here, but sometimes we will. What does it equal? In general it's a sum over all x of x times the pmf. For this example that becomes a sum, x going from one to six, of x times 1/6. You can factor out the 1/6 and write it this way, and obviously you can certainly add up the first six numbers. You should maybe remember, from calculus 2 maybe, or maybe discrete math, that the sum of the first n positive integers in general is n times n plus one over two. That's what we've got here, the first six positive integers; n is six, so this is n times n plus 1 over 2. The sixes cancel, giving seven halves; the sum itself is 42 over 2, which is 21.
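The mean formula, and its long-run-average interpretation, can both be sketched in a few lines. This is an illustrative demo, not from the book; the seed and the number of rolls are arbitrary choices.

```python
import random

# Exact mean: mu = E[X] = sum over x of x * f(x) = (1/6)(1 + 2 + ... + 6) = 21/6
mu = sum(range(1, 7)) / 6
print(mu)   # 3.5

# Long-run average interpretation: roll a fair die many times and average.
random.seed(0)   # arbitrary seed, just for reproducibility
rolls = [random.randint(1, 6) for _ in range(200_000)]
average = sum(rolls) / len(rolls)
print(average)   # close to 3.5, but almost surely not exactly 3.5
```

With a couple hundred thousand rolls the simulated average lands within a few thousandths of 3.5, which is the law-of-large-numbers behavior described below.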
21 times 1/6 is 3 and a half, just like I guessed a minute ago. The mean is a measure of central tendency, and since the distribution is uniform, the probabilities are uniformly spread across these six equally spaced numbers, so it makes sense that the mean is right there in the middle. Now, the mean is not a particular value that X can take on with positive probability. How do you interpret this mean? You interpret it as the theoretical long-run average: take a die, a fair die, roll it, record the outcome, then do that hundreds of thousands, maybe even millions of times, and take the average outcome. It should be very close to 3.5. Now, will it be exactly 3.5? Probably not, and it certainly will not be with just one roll of the die. This is analogous to our first class period, where we were doing the coin simulation and seeing the graph approach 0.5 for the probability of heads. I called that the law of large numbers; something similar is going on here with the mean. What about the variance? The variance is a theoretical measure of spread that's harder to interpret than the standard deviation, but sort of easier to work with abstractly, because you don't have to worry about the square roots. The definition of the variance is an expected value as well, but not of X; it's the expected value of really a different random variable: X minus mu, quantity squared. Now X is a random variable, and X minus its mean is another random variable. If the values that X can take on are one, two, three, four, five, and six, as it turns out with equal probability, so that the mean is 3.5, then the values that X minus mu could take on would be 1 minus 3.5, negative 2.5; 2 minus 3.5, negative 1.5; then negative 0.5, positive 0.5, positive 1.5, and positive 2.5. Those would be the six values that X minus mu could take on for this example, and they would have equal probability. Square those numbers: negative 2.5 and positive 2.5 both have a square equal to 6.25, negative 1.5 and positive
1.5 both have a square equal to 2.25, and negative 0.5 and positive 0.5 both have a square equal to 0.25. This random variable, for our example, can take on these three values and no others, and the probability of each of those would be one-third, right, 2 times 1/6; this would have a uniform distribution as well. What should its mean be? It should be the average of those three numbers. I'm guessing the answer ahead of time here; we'll see if we're right. Add them up, divide by three: the average is 2.916 repeating. Is that really what we're going to get? We'll have to wait and see. There's a theorem that gives what's called the computational formula for the variance: the expected value of X squared, minus the expected value of X, quantity squared. Those are again different things, and the only time you ever get zero when you subtract them is if X is a constant random variable, taking on one value with probability one while every other value has probability zero. That's a constant random variable; it's not really random, but we can think of it as random if we want. Otherwise you're going to get a positive quantity here: the expected value of X squared is, in general, always bigger than the expected value of X, quantity squared, except in the case of constant random variables. It's actually a fairly important fact that's easy to derive from this, because that's never negative; it's related to Jensen's inequality, which has applications beyond probability and statistics. But what's this? Well, okay, we could think of it this way: again, take the possible values of X and square them. One squared is 1, 2 squared is 4, 3 squared is 9, 4 squared is 16, 5 squared is 25, 6 squared is 36.
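The computational formula just stated can be checked in exact arithmetic, so the fractions about to appear on the board are visible without rounding. A sketch, not from the book:

```python
from fractions import Fraction

# Computational formula: Var(X) = E[X^2] - (E[X])^2, in exact fractions
sixth = Fraction(1, 6)
EX  = sum(x * sixth for x in range(1, 7))        # 7/2, the mean
EX2 = sum(x * x * sixth for x in range(1, 7))    # 91/6, average of the squares above
var = EX2 - EX**2                                # 91/6 - 49/4 = 35/12
print(EX, EX2, var)      # 7/2 91/6 35/12
print(float(var))        # 2.9166..., the "2.916 repeating" guessed a moment ago
```

Using `Fraction` keeps the 91/6 minus 49/4 equals 35/12 arithmetic exact, matching the common-denominator-of-12 step done by hand below.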
What's the average of those numbers? It's 15.16 repeating, and that looks like it's going to be what the expected value of X squared is. In general this expected value is the sum over all x of x squared times the pmf values, which for our example is going to be the sum, x going from one to six, of x squared times 1/6. Yeah, that'll be 1/6 times what I just said: 1 plus 4 plus 9 plus 16 plus 25 plus 36. If I plugged those into my calculator right, that was 91 over 6, which is 15.16 repeating, 15 and 1/6. So using this formula, it now looks like we get 91 over 6 minus seven halves squared; seven halves was the mean, and seven halves squared is 49 over 4. Get a common denominator of 12: we'll get 182 minus 147, over 12. I hope I didn't make a mistake; that'll be 35 over 12. I think this is right. Lo and behold, 35 over 12 is 2.916 repeating, just like I guessed: that was the average of 0.25, 2.25, and 6.25. You can generalize this to an arbitrary uniform distribution. There is a handy-dandy table in chapter three that I'm going to let you use; I'll give you a printout of it on exams. It's on page 78, second entry: name, uniform; density, really mass, 1 over m. These are the values of little f of x when X takes on these m values. The x's that you see have subscripts; the subscripts are integers, but the x's themselves don't have to be integers (they were for the dice rolling example). Skip this moment generating function entry for now. The mean is the sum of the x_i's divided by m; it's an arithmetic average, like you may have learned about when you learned stats in your past. The mean for a discrete uniform random variable is the same as the arithmetic average, almost always labeled x-bar in statistics. And there's the variance: that's the expected value of X squared (dividing by m is the same as multiplying each of those squared x_i's by 1 over m) minus the expected value of X, quantity squared. Can this be
simplified? Yeah, but the author's just not bothering to; you could square this out and pull out a common factor. Let me know if I should clarify anything. You also learned about something else in your reading; well, lots of other things: properties of expected values and variance, the expected value ones I mentioned last time. You also learned about a mysterious kind of thing called the moment generating function, mgf for short. We have pmfs, CDFs, mgfs, and in chapter four we'll have PDFs. The moment generating function for a given random variable X, M sub X of t (here we are emphasizing the name of the random variable), is also an expected value, of something really weird: e raised to the t times capital X power. X is a random variable; if t is a fixed number, then t times X would be a random variable, and e to the tX would be a random variable. For any fixed t this is a random variable, and I can find its expected value and call that M_X of t. But hey, that's kind of like function notation: is t really a fixed number, or is it a variable? When you think about calculating this you think of it as fixed, but then you say, hey, my answer depends on t, so in the end this is a function of t. Okay. Well, this is weird: who would care about moment generating functions? It would be a sum over all x; you always put whatever's inside the square brackets, times little f of x (I hope that pattern has become clear by now), except you use the lowercase x. Here the distinction between uppercase and lowercase gets important for conceptualizing all this and not misunderstanding it. These are big X's; that's a lowercase x there. Capital X is the random variable, an abstract idea (yes, in grad school math it's a function; you can forget that I ever said that). Little x you think of as an honest-to-goodness number, not really a random variable; an ordinary variable, maybe, you might call it. Yeah,
whatever goes there, you put it there, except you change the random variable to a number, change the capital X to a lowercase x, and it gets multiplied by the pmf, and you do summations like this for these expected values for discrete random variables. With continuous random variables, the summations are going to become integrals, like magic. Is it really magic? No, it just feels like it. The sum turns into an integral; remember that from calc 2, the Riemann sum, poof, turns into an integral, like magic. Is it really magic? No, it's really, in a sense, a definition, but the definition needs to be justified; it's not really magic, it just feels like magic, and it's fun to pretend it's magic. So for our example, coming back to the dice rolling example, it would be this sum of e to the tx times 1/6, which you can actually write out as 1/6 e to the t, plus 1/6 e to the 2t (x is changing; I'm putting the x in front of the t instead of after the t when I do this), plus 1/6 e to the 3t, plus 1/6 e to the 4t, plus 1/6 e to the 5t, plus 1/6 e to the 6t.
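The mgf just written out is easy to turn into a function of t. A minimal sketch (the name `M` is just shorthand for M_X):

```python
import math

# Die mgf: M(t) = (1/6)(e^t + e^{2t} + e^{3t} + e^{4t} + e^{5t} + e^{6t})
def M(t):
    return sum(math.exp(x * t) for x in range(1, 7)) / 6

print(M(0))   # 1.0: every mgf equals 1 at t = 0, since E[e^{0*X}] = E[1] = 1
```

The value M(0) = 1 is a handy sanity check for any mgf, discrete or continuous.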
This is the moment generating function for the dice rolling example. Here is the amazing thing about moment generating functions: they generate moments. What's a moment? Well, the mean, the expected value of X, is called the first moment. The expected value of X squared, the one in the computational formula for the variance, is called the second moment. The expected value of X cubed is the third moment, the expected value of X to the fourth power is the fourth moment, the expected value of X to the fifth is the fifth moment, and so on. Why are those called moments? It's a physics thing, moment of inertia; it's maybe related to that, you can look it up, I don't know. You could ask why they're called moments of inertia in physics; does "moment" mean it's related to time? From my understanding, that doesn't seem to be the case. This entire expected value is called the kth moment of X, where k is a positive integer: one, two, three, four, five, six, etc. And there's a theorem that this is the kth derivative of the moment generating function evaluated at t equals zero. The book writes that like this: d over dt with k's, which stands for the kth derivative; take the kth derivative of the moment generating function M_X of t, then evaluate it at t equals zero. I'm putting in an extra set of parentheses that the book doesn't, but it's okay. You might also see people write this in an alternative notation that's a little quicker: M sub X with a k inside parentheses up there, and then a zero, is an alternative notation for the kth derivative evaluated at zero. But realize it is important to do the derivatives first and only plug in zero at the very end; don't plug in zero too early, that would be a calculus 1 kind of mistake. So what about our example? There's our moment generating function; what's its first derivative, M_X prime of t? I've got a bunch of exponentials, and with the
last five of them I'm going to have to use the chain rule. The derivative of the first one is itself; the derivative of the next one is itself times an extra factor of 2, giving me 2/6 (that's 1/3, but I'll write it as 2/6). The next one I'll write as 3/6, then 4/6, then 5/6, then 6/6, which is of course one. There you go: now plug in 0, and lo and behold we get one plus two plus three plus four plus five plus six, all over six; that's 21 over 6, which is 3.5, just like before. We did get the right answer, and this theorem works in general, no matter what your random variable is. How is this theorem proved? Well, to tell you the truth, the book doesn't really prove it; we will look at it after our break. It doesn't really prove it because it's doing a little hand-wavy stuff, what Dr. Wetzel would call verification instead of proof. Let's go ahead and look at what the book calls a proof. Here it is, as a theorem: let M_X of t be the moment generating function for the random variable X; then, yes, the kth derivative with respect to t of the moment generating function, evaluated at t equals zero, equals the kth moment of X. Realize that these things are numbers, for a given random variable and a given positive integer value of k: the expected value of X to the kth power, the kth moment, is a number, and yes, that derivative evaluated at zero is a number. How is this quote-unquote proved? I said it's kind of hand-wavy. To prove this theorem, let Z equal t times X and use the Maclaurin, or Taylor, series expansion for e to the tX, which you should remember from calc 2. Isn't it called a Taylor series? Taylor series centered at zero are sometimes called Maclaurin series too; if it's not centered at zero, then you call it a Taylor series for sure. Anyway, this is the Taylor series for e to the z centered at zero, I hope you remember that from calc 2, with those factorials there. Yes, replace the z's with t times x. That's the first hand-wavy thing: what, an infinite
series of random variables? Does that even make sense? Here's the second hand-wavy thing: take its expected value and assume you can pull the E through term by term, extending the linearity of the expected value operator to an infinite sum. That's another hand-wavy thing. This is not really a true proof in the full rigorous mathematical sense of the word; it's a verification, or a reason. You'd have to prove that this makes sense and can be done with infinite sums of random variables, and we don't want to go there; we've got other things to do, so we're just trusting it. Differentiating with respect to t, can you do that on both sides as well? I mean, the series has got to converge, and actually this series is an ordinary series from calc 2, because remember those E's are a bunch of numbers: this is an ordinary power series from calc 2, and it's going to have, if you remember from calc 2, a certain radius of convergence and interval of convergence. And there certainly are proofs in real analysis that can be used to show you can differentiate a power series term by term within its interval of convergence. Bad memories from calc 2?
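Setting the rigor worries aside for a second, the derivatives-at-zero theorem is easy to check numerically for the die. This sketch uses central-difference approximations; the step size h is an arbitrary choice of mine, not from the book.

```python
import math

# Die mgf again: M(t) = (1/6) * sum of e^{xt} for x = 1..6
def M(t):
    return sum(math.exp(x * t) for x in range(1, 7)) / 6

# Central differences approximate M'(0) and M''(0) without symbolic calculus
h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)           # approximately 3.5   = E[X]
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2   # approximately 91/6  = E[X^2]
print(m1, m2)
```

The first number lands on the mean 7/2 and the second on the second moment 91/6, matching the hand computation with the chain rule above.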
You didn't prove all that stuff in calc 2; you were just doing calculations, like finding the interval of convergence, where you had to use the ratio test and things like that. In real analysis, or at least more advanced real analysis than you may even get as an undergrad here, you'd want to prove that you can differentiate such a power series term by term within its interval of convergence. Anyway, the book is just assuming you can do so. Differentiate with respect to t: the derivative of one is zero; the derivative of t times E of X with respect to t is E of X, which is just a number; for the t squared over two factorial term, two factorial equals two, bring down the power of two, two divided by two is one, so that term becomes the number E of X squared, times t to the first. The derivative of that is that, the derivative of that is that. Plug in t equals zero and you'll get the first moment, the mean. Magic. Next page: take the second derivative, get this, plug in t equals zero, get that, the second moment. It can be continued, if you wanted to be extra picky, maybe by induction, to show that for an arbitrary positive integer value of k this is going to be true: you get the kth moment. So the first moment is the mean, but the second moment is not the variance. That brings up a word of warning: it's very common for people to accidentally think the variance equals the second moment, like those two things are equal. No, no, no. Not equal. The variance is the difference, not just the first thing; the variance doesn't equal the second moment, but you can use the second moment to find the variance. By the way, if you happen to know the variance and the mean, you could also use this equation to solve for the second moment: add the square of the mean to both sides. That's something else you can do with this equation. I don't think
I'd ever ask you a question like that, but I have seen it come up in actuarial exam problems. Okay, so what's a moment generating function good for? It's good for generating moments. It's also good for something else, maybe even more important, you might say, in chapter seven (chapter seven might be my favorite chapter): identifying random variables. Usually you've got a random variable and you can find the moment generating function; sometimes you've got a moment generating function and you want to figure out the random variable. That actually has super important applications in statistics; I'm thinking of a theorem in chapter seven, perhaps my favorite theorem of the course, I don't know, there's a bunch to pick from. This one right here: let X and Y be random variables with equal moment generating functions for all t in some open interval around zero; then X and Y have the same distribution, meaning they're essentially the same random variable. One subtlety here: just because two random variables X and Y have the same distribution doesn't mean that when you actually observe their values in the random experiments they will be the same values. Having the same distribution means they have the same probability mass function in the discrete case, the same probability density function in the continuous case; they have the same CDF, the same mean, the same variance, the same standard deviation, etc. Which reminds me, I forgot to find the standard deviation for the dice example. What was the mean? The mean was 3.5, and the variance was 2.916 repeating, so the standard deviation is the square root of the variance, about 1.7. I've been hinting that the standard deviation is easier to interpret. Here's one way: almost all the probability of a distribution is going to be within two standard deviations of the mean. It's called a rule of thumb; it's not a very precise statement, just an approximately true
statement, but it's good enough for most applications: almost all the probability, or with statistics, almost all the data, is within two standard deviations of the mean. So if the mean is 3.5 and the standard deviation is 1.7, two standard deviations would be 3.4. Almost all the probability for our example will be within a distance of 3.4 of 3.5: there's 3.5; 3.4 units to the right is almost to 7; 3.4 units to the left is almost to zero. Yeah, 100% of the probability, the values that get positive probabilities in this case, are within two standard deviations of the mean. It's not even "almost" in this example: 100 percent of the values the random variable takes on with positive probability are within two standard deviations of the mean. That rule of thumb applies for any random variable, though the word "almost" is kind of vague.
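The two-standard-deviations rule of thumb just described can be checked directly for the die. A small sketch, not from the book:

```python
# Two-sigma rule of thumb, checked for the fair die
mu = 3.5
sigma = (35 / 12) ** 0.5                    # sqrt of the variance, about 1.7078
lo, hi = mu - 2 * sigma, mu + 2 * sigma     # roughly 0.08 to 6.92

inside = [x for x in range(1, 7) if lo <= x <= hi]
print(inside)   # [1, 2, 3, 4, 5, 6]: every outcome, so 100% of the probability
```

Here the interval from about 0.08 to about 6.92 captures all six outcomes, which is why this example gives 100% rather than merely "almost all".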
190615
https://phys.libretexts.org/Courses/Coalinga_College/Physical_Science_for_Educators_(CID%3A_PHYS_14)/05%3A_Density_Mole_and_Molarity
5: Density Mole and Molarity - Physics LibreTexts Skip to main content Table of Contents menu search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode school Campus Bookshelves menu_book Bookshelves perm_media Learning Objects login Login how_to_reg Request Instructor Account hub Instructor Commons Search Search this book Submit Search x Text Color Reset Bright Blues Gray Inverted Text Size Reset +- Margin Size Reset +- Font Type Enable Dyslexic Font - [x] Downloads expand_more Download Page (PDF) Download Full Book (PDF) Resources expand_more Periodic Table Physics Constants Scientific Calculator Reference expand_more Reference & Cite Tools expand_more Help expand_more Get Help Feedback Readability x selected template will load here Error This action is not available. chrome_reader_mode Enter Reader Mode Physical Science for Educators (CID: PHYS 140) Coalinga College { "5.01:Introduction_and_Chapter_Objectives" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.02:_Density" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.03:_Lab_2_Density_of_sweet_drinks" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.04:_Concentration_of_Solutions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.05:_Colligative_Properties" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.06:_Formula_Mass_and_the_Mole_Concept" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.07:_Determining_Empirical_and_Molecular_Formulas" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.08:_Mole_Calculations_in_Chemical_Reactions" : "property get Map 
MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.09:_Mole-Mass_and_Mass-Mass_Calculations" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.10:_Molarity" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.11:_Composition_of_Substances_and_Solutions(Exercises)" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.12:_Other_Units_for_Solution_Concentrations" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.13:_End_of_Chapter_Key_Terms" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } { "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_Elemental_Beginnings-_Foundations_of_Physics_and_Chemistry" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_Units_Measurement_Graphing_and_Calculation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_Atomic_Theory_and_Periodic_Table" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Phases_and_Classification_of_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Density_Mole_and_Molarity" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Physical_and_Chemical_Reactions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Solutions_Acids_and_Bases_pH" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Energy_Physics_and_Chemistry" : "property get 
Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Motion" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Forces" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Electricity" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Magnetism" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_Transverse_and_Longitudinal_Waves" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_Property_of_Sound_Doppler_Effect_and_Interferences" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "15:_Electromagnetic_Radiation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16:_Reflections_and_Refraction_of_Waves" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "17:_Nuclear_Physics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } { "Physical_Science_for_Educators_(CID:_PHYS_14)" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", Physical_Science_for_Educators_Volume_1 : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", Physical_Science_for_Educators_Volume_2 : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } Wed, 11 Sep 2024 00:14:58 GMT 5: Density Mole and Molarity 95648 95648 Heather Evans { } Anonymous Anonymous 2 false false [ "article:topic-guide", "kinematics", 
"position", "velocity", "license:ccbyncsa", "source-phys-29878", "program:oeri" ] [ "article:topic-guide", "kinematics", "position", "velocity", "license:ccbyncsa", "source-phys-29878", "program:oeri" ] Search site Search Search Go back to previous article Sign in Username Password Sign in Sign in Sign in Forgot password Contents 1. Home 2. Campus Bookshelves 3. Coalinga College 4. Physical Science for Educators (CID: PHYS 140) 5. 5: Density Mole and Molarity Expand/collapse global location Physical Science for Educators (CID: PHYS 140) Front Matter 1: Elemental Beginnings- Foundations of Physics and Chemistry 2: Units, Measurement, Graphing, and Calculation 3: Atomic Theory and Periodic Table 4: Phases and Classification of Matter 5: Density Mole and Molarity 6: Physical and Chemical Reactions 7: Solutions Acids and Bases pH 8: Energy Physics and Chemistry 9: Motion 10: Forces 11: Electricity 12: Magnetism 13: Transverse and Longitudinal Waves 14: Property of Sound, Doppler Effect and Interferences 15: Electromagnetic Radiation 16: Reflections and Refraction of Waves 17: Nuclear Physics Back Matter 5: Density Mole and Molarity Last updated Sep 11, 2024 Save as PDF 4.13: End of Chapter Key Terms 5.1: Introduction and Chapter Objectives Page ID 95648 ( \newcommand{\kernel}{\mathrm{null}\,}) Table of contents No headers 5.1: Introduction and Chapter Objectives 5.2: DensityDensity is a physical property that is defined as a substance’s mass divided by its volume. Density is usually a measured property of a substance, so its numerical value affects the significant figures in a calculation. Notice that density is defined in terms of two dissimilar units, mass and volume. That means that density overall has derived units, just like velocity. 5.3: Lab 2 Density of sweet drinks 5.4: Concentration of SolutionsSolution concentrations are typically expressed as molarities and can be prepared by dissolving a known mass of solute in a solvent or diluting a stock solution. 
The concentration of a substance is the quantity of solute present in a given quantity of solution. Concentrations are usually expressed in terms of molarity, defined as the number of moles of solute in 1 L of solution.

5.5: Colligative Properties

5.6: Formula Mass and the Mole Concept
The formula mass of a substance is the sum of the average atomic masses of each atom represented in the chemical formula and is expressed in atomic mass units. The formula mass of a covalent compound is also called the molecular mass. A convenient unit for expressing very large numbers of atoms or molecules is the mole. Experimental measurements have determined the number of entities composing 1 mole of substance to be 6.022 × 10²³, a quantity called Avogadro's number.

5.7: Determining Empirical and Molecular Formulas
The chemical identity of a substance is defined by the types and relative numbers of atoms composing its fundamental entities (molecules in the case of covalent compounds, ions in the case of ionic compounds). A compound's percent composition provides the mass percentage of each element in the compound; it is often experimentally determined and used to derive the compound's empirical formula.

5.8: Mole Calculations in Chemical Reactions
Balanced chemical reactions are balanced in terms of moles, which gives mole equivalences that allow stoichiometry calculations to be performed.

5.9: Mole-Mass and Mass-Mass Calculations
Mole quantities of one substance can be related to mass quantities of another using a balanced chemical equation, as can mass quantities to mass quantities. In all cases, quantities of a substance must be converted to moles before the balanced chemical equation can be used to convert to moles of another substance.

5.10: Molarity
Solutions are homogeneous mixtures.
Many solutions contain one component, called the solvent, in which other components, called solutes, are dissolved. An aqueous solution is one in which the solvent is water. The concentration of a solution is a measure of the relative amount of solute in a given amount of solution. Concentrations may be measured using various units; one very useful unit is molarity, defined as the number of moles of solute per liter of solution.

5.11: Composition of Substances and Solutions (Exercises)
Homework exercises to accompany the Textmap created for "Chemistry" by OpenStax.

5.12: Other Units for Solution Concentrations
In addition to molarity, a number of other solution concentration units are used in various applications. Percentage concentrations based on the solution components' masses, volumes, or both are useful for expressing relatively high concentrations, whereas lower concentrations are conveniently expressed using ppm or ppb units. These units are popular in environmental, medical, and other fields where mole-based units such as molarity are less commonly used.

5.13: End of Chapter Key Terms

5: Density Mole and Molarity is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by LibreTexts.
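The two definitions that recur through this chapter, density (mass divided by volume) and molarity (moles of solute per liter of solution), amount to one-line calculations. A minimal sketch in Python; the NaCl example and all the numbers in it are illustrative, not taken from the chapter:

```python
def density(mass_g: float, volume_mL: float) -> float:
    """Density: a substance's mass divided by its volume (g/mL)."""
    return mass_g / volume_mL

def molarity(moles_solute: float, volume_L: float) -> float:
    """Molarity: moles of solute per liter of solution (mol/L)."""
    return moles_solute / volume_L

# Illustrative example (not from the chapter): dissolving 58.44 g of NaCl
# (one mole, taking the molar mass as 58.44 g/mol) in 0.500 L of solution.
moles_nacl = 58.44 / 58.44          # mass / molar mass = 1.000 mol
print(molarity(moles_nacl, 0.500))  # 2.0 mol/L
print(density(21.5, 2.0))           # 10.75 g/mL
```

Because both quantities are ratios of dissimilar units, the units of the result (g/mL, mol/L) are derived from the units of the inputs, exactly as the density summary above notes.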
https://www.aafp.org/pubs/afp/issues/2022/0100/p33.html
SARINA SCHRAGER, MD, MS, LASHIKA YOGENDRAN, MD, MS, CRYSTAL M. MARQUEZ, MD, AND ELIZABETH A. SADOWSKI, MD Am Fam Physician. 2022;105(1):33-38 Patient information: See related handout on adenomyosis, written by the authors of this article. Author disclosure: No relevant financial relationships. Adenomyosis is a clinical condition where endometrial glands are found in the myometrium of the uterus. One in three patients with adenomyosis is asymptomatic, but the rest may present with heavy menstrual bleeding, pelvic pain, or infertility. Heavy menstrual bleeding is the most common symptom. Adenomyosis is distinct from endometriosis (the presence of endometrial glands outside of the uterus), but the two conditions often occur simultaneously. Risk factors for developing adenomyosis include increasing age, parity, and history of uterine procedures. Most patients are diagnosed from 40 to 50 years of age, but younger patients with infertility are increasingly being diagnosed with adenomyosis as imaging modalities improve. Diagnosis of adenomyosis begins with clinical suspicion and is confirmed with transvaginal ultrasonography and pelvic magnetic resonance imaging. Treatment of adenomyosis typically starts with hormonal menstrual suppression. Levonorgestrel-releasing intrauterine systems have shown some effectiveness. Patients with adenomyosis may ultimately have a hysterectomy if symptoms are not controlled with medical therapy. Adenomyosis is a benign uterine disorder in which endometrial glands are found in the myometrium of the uterus. Adenomyosis is distinct from endometriosis, which is the presence of endometrial glands outside of the uterus. Adenomyosis is a poorly understood condition. 
| Clinical recommendation | Evidence rating | Comments |
| --- | --- | --- |
| Use transvaginal ultrasonography or pelvic magnetic resonance imaging to noninvasively diagnose adenomyosis.6,9 | C | Diagnostic accuracy studies |
| Patients with adenomyosis not desiring pregnancy can use a levonorgestrel-releasing intrauterine system (Mirena) to help reduce heavy menstrual bleeding and pain.6 | B | Results from a limited cohort study showing decreased blood loss and pain |
| Hysterectomy is definitive treatment of adenomyosis for women who are past childbearing age if other therapies are not effective.6 | C | Consensus opinion |

Two theories prevail regarding the pathogenesis of adenomyosis.1,2 The first theory suggests that with injury of the endometrium, the basalis endometrium invaginates into the myometrium through an altered or interrupted junctional zone, creating adenomyotic lesions. This tissue injury and repair theory may help explain why a previous uterine procedure (e.g., cesarean delivery, dilation and curettage) increases the risk of subsequent adenomyosis. The second theory suggests that adenomyotic lesions arise from metaplasia of embryonic pluripotent Müllerian remnants. Adenomyosis can be classified as diffuse (involving a large area of endometrium) or focal.1

Epidemiology

The diagnosis of adenomyosis was previously confirmed only in post-hysterectomy cases, and it was thought to predominate in patients older than 40 years.
Improved imaging makes it clear that younger patients also have adenomyosis.1 In one study of 985 symptomatic patients seen in a gynecology clinic using specific ultrasound diagnostic criteria, adenomyosis had a 20.9% prevalence in the study population (including pre- and postmenopausal, nulligravid, and multiparous patients), with a range of 10% to 35% in histology reports after hysterectomy.3 A population-based study of 650,000 patients estimated the overall incidence of adenomyosis at 1%, or 29 per 10,000 person-years, over a 10-year period based on International Classification of Diseases, 10th ed. (ICD-10) coding.4 Of those with adenomyosis, 90.8% had associated clinical symptoms based on chart review.4 The incidence of adenomyosis in the study was highest among patients 41 to 45 years of age.4 The overall prevalence of adenomyosis in 2015 was 0.8%, with a high of 1.5% among patients 41 to 45 years of age.4 The large variations in estimates of the incidence and prevalence of adenomyosis may reflect the lack of standard histologic criteria for diagnosis, as well as the recent advent of laparoscopic surgery, which produces morcellated specimens that alter the arrangement of tissue and make diagnosis more difficult.5 Table 1 lists populations in which adenomyosis is commonly diagnosed.1

Table 1. Populations in which adenomyosis is commonly diagnosed
Most common: multiparous; older than 40 years; prior cesarean delivery; prior uterine surgery
Increasingly diagnosed: infertile; younger than 40 years; with dysmenorrhea, abnormal uterine bleeding, or both

CASE

A 42-year-old patient presents with a six-month history of chronic pelvic pain. The patient describes the pain as aching and deep in the pelvis. The patient has regular menses but notes that it has been getting progressively heavier with more dysmenorrhea for the past few years. The patient has a history of infertility, and the uterus feels mildly enlarged and tender on bimanual examination. Pelvic ultrasonography suggests diffuse adenomyosis.
Clinical Presentation

Up to one-third of patients with adenomyosis can be asymptomatic.5 Symptoms typically arise between 40 and 50 years of age.5 There is no pathognomonic sign or symptom of adenomyosis. Common symptoms include abnormal uterine bleeding (heavy menstrual bleeding and irregular menses) and dysmenorrhea (Table 2). Less common symptoms include dyspareunia and chronic pelvic pain.6,7

Table 2. Signs and symptoms of adenomyosis
Symptoms: abnormal uterine bleeding; chronic pelvic pain; dysmenorrhea; dyspareunia
Signs: infertility; uterine enlargement and boggy consistency; uterine tenderness

Heavy menstrual bleeding occurs in 40% to 60% of patients with adenomyosis.6 Heavy bleeding is likely caused by the increased surface area of the endometrium, the subsequent increase in total volume of the endometrium and endometrial glands, or the increased vascularization of the endometrial lining, and is directly correlated with the extent of myometrial invasion.5,6 Adenomyosis is part of the American College of Obstetricians and Gynecologists PALM-COEIN acronym for the evaluation of abnormal uterine bleeding in reproductive-aged patients (Table 3).8

Table 3. PALM-COEIN causes of abnormal uterine bleeding
Structural (PALM): polyps (endometrial or cervical); adenomyosis; leiomyoma; malignancy and hyperplasia
Nonstructural (COEIN): coagulopathy; ovulatory dysfunction; endometrial; iatrogenic; not yet classified

Dysmenorrhea occurs in 15% to 30% of patients with adenomyosis.6 Dysmenorrhea is thought to be related to the increased number of oxytocin receptors in the endometrium and increased prostaglandin production, both of which contribute to uterine contractions.5,9 Adenomyosis in a patient with fibroids can cause more severe dysmenorrhea, dyspareunia, or chronic pelvic pain. If a patient with fibroids is having significant dysmenorrhea, dyspareunia, or chronic pelvic pain, an evaluation for adenomyosis may be warranted.
Common signs of adenomyosis include uterine enlargement, uterine tenderness with boggy consistency, and infertility. Coexisting conditions such as leiomyomas could contribute to an enlarged uterus. As many as 50% of patients with adenomyosis have leiomyomas; 11% have endometriosis.6 In patients undergoing hysterectomy for fibroids, adenomyosis was reported in 15% to 57% of specimens.10 In 25% to 70% of patients with endometriosis, adenomyosis was also reported; in patients with deep endometriosis, the prevalence of adenomyosis was found to be 49% to 66%.10 Impaired fertility in adenomyosis is thought to be attributable to abnormal thickening of the junctional zone of the myometrium, abnormal uterine peristalsis, and altered sperm transport.10 Because patients are typically diagnosed with adenomyosis after childbearing age, the incidence of adenomyosis in patients with infertility is unclear. But, a meta-analysis found that patients with adenomyosis had a 28% decreased probability of clinical pregnancy using in-vitro fertilization/intracytoplasmic sperm injection vs. patients without adenomyosis.11 Diagnosis Diagnosis of adenomyosis is based on clinical suspicion as well as imaging. With improvements in transvaginal ultrasonography and pelvic magnetic resonance imaging (MRI), more cases of adenomyosis are being diagnosed.6,9 Similar to endometriosis, adenomyosis does not have any classic physical examination findings or laboratory studies that identify it as a likely diagnosis. On transvaginal ultrasonography, diffuse adenomyosis has a variable appearance. 
An attempt by gynecologists to define the various features resulted in the Morphological Uterus Sonographic Assessment criteria.12 These criteria have low reproducibility between imagers and unknown predictive value in the diagnosis of adenomyosis.13,14 Adenomyosis is considered when the uterus is globular in configuration and multiple areas of shadowing are visible, sometimes described as fan shaped, with difficulty differentiating the outer myometrium from the junctional zone and cystic changes in the junctional zone and myometrium12,13 (Figure 1). Additional features that may be seen include an irregular, or interrupted, junctional zone with hyperechoic islands. When multiple signs are present, the diagnosis is more certain; however, adenomyosis can be focal rather than diffuse. When focal adenomyosis occurs, only a few focal areas of shadowing might be visible, which can be confused with fibroids. In these cases, MRI can differentiate adenomyosis from fibroids with greater certainty (Figure 2). Both a systematic review and meta-analysis demonstrated that transvaginal ultrasonography has a sensitivity for diagnosing adenomyosis of 83.8% and a specificity of 63.9%.15 MRI has a sensitivity of 77% with a specificity of 89%, making it a better confirmatory diagnostic test than transvaginal ultrasonography.16,17 However, because of cost, transvaginal ultrasonography is the first-line imaging technique used in most patients with suspected adenomyosis. In cases where the diagnosis is in question, hysteroscopy may be a helpful adjunct tool. Several changes in the endometrium such as hypervascularization, endometrial defects, and submucosal hemorrhagic cysts can suggest a diagnosis of adenomyosis.10 Treatment Treatment of adenomyosis focuses on symptom control. There are no medical treatments approved by the U.S. 
Food and Drug Administration for adenomyosis; however, many medical therapies that are successful in treating endometriosis are also used off label for adenomyosis18 (Table 4).1,6,18,19 Definitive therapy for adenomyosis is hysterectomy if other therapies are not effective. In a large population-based study, more than 80% of women with adenomyosis had a hysterectomy, and almost 40% used chronic pain medications.4 Uterine artery embolization is a potential minimally invasive option to treat focal adenomyosis.20 Figure 3 shows an approach to diagnosis and treatment of adenomyosis.18–22

Table 4. Medical therapies used off label for adenomyosis

| Intervention | Mechanism of action | Adverse effects |
| --- | --- | --- |
| Nonsteroidal anti-inflammatory drugs | Decreased pain and abnormal bleeding from decreasing circulating prostaglandin | Potential for renal toxicity and gastrointestinal irritation |
| Combined oral contraceptives | Atrophy of endometrial tissue causing decreased menstrual bleeding | Contraception, irregular bleeding, thromboembolism |
| Progestins | Atrophy of endometrium | Irregular bleeding, weight gain |
| Progestin-releasing intrauterine device | Atrophy of endometrium from local progestin | Irregular bleeding, contraception |
| Gonadotropin-releasing hormone analogues | Hypoestrogenism, endometrial atrophy | Vasomotor symptoms, risk of osteoporosis |
| Gonadotropin-releasing hormone antagonists | Endometrial atrophy, mild hypoestrogenism | Vasomotor symptoms |

CASE RESOLUTION

The use of MRI vs. treating symptoms empirically with a progestin-releasing intrauterine device is discussed with your patient. The patient elects to try the device and returns in six months reporting that pain is 50% better; however, one year later the patient reports worsening pain. After gynecology consultation and MRI confirmation of the diagnosis, the patient elects to have a hysterectomy.

Data Sources: A PubMed search was completed in Clinical Queries using the key terms adenomyosis, diagnosis, and treatment.
The search included meta-analyses, randomized controlled trials, clinical trials, and reviews. The Agency for Healthcare Research and Quality evidence reports, Clinical Evidence, Cochrane database, Essential Evidence Plus, Institute for Clinical Systems Improvement, and DynaMed were also searched. Search dates: November 20, 2020, and November 7, 2021.

Copyright © 2022 by the American Academy of Family Physicians.
https://www.youtube.com/watch?v=U5byPezGLZ4
General Solution for sine (Eddie Woo)
Posted: 11 Jun 2013

Transcript: All right, so with tan we went forward 180° (or pi radians) every time, right? With cosine we did something a little bit different. We've got just enough time to do sine, no time for a question this time. Let's just do the same angle. Now, I pick the same angle here just so we can get on with seeing what kind of pattern we have, and this is the trickiest of the lot; that's why I left it till last. So it's the same angle, the same related angle: pi on 6 is still going to be my first solution. Now, normally we're going from zero to two pi, so what's your other solution within that domain? We're in radians, so look: you can see again there's symmetry coming into play. Here's pi radians. It's not pi on 6 forward this time; when I go from pi, I go backwards just that little amount, the same little smidgen, and that's what gets me to five pi on six. There are my first two solutions. Now let me try and see what's happening as I keep going. I went zero pi plus my angle, then one pi minus my angle. What happens to my next pair of solutions? Well, it's 2 pi, that's where my intercept is, plus that little angle: 2 pi plus pi on 6.
So again that's going to be 13 pi on 6, right? So: 2 pi, forward. Now to get my next solution I've got to go to 3 pi and backwards. 3 pi is 18 pi on 6, so if I go back that's going to be 17 pi on 6. Now let me try and list out these solutions and put them into a form which will show the pattern a little more clearly. I've got this pi on 6: I've got zero lots of pi, and then I go forward pi on 6. There's solution one. For my next one I get one lot of pi, but then I have to go backwards, and that's what leads me to five pi on six. Then the next time I get two lots and I have to go forward, which got me to 13 pi on 6, and then I need three lots and then I go backwards. Now at this point I think we're starting to see a pattern, but the tricky thing is: how do you write it? You could write it as two different parts. You could say, look, for all the even ones you go forward, so it's a bit like cosine without the minus, and for all the odd ones you go backwards. Now file that away for later (we are going to do that later on, and it will actually become useful), but what we're trying to get at this stage is just one line that nicely says the whole set. So how do I get this whole set, when sometimes it's got plus and sometimes it's got minus? Now, some of you have met this before; some of you have seen this answer. I'm trying to get across to you why the answer is actually a really ingenious way to go about it, rather than just some weird awkward thing to remember. As I go from one to the next, I want to go plus, minus, plus, minus, plus, minus. So I'm going to introduce something which changes sign every time I go up by one. It's called (I'll just write it next to it) a switching factor. Here's the way it works: if I want to have a plus here, I'm going to add on minus 1 to the power of whatever lot of pi I've got, so in this case it's zero. Now you're like, huh, what does that mean? Well,
just stay with me. Anything to the power of zero is just one, right? So this so far checks out: it's going to be nought pi plus one lot of pi on six. Thumbs up. Now what happens when I take this switching factor into the next line? I've got one lot of pi, so I'm going to add on one lot of this switching thing. You see that minus one to the power? It goes back and forth as I go up each number. So what am I going to get here? One lot of minus one, which is what I wanted. What happens in the next line? Two pi plus two lots of this thing, and you're starting to see what's going on, right? Because of this double negative I go back to plus, and this is 13 pi on 6. Yeah, it is so smart. Except, you know, for a lot of people the first time they learn it, it's just: here are some formulas, go memorize them, go off and, you know, be fruitful and multiply, something like that, and you have no sense that this is actually an ingenious way to succinctly express something that's really a mess. So what's the final solution at the end? I'm going to use this green way of writing it, and it's going to be: x is equal to... now what is this at the front? I've got my counter, that's my n, lots of pi. Then I add this switching thing, which sometimes takes me forward and sometimes takes me backward, whatever I need, times my little related angle: pi on six here, just like in our tan example it was pi on four. So in a nutshell, those are the general solutions. Now, I promised I would give you a nice neat table which shows you how they all fit together, in general terms rather than in these specific examples. So draw yourself up a table. Actually, I'd better just draw it for you so we don't get the wrong number: something like this, one, two, three, and one down the bottom. This table I'm going to give you is all of the formulas. This is the part that people have
to memorize. For all of this I'm going to give it to you in radians and in degrees as well, because sometimes the question will be in degrees, so we've got to use both. So let's do it: tan was first, then we did cos, and then we did sine, which is kind of funny because I usually do it in the reverse order. Can you see now why I did tan first? Because it's the easiest kind of general solution. We'll do radians first and then degrees. So what was it we said? You get your little related angle, which I'm going to call theta; it's pi on four here, pi on six there. So I'm going to say it's n pi plus theta, where theta is that first solution, the smallest, acute solution that you get. Therefore if we convert it to degrees, what are you going to have? It'll be 180n degrees plus theta degrees. That's not so hard, right? Okay, move on to cosine. How did cosine work? Every even lot of pi (two pi, four pi, six pi, eight pi), you take that and you go backwards and forwards. So I'm going to get two n pi plus or minus whatever that related angle is, pi on six in our case. Convert that over... yes? "How do you know when to use plus or minus?" If I'm asked for the general solution, you use both; that's the general solution, you're putting both. This is a faster way of writing pi on 6, 13 pi on 6, and so on, rather than listing them all out: if I put in different values for n, I'll get every single one out. So it's both. And remember, two pi is actually 360.
So this is 360n degrees plus or minus theta degrees. Last one, for sine: the weird but really clever one. We start with n pi, then we have to add this switching thing, which takes you backwards and forwards, and then you multiply that by the little angle, pi on six in this case. How does it work over here in degrees? It's just going to be 180n (sorry, there's an n there), the same way. And don't forget, for all of these, because we've introduced this n, you must say what n is. Later on you'll see I'll give you reasons why sometimes you introduce a second letter, but you need to actually say what they are, and the fact that they're integers is crucially important.
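The three formulas from the table (tan: x = nπ + θ; cos: x = 2nπ ± θ; sin: x = nπ + (−1)ⁿθ, with n an integer) are easy to check numerically. A quick sketch, mine rather than the video's, using Python's math module with the related angles from the examples:

```python
from math import sin, cos, tan, pi, isclose

theta = pi / 6       # related angle from the sine and cosine examples
theta_tan = pi / 4   # related angle from the tan example

for n in range(-5, 6):          # n must be an integer, as stressed above
    # tan:  x = n*pi + theta        (forward by pi each time)
    assert isclose(tan(n * pi + theta_tan), tan(theta_tan))
    # cos:  x = 2*n*pi +/- theta    (every even lot of pi, both directions)
    assert isclose(cos(2 * n * pi + theta), cos(theta))
    assert isclose(cos(2 * n * pi - theta), cos(theta))
    # sin:  x = n*pi + (-1)**n * theta   (the "switching factor")
    assert isclose(sin(n * pi + (-1) ** n * theta), sin(theta))

print("all three general solutions check out")
```

Note how the `(-1) ** n` term does exactly the job described in the lesson: for even n it adds the related angle, for odd n it subtracts it.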
https://www.savemyexams.com/a-level/further-maths/edexcel/17/core-pure/revision-notes/matrices/transformations-using-matrices/invariant-points-and-lines/
Invariant Points & Lines (Edexcel A Level Further Maths): Revision Note
Exam code: 9FM0
Author: Jamie Wood

Invariant points

What is an invariant point?
When applying transformations to a shape or collection of points, there may be some points that stay in their original position; these are known as invariant points.

How can I find invariant points?
If the point with position vector x is invariant under a transformation with matrix M, then Mx = x. This creates a system of simultaneous equations which can be solved to find the invariant point. The origin (0, 0) is always invariant under a linear transformation.

Examiner Tips and Tricks
Where the question allows, use your calculator to help solve the simultaneous equations. Test your found invariant point by multiplying it by the transformation matrix, and making sure you still end up with the same point (invariant).

Worked Example
Find any invariant points under the given transformation.

A line of invariant points

What is a line of invariant points?
If every point on a line is mapped to itself under a particular transformation, then it is a line of invariant points. For example, a line of reflection is a line of invariant points.

How can I find a line of invariant points?
Use the same strategy as for finding a single invariant point: if the point with position vector x is invariant under the transformation M, then Mx = x. This creates a system of simultaneous equations which can be solved to find the invariant point(s). If there is a line of invariant points, then rather than solving to a single solution (a point), the two equations will simplify to the same equation. This means there are infinitely many solutions, and therefore infinitely many invariant points; a line contains infinitely many points. Your solution will be the equation of the invariant line, e.g.
y = 3x.

Examiner Tips and Tricks
It may not always be obvious that the two equations reduce to the same thing (they could be an awkward multiple of each other). Use your calculator's simultaneous equation solver; it will tell you that there are infinitely many solutions.

Worked Example
Find the equation of the line of invariant points under the given transformation.

Invariant lines

What's the difference between a line of invariant points and an invariant line?
If every point on a line is mapped to itself under a particular transformation, then it is a line of invariant points: every single point on the line must stay in the same place. With an invariant line, however, every point on the line must simply map to another point on the same line. We are only concerned with the overall line, not the individual points.

How do I find an invariant line?
We can use a similar strategy to finding invariant points, with two slight changes. Use y = mx + c to write the original position vector as (x, mx + c). Write the transformed position vector as (x′, mx′ + c) using the same idea. Notice that the values of m and c are the same, but the x and y coordinates differ, because it is a different point on the same line. For an invariant line under the transformation M we can write M(x, mx + c)ᵀ = (x′, mx′ + c)ᵀ. This creates a system of simultaneous equations which can be solved to find the invariant line(s). The first equation can be substituted into the second to give an equation in terms of the variable x and the constants m and c. This equation can then be solved to find the values of m and c by equating the coefficients of x, and then equating the constant terms. There may be multiple solutions for m and c if there are multiple invariant lines.

Worked Example
Find the equation of any invariant lines under the given transformation.
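The matrices in the worked examples were images that have not survived here, but the "equate coefficients of x, then equate constants" method can still be sketched. Below is a short Python illustration of my own (not from the revision note) that finds non-vertical invariant lines of an assumed 2×2 matrix; the intercept is called k in the code to avoid clashing with the matrix entry names, and a reflection in y = x is used as the test matrix.

```python
def invariant_lines(M, tol=1e-9):
    """Find non-vertical invariant lines y = m*x + k of a 2x2 matrix M.

    Writing M = [[p, q], [r, s]] and requiring that the image of every
    point (x, m*x + k) lies back on y = m*x + k gives, by equating the
    coefficients of x:     q*m**2 + (p - s)*m - r = 0
    and, from the constant terms, k is unrestricted only if s - m*q = 1
    (otherwise k must be 0). Vertical lines x = const are not handled.
    """
    (p, q), (r, s) = M
    if abs(q) > tol:  # genuine quadratic in the gradient m
        disc = (p - s) ** 2 + 4 * q * r
        if disc < -tol:
            return []  # no real gradient: no non-vertical invariant lines
        roots = sorted({(-(p - s) + sign * disc ** 0.5) / (2 * q)
                        for sign in (1, -1)})
    else:             # linear case: (p - s)*m = r
        roots = [r / (p - s)] if abs(p - s) > tol else []
    return [(m, "any k" if abs(s - m * q - 1) < tol else "k = 0")
            for m in roots]

# Reflection in the line y = x: the mirror itself (m = 1, k = 0) is a line
# of invariant points, and every line y = -x + k maps onto itself.
print(invariant_lines([[0, 1], [1, 0]]))
```

For the reflection matrix this reports gradients m = 1 (only k = 0, the mirror line, which is in fact a whole line of invariant points) and m = −1 with k free (every perpendicular to the mirror is an invariant line), matching the distinction drawn in the text above.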
https://www.purplemath.com/modules/factzero.htm
Factorials and Their Trailing Zeroes | Purplemath

You're probably reading this page because you've been assigned a seemingly impossible exercise, something along the lines of "Here's a really big number; consider its much (much!) bigger factorial and then figure out how many zeroes are at the end of the multiplied-out factorial."

This "trailing zeroes in a factorial" exercise is pretty easy to answer once you think about it the right way. I couldn't find anything much useful on the Internet, so here's a little lesson on how to handle it.

What are the trailing zeroes of a factorial?

If we take the factorial of any number 5 or larger, then there will be at least one zero at the end of the number. Why? Because 5! = 1×2×3×4×5; in particular, 5! = (2×5)×(1×3×4), and (2×5) = 10. The factorial of any larger number will have more copies of 2 and 5 (as factors of larger values, like 6 and 15), so there will be even more factors of 10 in these factorials. And every factor of 10 adds a zero to the end of the factorial expansion.

Does every factorial have at least one zero at the end?

Not every factorial has at least one zero at the end of it.
However, as soon as you get to 5!, which contains a factor of 2 and a factor of 5, you'll get a trailing zero. Larger factorials will have more zeroes. So the only factorials which do not have any trailing zeroes are 0!, 1!, 2!, 3!, and 4!.

This explains why there are trailing zeroes (the factorials contain factors of 10 in them), but it doesn't tell us how to find the actual number of trailing zeroes. However, since the zeroes come from factors of 10, we can methodically figure out the actual number. In the example below, I'll go through the reasoning, which will then create a method for quickly answering this question.

Find the number of trailing zeroes in the expansion of 23!

If I plug this into my calculator, it'll give me a result formatted in scientific notation, because the answer is too big for the calculator to display in its entirety. In practical terms, the calculator will show me the beginning of the number; unfortunately, I care only about the end of the number (namely, the "trailing zeroes" part). So the calculator won't help. I'll try expanding the factorial. (By the way, yes, "zeroes" is a proper plural of "zero".)

1×2×3×4×5×6×7×8×9×10×11×12×13×14×15×16×17×18×19×20×21×22×23

I know that a number gets a zero at the end of it if the number has 10 as a factor. For instance, 10 is a factor of 50, 120, and 1234567890; but 10 is only once a factor of each of these numbers, which is why each number has only one trailing zero. So I need to find out how many times 10 is a factor in the expansion of 23!.

I note that 5×2 = 10, so I need to account for all the products of 5 and 2 that exist in a given factorial's expansion. Looking at the factors in the above expansion, there are many more numbers that are multiples of 2 (namely, 2, 4, 6, 8, 10, 12, 14, ...) than are multiples of 5 (namely, 5, 10, 15, ...). No matter how many times 5 is a factor of a given expansion, I know that 2 will be a factor many times more often.
If I take all the numbers in the expansion that have 5 as a factor, I'll have way more than enough even numbers to pair with the factor-5 numbers to get factors of 10, and another trailing zero on my factorial. So to find the number of times 10 is a factor, all I really need to worry about is how many times 5 is a factor in all of the numbers between 1 and 23. I can ignore the factors of 2. This is a very helpful simplification.

So, looking at this exercise, how many multiples of 5 are between 1 and 23? There are 5, 10, 15, and 20, for four multiples of 5. Paired with 2's from the even factors, this makes for four factors of 10, so:

23! has four trailing zeroes

In fact, if I were to go to the trouble of multiplying out this factorial, I would be able to confirm that 23! = 25,852,016,738,884,976,640,000 does indeed have four trailing zeroes. I would also confirm that I really don't want to have to multiply things out; a logical method is going to be much better (that is, much easier) than applying brute force.

Find the number of trailing zeroes in 101!

Okay, I'll start by finding out how many multiples of 5 are to be found within the whole-number factors from 1 to 101. There's 5, 10, 15, 20, 25, ... Oh, heck; let's do this the short way: 100 is the closest multiple of 5 below 101, and 100 ÷ 5 = 20, so there are twenty multiples of 5 between 1 and 101.

But wait: 25 is equal to 5×5, so each multiple of 25 has an extra factor of 5 that I need to account for. How many multiples of 25 are between 1 and 101? Since 100 ÷ 25 = 4, there are four multiples of 25 between 1 and 101. These will give me four more copies of 10, and thus four more trailing zeroes at the end of the factorial. Adding these, I get 20 + 4 = 24 trailing zeroes in 101!

This reasoning, of finding the number of multiples of 5^1 = 5, plus the number of multiples of 5^2 = 25, etc., extends to working with even larger factorials.
Find the number of trailing zeroes in the expansion of 1000!

Okay, there are 1000 ÷ 5 = 200 multiples of 5 between 1 and 1000. The next power of 5, namely 5^2 = 25, has 1000 ÷ 25 = 40 multiples between 1 and 1000. The next power of 5, namely 5^3 = 125, will also occur in the expansion, since 125 < 1000. Doing the division, I find that there are 1000 ÷ 125 = 8 multiples of 125 between 1 and 1000. The next power of 5, namely 5^4 = 625, also fits in the expansion, and occurs 1000 ÷ 625 = 1.6 times. Um, okay; decimal or fractional portions of a multiple don't make much sense in context, so I'll truncate: 625 occurs 1 time between 1 and 1000. I care only about the one full multiple of 625; I don't care about the 0.6 of a multiple, so I can safely ignore it.

In total, I have 200 + 40 + 8 + 1 = 249 copies of the factor 5 in the expansion, and thus:

1000! has 249 trailing zeroes

The example above highlights the general method for answering this question, no matter what factorial they give you.

What are the steps for finding a factorial's trailing zeroes?

1. Take the number that you've been given the factorial of.
2. Divide by 5; if you get a decimal, truncate to a whole number.
3. Divide by 5^2 = 25; if you get a decimal, truncate to a whole number.
4. Divide by 5^3 = 125; if you get a decimal, truncate to a whole number.
5. Continue with ever-higher powers of 5, until your division results in a number less than 1. Once the division is less than 1, stop.
6. Sum all the whole numbers that you got in your divisions. This is the number of trailing zeroes.

What do these steps look like, in application? What is an example of finding the number of trailing zeroes in a factorial?

How many trailing zeroes would be found in 4617!, upon expansion?
I'll apply the procedure from above:

5^1: 4617 ÷ 5 = 923.4, so I get 923 factors of 5
5^2: 4617 ÷ 25 = 184.68, so I get 184 additional factors of 5
5^3: 4617 ÷ 125 = 36.936, so I get 36 additional factors of 5
5^4: 4617 ÷ 625 = 7.3872, so I get 7 additional factors of 5
5^5: 4617 ÷ 3125 = 1.47744, so I get 1 more factor of 5
5^6: 4617 ÷ 15625 = 0.295488, which is less than 1, so I stop here.

Then this factorial has 923 + 184 + 36 + 7 + 1 = 1151 factors of 5, so:

4617! has 1151 trailing zeroes.

By the way, you can get the same result, if you keep track as you go, by just dividing repeatedly in your calculator by 5's:

4617 ÷ 5 = 923.4 (write down 923)
923.4 ÷ 5 = 184.68 (write down 184)
184.68 ÷ 5 = 36.936 (write down 36)
36.936 ÷ 5 = 7.3872 (write down 7)
7.3872 ÷ 5 = 1.47744 (write down 1)
1.47744 ÷ 5 < 1 (so don't write anything down)

At which point, you're done doing divisions. Turn to your scratch paper where you've written down the whole numbers (namely, 923, 184, 36, 7, and 1), and add them up to get 1151, as before.

Can software like Excel find the trailing zeroes for me?

In general, software like Excel won't help with this sort of computation, any more than your calculator could. Software customarily stores only fifteen or so digits of accuracy, which is why, after a number gets sufficiently large, the display switches automatically to scientific notation. Since the software is only storing the first few leading digits, the remaining trailing digits have to be filled in with zeroes. If you attempt the first expansion above, 23!, in Excel, you'll get something with way more trailing zeroes than is actually correct. In other words, the computer will give you the wrong answer. So learn the concepts; don't try to cheat with software.
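The whole division procedure takes only a few lines of code, and because it never computes the factorial itself, it has none of the precision problems described above. Here is a minimal Python sketch (the function name trailing_zeroes is my own, not from the article):

```python
# Count the trailing zeroes of n! by summing n // 5, n // 25, n // 125, ...
# Integer division truncates, matching the "truncate to a whole number" step.
def trailing_zeroes(n: int) -> int:
    count = 0
    power = 5
    while power <= n:
        count += n // power  # multiples of this power of 5 between 1 and n
        power *= 5
    return count

# The article's worked examples:
print(trailing_zeroes(23))    # 4
print(trailing_zeroes(101))   # 24
print(trailing_zeroes(1000))  # 249
print(trailing_zeroes(4617))  # 1151
```

The loop stops as soon as the power of 5 exceeds n, which is exactly the "once the division is less than 1, stop" rule.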
© 2024 Purplemath, Inc. All rights reserved.
190620
https://www.dpmms.cam.ac.uk/~wtg10/FTA.html
How to discover a proof of the fundamental theorem of arithmetic.

The usual proof.

Here is a brief sketch of the proof of the fundamental theorem of arithmetic that is most commonly presented in textbooks. First one introduces Euclid's algorithm, and shows that it leads to the following statement: for any two integers x and y there exist integers h and k such that hx+ky=(x,y), where (x,y) is the highest common factor of x and y.

Next, one deduces from this the result that if p is a prime and p divides ab then p divides a or p divides b. (Proof: if p does not divide a then (p,a)=1, so by the previous result we can find h and k such that hp+ka=1. Then hpb+kab=b. Since p divides ab and p obviously divides hpb, we deduce that p divides b.)

Then, one deduces from this, by an easy inductive argument, that if p divides a1...ak then p divides ai for some i.

Lastly, one takes a supposed minimal counterexample to the theorem. So let p1...pr=q1...qs, where the pi and qj are all primes and not the same ones up to a reordering. By minimality, no pi is equal to any qj (or we could divide through and get a smaller example). But p1 divides the product of the pi and hence the product of the qj. By Step 3, p1 divides some qj, which is nonsense as qj is a prime not equal to p1.

How such an argument might have been discovered.

If you did not know how to prove unique factorization, then what would possess you to define Euclid's algorithm, think of the clever trick to show Step 2 above, and put it all together? It seems that Euclid himself did something like that. (Although he did not write down a complete proof of unique factorization, he did get as far as Step 2, and must have known how to deduce from it the whole theorem.)
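The first ingredient of the usual proof, running Euclid's algorithm so as to produce h and k with hx+ky=(x,y), can be sketched in a few lines of Python. This is my own illustration, not code from the essay (the function name extended_euclid is mine):

```python
# Extended Euclid: given integers x, y >= 0, return (g, h, k) with
# h*x + k*y == g, where g is the highest common factor (x, y).
def extended_euclid(x: int, y: int):
    if y == 0:
        return (x, 1, 0)               # x = 1*x + 0*y
    g, h, k = extended_euclid(y, x % y)
    # g == h*y + k*(x % y), and x % y == x - (x // y)*y,
    # so g == k*x + (h - (x // y)*k)*y.
    return (g, k, h - (x // y) * k)

g, h, k = extended_euclid(240, 46)
print(g, h * 240 + k * 46)  # 2 2
```

The recursive step simply rearranges the Bezout identity for the smaller pair, which is why the returned coefficients always satisfy hx+ky=(x,y).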
Did he stumble on his ideas by accident, messing about with highest common factors and suddenly noticing what he could do with his results, did he start with the problem of unique factorization and develop his algorithm to deal with it, or was he simply an utter genius who could invent arguments of several steps all in one go? I am no historian, and do not even know whether the answer to this question is known. However, I hope to show that he could have been led naturally to his famous algorithm by first thinking about unique factorization. (I myself believe that he was led to it this way, but as I say this does not matter.)

In the spirit of Polya, I shall try to make as explicit as possible the rules of thumb that I assume are part of a standard mathematical armoury. I shall probably be more explicit than Euclid himself would have been, but this is not a problem as I am not trying to be historical, and in any case it is quite possible to apply mathematical rules of thumb without ever being aware of them - just as it is possible to speak grammatically without formulating the rules of grammar. I shall give far more detail than any human reader is likely to need, because I am also interested in how one might teach computers to do mathematics. One guiding principle is so useful and well known that I shall state it before I even start.

Principle 1. For questions to do with the positive integers, induction is a good idea. Very often, a good first line of an argument is, "Let us consider a minimal counterexample".

We can now get to work. Guided by Principle 1 above, we do indeed consider a minimal counterexample. What does this mean? Since it is a counterexample, it must be a pair of distinct prime factorizations of the same number. Since it is minimal, it should not be possible to find a smaller example. We must therefore consider an expression p1...pr=q1...qs, where the pi and qj are prime numbers, and they are not the same sequence up to a reordering.
Now it is clear that we cannot just stare at this expression and feel certain that it must be false. After all, why shouldn't the two sides of the expression be equal if, say, the primes involved are very large? We have to find some reason for them not being equal, and we don't have much to go on.

Noticing that no pi is equal to any qj.

When you don't have much to go on, the following embarrassingly obvious principle is often useful.

Principle 2. Write down what you do have to go on and see what you can deduce from it.

Is there anything we haven't used in this example? There is, because we have not yet used the minimality of the counterexample. How does one ever deduce anything from minimality? This question has an answer, which I shall formulate as another principle.

Principle 3. If you want to exploit the minimality of an X, then think of methods for finding a smaller X. Given that a smaller X does not exist, any method you think of must fail. The fact that it fails has consequences that may be useful.

How might we pass from p1...pr=q1...qs to a smaller example? Well, both sides represent some positive integer n, so a more general question is how we can pass from a positive integer n to a smaller one. There are various methods, such as subtracting 1, or subtracting other numbers. Notice, however, that there is not a lot we can say about n-1. Let us be guided by Principle 2, and ask ourselves what we know about n. We could also use the following equally obvious principle.

Principle 4. If you are trying to find an example of an X, then write down what makes an X an X.

In this case we are trying to find an integer smaller than n that can be factorized in two different ways. Now notice that n is not a specific integer like 1001, but is rather a hypothetical integer, and all we know about it is that it equals both p1...pr and q1...qs, where these are themselves hypothetical sequences of primes. This leads to another principle.

Principle 5.
If you are going to use a hypothetical object X to construct an object Y, then Y will itself be hypothetical and all you can use to construct Y is the hypotheses about X.

Putting together Principles 4 and 5 in our particular example, we see that we are trying to construct from n=p1...pr=q1...qs a smaller integer which can also be written as a product of primes in two different ways, and that all we have to help us is the two ways of writing n.

Principle 6. Relax the constraints. For example, if you are asked to do two things at once, see if you can do one of them and then build up to the second.

We are trying to find an integer smaller than n which is a product of primes in two different ways. Can we at least use n to find a smaller integer which is a product of primes in one way? Notice that it is important here to use n in a genuine way. It is not good enough to say "what about the number 2?" So for this we apply Principle 5, and finally notice (of course actually we could have noticed this long ago, but I am trying to minimize the human skills I assume) that p2...pr is an integer smaller than n, constructed using the properties of n, with a prime factorization.

We now return to the harder problem - constructing a smaller integer with two prime factorizations. It is clear that using the property n=p1...pr will not be enough, as every integer has at least one factorization. It is therefore very natural to consider a second integer, constructed using the property n=q1...qs, namely the integer q2...qs. We have got somewhere, since we now have an attempted construction of a smaller example. Indeed, we will have a smaller example if p2...pr=q2...qs. However, now (applying Principle 3) we remember that there is no smaller example. It follows that our attempt did not succeed. From this we deduce that p1 does not equal q1.

Let us review the above argument, guided by the following principle.

Principle 7.
Having reached a mathematical conclusion, examine it carefully to see which hypotheses were essential and which were incidental.

In this case, one sees readily that it did not matter that we considered p1 and q1. Indeed, since these are hypothetical primes about which we have said nothing, it is clear that there is no distinction between any of the pi or qj. Consequently, we can extend our above reasoning to achieve the following conclusion: no pi is equal to any qj.

[It is interesting to think about how a computer would have to be set up in order to apply the reasoning of the above paragraph. Perhaps it would regard {1,2,...,r} and {1,2,...,s} as indexing sets, taking care to note that these indexing sets are sets and nothing more. Or perhaps it would consider pi and qj in the first place rather than p1 and q1.]

Formulating the lemma p|ab => p|a or p|b.

We have made some definite progress, since we now know that no pi is equal to any qj. However, at this point we hit a brick wall: there still appears to be absolutely no reason for the two products to be distinct, or any obvious way to find a smaller example. In a situation like this, it is often a good idea to fall back on the following principle, which I divide into two subprinciples.

Principle 8.
a. Try to prove something more specific and thus easier.
b. Try to prove the simplest non-trivial special case you can find.

What does a special case mean for our example? It surely does not mean choosing a particular pair of products of primes and proving that they are distinct. After all, that is just a matter of numerical checking and it will tell us nothing. (Perhaps it is not obvious in advance that it will tell us nothing, since there might be interesting patterns in the digits of the numbers arising in the calculations. However, if we searched for such patterns we would not find them.) Let us see where we stand and try to describe it precisely.
If we are faced with the expression p1...pr=q1...qs and know that no pi is equal to any qj, then we feel there is nothing much to say because the statement is too general. If on the other hand we substitute values for r, s and all the pi and qj, then we can say something but we learn nothing because the statement is too specific. A natural next step is to try something in between. This had better be stated as a principle.

Principle 9. If two attempted arguments fail for opposite reasons, then try to find an argument in between.

What could that mean? It is not difficult to say in this case. We had problems if we specified none of the primes, and we also had problems if we specified all of them. So let us try specifying some of them. The simplest way of doing so is to specify just one. So let us choose a value for p1. What value should we try? Principle 8b seems to tell us that we should try p1=2.

A simpler conjecture.

Doing this, we can instantly deduce that none of the qj is equal to 2. We now seem to have a genuinely new problem to think about, simpler than the original one. We must show that it is not possible for 2p2...pr to equal q1...qs if none of the qj is equal to 2. Why might this be? Well, a minimally competent human mathematician can see the answer immediately: the left hand side is even and the right hand side is odd.

How might a computer discover this? It could proceed as follows. First, it would reflect that it must use the information that p1=2, since that is all that separates us from the earlier, hopeless looking situation. Next, it would reflect that it must use the information that no qj is equal to 2, since if some qj did equal 2 (which as we know contradicts the minimality) then one could divide through and lose the information that one of the pi was equal to 2. It could then apply the following principle.

Principle 10. Try to use as little information as possible.
In other words, see how you get on with as few hypotheses as you can, and introduce extra ones only if they are necessary.

We have already picked out two hypotheses that must be used. Using just those leads to the following conjecture: if 2 is a factor of m and n is a product of primes not equal to 2, then m does not equal n. Again, we human beings find this immediately obvious, because primes not equal to 2 are odd. But what could cause a computer to notice this? There are several possibilities, one of which is to appeal to Principle 1 again: consider a minimal counterexample. This will be an example of the form m=q1...qs, where m is even and each qj is a prime not equal to 2. The minimality tells us (by similar reasoning to our earlier minimality argument) that q1...qs-1 does not give rise to a counterexample. That is, it is not equal to an even integer. Let us write a=q1...qs-1. Then a is not divisible by 2, but aqs is. What does this tell us? It is time for another principle.

Principle 11. Don't forget what it is you are trying to prove.

In this case, I mean not the conjecture recently stated, but unique factorization. If unique factorization is true and 2 divides q1...qs, then 2 must equal one of the qj. In our case, the only way it can happen is if it equals qs. Can we prove this? We are asking the following question: if a is odd, qs is prime and aqs is even, does it follow that qs=2?

Principle 12. Try the contrapositive.

All right, if qs is a prime not equal to 2 and a is odd, does it follow that aqs is odd? Well, what does it mean to say that a is odd? The definition is that it is not divisible by 2. However, another, more positive definition is that a can be written 2b+1 for some integer b. [This would be a very low-level thing for a computer to notice, but I don't immediately see how it would do so.] Let us substitute this in to the expression aqs, to obtain (2b+1)qs=2bqs+qs.
It is now an easy exercise [I leave it to the reader to think about how the computer would do it] to notice that this is odd if and only if qs itself is odd. However, we know it is even, so we know that qs is even, and hence equal to 2 as required.

Back to formulating the main lemma.

We now review the above argument. We see that what we ended up needing to prove was that if aqs was even and a was odd, then qs was equal to 2. We did this by showing that qs was even, and deducing (from the fact that it was a prime) that it was 2. Thus, the statement that led to our success was the following: if a and b are odd, then ab is odd. Having considered a special case, we now do the obvious thing.

Principle 13. Once a special case is proved, try to generalize it.

We have had some success starting with the assumption that p1=2. We could now see if we can do something similar with the assumption that p1=3. We would find that we could: indeed, just as with 2, the statement we would end up needing to prove would be that if neither a nor b is divisible by 3, then ab is not divisible by 3. Inspired by the proof of this for 2, we would write a=3x+1 or 3x+2 and b=3y+1 or 3y+2, check four cases (only three if we are feeling clever) and discover that the statement is true. We could then go on and prove similar statements for 5, 7 and so on.

We would then find ourselves faced with the following problem. For any specific prime p, we can prove, by a brute force argument, that if p|ab then p|a or p|b. However, our argument does not generalize. Still, we have at least formulated the lemma that we want to prove, and we know that it implies the whole theorem.

Proving the lemma p|ab => p|a or p|b.

To a mathematician who had not seen a proof of this statement, it would be a formidable problem to find one. Getting to the stage of formulating the lemma is definitely the easier part of proving the fundamental theorem of arithmetic.
Once the lemma is formulated, we again find ourselves in the situation with which professional mathematicians are all too familiar: there seems to be absolutely no reason for the statement to be true in general, and yet one believes that it is. This kind of despair can be very productive, because it often leads to Principle 2. If there is not much to go on, then in a sense one's moves are forced, and that can make finding a proof (if it exists) easier rather than harder.

In this case, in a desperate attempt to strengthen our hypotheses, let us apply Principle 1 again. We would like to consider a minimal counterexample. Principle 3 then tells us to start with a prime p dividing a product ab while dividing neither of the factors, and to attempt to build out of it a smaller example. In this case, it is not immediately obvious how to find a smaller example, so let us consider (guided by Principle 6) a more general problem: how can one construct any other example from a given one? That is, if p divides ab without dividing either a or b, how can one find any other example of a prime q dividing cd without dividing c or d?

Let us try to do the simplest thing and just vary one of the numbers p, a or b. A moment's thought makes it clear that varying p is unlikely to be easy, so let us try to vary a. We must convert a into a new number a' in such a way that p divides a'b and does not divide a'. Let us try adding something to a. Given that we know nothing about a except that it is not a multiple of p, if we want to preserve this property, then we ought to add a multiple of p. As an experiment, let us try a'=a+p. We find that a'b=(a+p)b=ab+pb, which is a multiple of p, since ab is. We can obviously repeat this process, which shows that adding any multiple of p to a gives an integer a' such that p|a'b but p divides neither a' nor b. Having had some success with addition, we could try multiplication. What happens if we replace a by ha?
Well, we know that p divides hab, since it divides ab, but perhaps it divides ha. Of course, if p does not divide h, then we suspect that p will not divide ha either, but that is what we are trying to prove. But hang on! Our counterexample was supposed to be minimal, so as long as h is smaller than b we are all right. So we now have two methods of producing new examples: multiply by a number smaller than b, and add or subtract a multiple of p. The question now is: can we use these two processes to construct a smaller example?

Stumbling on Euclid's algorithm.

Just to make things absolutely clear, here is the problem we are now considering: given a prime number p and a positive integer a, can we construct a smaller positive integer a' from a by a combination of adding multiples of p and multiplying by positive integers less than b? The argument that allowed us to multiply no longer works if we have already changed a, so we should first multiply by some h between 1 and b-1 and then add tp for some t. Thus, we can reformulate the question more precisely: does there exist an integer combination a'=ha+tp which is positive and smaller than a, with h positive and smaller than b? Of course, t will have to be negative.

The most obvious thing we could do is try subtracting p from a. If a is greater than p then this works. Hence, on the assumption that we have a minimal counterexample, we know that a is less than p. To make any more progress, we must use h. We are a little bit worried about the condition that h is less than b, so we would like to keep it as small as possible. On the other hand, we obviously need t to be non-zero, which forces ha to be greater than p. So a natural choice for h is the smallest integer with this property. We are thus led to the following question: let p be a prime, let 0 < a < p and let h be the smallest integer such that p < ha. (It is not possible for p to equal ha since p is a prime and a clearly cannot equal 1 if p does not divide b.)
Is ha-p smaller than a? Remembering Principle 12, we consider the contrapositive: if ha-p is greater than or equal to a, does it follow that h is not the smallest integer such that p < ha? Clearly the answer is that h is not the smallest if and only if p < (h-1)a. But if ha-p is greater than or equal to a, then, rearranging, (h-1)a is greater than or equal to p and hence greater than p (again because p is prime and a is not 1).

Have we suddenly finished the proof? Yes, provided that the h we chose is definitely less than b. Now we are assuming that p|ab, so we know that ab=sp for some positive integer s. Since p is prime, we know that s cannot equal 1. It follows that ab is at least as big as 2p. On the other hand, since a < p, it is clear that ha is not as big as 2p (either by the argument that ha-p < a < p, or simply because the first time you go above p you certainly don't jump as far as 2p). Therefore, h < b as desired, and the proof is complete.

Just a reminder of why the proof is complete: we started with a minimal counterexample and have used it to construct a smaller counterexample. This contradiction proves the lemma, and with it the theorem. I should perhaps remark that the algorithm implicit in the above proof is not quite the same as Euclid's algorithm, since we write p=ha-a' rather than p=qa+a'. Moreover, we pass from the pair (a,p) to the pair (a',p) rather than the pair (a',a). These differences make the algorithm less efficient for finding highest common factors, but they introduce ideas that lead easily to the usual version of the algorithm.

Converting the above thoughts into the usual proof.

As every mathematician knows, the order in which one thinks of a proof is not the order in which one writes it. These web pages are attempts to write proofs in their untidy, thinking order rather than their neat, logical order. This makes them more long-winded but perhaps easier to understand and memorize.
Nevertheless, the process of reorganizing one's incoherent thoughts into a mathematics paper or textbook is an important one. Examining the above argument, one is naturally led to isolate Euclid's algorithm, as I hope to show. The trouble with arguments that start, "Consider a minimal counterexample", is that they often hide an interesting inductive process, almost pretending that it doesn't exist. A curious computer would have been told the following principle.

Principle 14. If you have proved a result by deriving a contradiction from the existence of a minimal counterexample, try rewriting the proof by starting with any example, repeatedly applying the method you used to construct a smaller example and seeing where you end up.

In our proof, we started with a triple (p,a,b) and built a new triple (p,a',b), with 0 < a' < a and with a' of the form ha+tp. What happens if we repeat this process? The first thing to observe is that p and b are never altered, so let us forget about them for now. Where will the process of replacing a end? Well, the only extra facts that we used from time to time were that a was not equal to 1 and was not a multiple of p. It follows that if a is not a multiple of p then the process will end when we have created a new a that does equal 1.

This new a is created from the original one by a succession of operations of the following form: multiply by some h and subtract some multiple of p. Doing two such operations results in doing a third: k(ha+tp)+up=(kh)a+(kt+u)p. Hence, we have managed to express 1 as an integer combination Ha+Tp for some integers H and T. This is exactly the consequence of Euclid's algorithm used in the usual proof.

Now recall why it was that a should not equal 1. It was because if p divides ab and a=1, then p certainly divides b. But the whole point of the integer combination Ha+Tp was that if p divides ab without dividing either a or b, then it also divides (Ha+Tp)b without dividing Ha+Tp or b.
Now let us assume merely that p divides ab without dividing a. Then we can find H and T such that Ha+Tp=1, and we know that we will now find that p divides b if we continue the argument. Let us do so: we know that Ha+Tp=1 and p divides (Ha+Tp)b. Thus p divides b.

This is almost the argument in its standard form, but in order to prove that p divides (Ha+Tp)b we have used the fact that Ha+Tp was built in several steps from a. It is natural to ask whether we can prove it more directly, and as soon as we have asked that, we see that we can, since p divides ab and p divides itself. We have recovered the trick used in step 2 of the usual proof.

We haven't quite formulated Euclid's algorithm, since we assumed that p was prime, but it is a short step from here to asking the question, "What would happen if we did the same process starting with an arbitrary pair of integers?" Given that we have now proved the fundamental theorem of arithmetic, we can quickly discover the answer, and then, if we feel like it, we can prove this answer by generalizing the proof for p prime and not using the fundamental theorem of arithmetic after all.
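The recovered step-2 trick is easy to check numerically. The sketch below is my own illustration (variable names mine): it takes a prime p that divides ab but not a, finds H and T with Ha+Tp=1, and confirms that b = H(ab) + T(pb) is a sum of two visible multiples of p, so p divides b.

```python
# Bezout coefficients via extended Euclid, the same identity as in the text:
# H*a + T*p equals the highest common factor of a and p.
def bezout(x: int, y: int):
    if y == 0:
        return (x, 1, 0)
    g, h, k = bezout(y, x % y)
    return (g, k, h - (x // y) * k)

p, a, b = 7, 10, 21               # p divides a*b = 210 but p does not divide a
g, H, T = bezout(a, p)
assert g == 1                     # p is prime and does not divide a
# b == H*(a*b) + T*(p*b): p divides a*b and p divides p*b,
# so p divides their combination, which is b itself.
assert H * (a * b) + T * (p * b) == b
print(b % p)  # 0
```

The algebra is exactly the hpb+kab=b computation from the usual proof, just evaluated on concrete numbers.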
190621
https://en.wikipedia.org/wiki/Discrete_Chebyshev_polynomials
Discrete Chebyshev polynomials - Wikipedia

From Wikipedia, the free encyclopedia

Not to be confused with Chebyshev polynomials.
In mathematics, discrete Chebyshev polynomials, or Gram polynomials, are a type of discrete orthogonal polynomials used in approximation theory, introduced by Pafnuty Chebyshev and rediscovered by Gram. They were later found to be applicable to various algebraic properties of spin angular momentum.

Elementary Definition

The discrete Chebyshev polynomial t_n^N(x) is a polynomial of degree n in x, for n = 0, 1, 2, …, N − 1, constructed such that two polynomials of unequal degree are orthogonal with respect to the weight function

    w(x) = Σ_{r=0}^{N−1} δ(x − r),

with δ(·) being the Dirac delta function. That is,

    ∫_{−∞}^{∞} t_n^N(x) t_m^N(x) w(x) dx = 0   if n ≠ m.

The integral on the left is actually a sum because of the delta function, and we have

    Σ_{r=0}^{N−1} t_n^N(r) t_m^N(r) = 0   if n ≠ m.

Thus, even though t_n^N(x) is a polynomial in x, only its values at the discrete set of points x = 0, 1, 2, …, N − 1 are of any significance. Nevertheless, because these polynomials can be defined in terms of orthogonality with respect to a nonnegative weight function, the entire theory of orthogonal polynomials is applicable.
In particular, the polynomials are complete in the sense that

    Σ_{n=0}^{N−1} t_n^N(r) t_n^N(s) = 0   if r ≠ s.

Chebyshev chose the normalization so that

    Σ_{r=0}^{N−1} t_n^N(r) t_n^N(r) = (N / (2n + 1)) ∏_{k=1}^{n} (N² − k²).

This fixes the polynomials completely, along with the sign convention t_n^N(N − 1) > 0. If the independent variable is linearly scaled and shifted so that the end points assume the values −1 and 1, then as N → ∞, t_n^N(·) → P_n(·) times a constant, where P_n is the Legendre polynomial.

Advanced Definition

Let f be a smooth function defined on the closed interval [−1, 1], whose values are known explicitly only at points x_k := −1 + (2k − 1)/m, where k and m are integers and 1 ≤ k ≤ m. The task is to approximate f as a polynomial of degree n < m. Consider a positive semi-definite bilinear form

    (g, h)_d := (1/m) Σ_{k=1}^{m} g(x_k) h(x_k),

where g and h are continuous on [−1, 1], and let

    ‖g‖_d := (g, g)_d^{1/2}

be a discrete semi-norm. Let φ_k be a family of polynomials orthogonal to each other,

    (φ_k, φ_i)_d = 0   whenever i ≠ k.

Assume all the polynomials φ_k have a positive leading coefficient and that they are normalized in such a way that ‖φ_k‖_d = 1. The φ_k are called discrete Chebyshev (or Gram) polynomials.
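The discrete orthogonality above can be checked numerically. A small sketch (function name is mine) that builds unnormalized discrete Chebyshev polynomials by Gram–Schmidt on the monomials 1, x, x², … under the inner product Σ_{r=0}^{N−1} f(r) g(r):

```python
import numpy as np

def discrete_chebyshev(N, nmax):
    """Gram-Schmidt on 1, x, ..., x^nmax over the points r = 0, ..., N-1.

    Returns an (nmax+1) x N array whose n-th row holds the values of an
    (unnormalized) degree-n discrete Chebyshev polynomial at those points.
    """
    r = np.arange(N, dtype=float)
    polys = []
    for n in range(nmax + 1):
        v = r ** n                       # start from the monomial x^n
        for q in polys:                  # subtract projections onto lower degrees
            v = v - (v @ q) / (q @ q) * q
        polys.append(v)
    return np.array(polys)

T = discrete_chebyshev(N=6, nmax=3)
# Rows of unequal degree are orthogonal under the discrete inner product.
gram = T @ T.T
assert np.allclose(gram - np.diag(np.diag(gram)), 0, atol=1e-6)
```

This is a sketch, not the classical normalization: Chebyshev's convention additionally fixes the scale of each row so the squared norm equals N/(2n+1) · ∏(N² − k²).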
Connection with Spin Algebra

The discrete Chebyshev polynomials have surprising connections to various algebraic properties of spin: spin transition probabilities, the probabilities for observations of the spin in Bohm's spin-s version of the Einstein–Podolsky–Rosen experiment, and Wigner functions for various spin states. Specifically, the polynomials turn out to be the eigenvectors of the absolute square of the rotation matrix (the Wigner D-matrix). The associated eigenvalue is the Legendre polynomial P_ℓ(cos θ), where θ is the rotation angle. In other words, if

    d_{mm′} = ⟨j, m| e^{−iθ J_y} |j, m′⟩,

where |j, m⟩ are the usual angular momentum or spin eigenstates, and

    F_{mm′}(θ) = |d_{mm′}(θ)|²,

then

    Σ_{m′=−j}^{j} F_{mm′}(θ) f_ℓ^j(m′) = P_ℓ(cos θ) f_ℓ^j(m).

The eigenvectors f_ℓ^j(m) are scaled and shifted versions of the Chebyshev polynomials. They are shifted so as to have support on the points m = −j, −j + 1, …, j instead of r = 0, 1, …, N − 1 for t_n^N(r), with N corresponding to 2j + 1 and n corresponding to ℓ. In addition, the f_ℓ^j(m) can be scaled so as to obey other normalization conditions. For example, one could demand that they satisfy

    (1/(2j + 1)) Σ_{m=−j}^{j} f_ℓ^j(m) f_{ℓ′}^j(m) = δ_{ℓℓ′},

along with f_ℓ^j(j) > 0.

References

1. Chebyshev, P. (1864), "Sur l'interpolation", Zapiski Akademii Nauk 4; Oeuvres, Vol. 1, pp. 539–560.
2. Gram, J. P. (1883), "Ueber die Entwickelung reeller Functionen in Reihen mittelst der Methode der kleinsten Quadrate", Journal für die reine und angewandte Mathematik 1883 (94): 41–73. doi:10.1515/crll.1883.94.41.
3. Barnard, R. W.; Dahlquist, G.; Pearce, K.; Reichel, L.; Richards, K. C. (1998), "Gram Polynomials and the Kummer Function", Journal of Approximation Theory 94: 128–143. doi:10.1006/jath.1998.3181.
4. Meckler, A. (1958), "Majorana formula", Physical Review 111 (6): 1447. doi:10.1103/PhysRev.111.1447.
5. Mermin, N. D.; Schwarz, G. M. (1982), "Joint distributions and local realism in the higher-spin Einstein–Podolsky–Rosen experiment", Foundations of Physics 12 (2): 101. doi:10.1007/BF00736844.
6. Garg, Anupam (2022), "The discrete Chebyshev–Meckler–Mermin–Schwarz polynomials and spin algebra", Journal of Mathematical Physics 63 (7): 072101. doi:10.1063/5.0094575.

This page was last edited on 26 May 2025, at 16:13 (UTC).
190622
https://www.britannica.com/science/ellipse
ellipse | mathematics | Encyclopaedia Britannica
Written by The Editors of Encyclopaedia Britannica

ellipse, a closed curve, the intersection of a right circular cone (see cone) and a plane that is not parallel to the base, the axis, or an element of the cone. It may be defined as the path of a point moving in a plane so that the ratio of its distances from a fixed point (the focus) and a fixed straight line (the directrix) is a constant less than one. Any such path has this same property with respect to a second fixed point and a second fixed line, and ellipses often are regarded as having two foci and two directrixes. The ratio of distances, called the eccentricity, is the discriminant (q.v.) of a general equation that represents all the conic sections (see conic section). Another definition of an ellipse is that it is the locus of points for which the sum of their distances from two fixed points (the foci) is constant. The smaller the distance between the foci, the smaller is the eccentricity and the more closely the ellipse resembles a circle.
A straight line drawn through the foci and extended to the curve in either direction is the major diameter (or major axis) of the ellipse. Perpendicular to the major axis through the centre, at the point on the major axis equidistant from the foci, is the minor axis. A line drawn through either focus parallel to the minor axis is a latus rectum (literally, "straight side"). The ellipse is symmetrical about both its axes. The curve when rotated about either axis forms the surface called the ellipsoid (q.v.) of revolution, or a spheroid.

The path of a heavenly body moving around another in a closed orbit in accordance with Newton's gravitational law is an ellipse (see Kepler's laws of planetary motion). In the solar system one focus of such a path about the Sun is the Sun itself. For an ellipse the centre of which is at the origin and the axes of which are coincident with the x and y axes, the equation is x²/a² + y²/b² = 1. The length of the major diameter is 2a; the length of the minor diameter is 2b. If c is taken as the distance from the origin to the focus, then c² = a² − b², and the foci of the curve may be located when the major and minor diameters are known. The problem of finding an exact expression for the perimeter of an ellipse led to the development of elliptic functions, an important topic in mathematics and physics.
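The relation c² = a² − b² and the constant-sum property are easy to verify numerically; a small sketch (the axis lengths and sample points are chosen arbitrarily):

```python
import math

a, b = 5.0, 3.0                       # semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)            # distance from centre to each focus
e = c / a                             # eccentricity, < 1 for an ellipse

f1, f2 = (-c, 0.0), (c, 0.0)          # the two foci, on the major axis

for t in (0.0, 0.7, 1.9, 3.1):        # points x = a cos t, y = b sin t on the curve
    x, y = a * math.cos(t), b * math.sin(t)
    d = math.dist((x, y), f1) + math.dist((x, y), f2)
    assert abs(d - 2 * a) < 1e-9      # sum of focal distances equals 2a

assert 0 < e < 1
```

Here c = 4 and e = 0.8; every sampled point yields the same focal-distance sum 2a = 10, matching the locus definition above.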
190623
https://www.math.ksu.edu/~dbski/writings/decision.pdf
BER and the Hamming Codes

1 MAP and ML Decision Rules

Throughout these notes, we shall stick entirely to binary linear block codes. Thus we shall be dealing with vector spaces over the binary field F. The following is deceptively simple.

Definition 1.1. By a length n, k bit (binary) linear block code we mean a k-dimensional subspace C ⊆ V, where V is an n-dimensional vector space over F. Sometimes we shall refer to C as an (n, k) linear block code.

The idea behind codes is that we wish to transmit k-bit messages across a noisy channel; to do this with some enhanced reliability, we build some measure of redundancy into the code. Thus the more n exceeds k, the greater is the degree of redundancy. Of course, this redundancy is at the expense of efficiency of the code, which is defined as the ratio

    Eff(C) = dim C / dim V = k/n.

Later on, we'll look more closely at the negative effect of too much redundancy on the performance of certain codes.

Definition 1.2. The binary symmetric channel (BSC) with crossover probability p takes a single binary input x (from the binary field F) and switches it to x + 1 with probability p. Throughout these notes, we shall assume that the BSC has crossover probability p < 1/2.

[Diagram: the BSC carries each input bit (0 or 1) to itself with probability 1 − p and to the opposite bit with probability p.]

A fruitful way to view the BSC is that it takes an input vector from the code (a "codeword"), say c ∈ C, and adds a random noise vector e ∈ V to c, producing the output vector ĉ = c + e. This random vector has a distribution dictated by the channel: if e0 = (ϵ1, ϵ2, . . . , ϵn) ∈ V, and if e0 has l components not equal to zero, then

    Prob(e = e0) = p^l (1 − p)^(n−l).

Now assume that V = F^n = {(a1, a2, . . . , an) | ai ∈ F}, and let C ⊆ V be a code. An important parameter associated with the code C is its weight (or minimal weight), wt(C). First of all, if v = (a1, a2, . . . , an) ∈ V, set wt(v) = |{i | ai ≠ 0}|.
Now set

    wt(C) = min_{0 ≠ c ∈ C} wt(c).

As a result, if our code C has minimal weight wt(C) = m, then we will be able to detect any erroneously received word that differs in fewer than m coordinates from some codeword in C. Put differently, if the codeword c is sent and received as ĉ = c + e, then when 0 < wt(e) ≤ m − 1 we will know that there is an error in the received word ĉ, at which point we might ask for a retransmission. From the above discussion, we see that for a code C of minimal weight m,

    Prob(undetected error) = Σ_{l=m}^{n} (n choose l) p^l (1 − p)^(n−l) = (n choose m) p^m + higher degree terms.

However, codes are useful not just for error detection; they can also be useful for error correction. For this to work, we need a mechanism for taking a received vector v ∈ V and deciding which codevector was actually sent. In other words, a decision rule is really just a mapping d : V → C. At this point, we introduce two commonly applied criteria for decision rules. First, however, it's reasonable to assume that we're trying to minimize the error in making our "decision" each time a vector is received, i.e., when we receive the vector v we want to minimize Prob(decision error | v is received). Therefore, our unconditional decision error probability is

    Prob(decision error) = Σ_{v ∈ V} Prob(decision error | v is received) Prob(v is received).

If d : V → C is our decision rule, then

    Prob(decision error | v is received) = 1 − Prob(d(v) is sent | v is received),

which gives us the error probability

    Prob(decision error) = 1 − Σ_{v ∈ V} Prob(d(v) is sent | v is received) Prob(v is received).

This error probability is a minimum precisely when each conditional probability Prob(d(v) is sent | v is received) is a maximum.
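The noise-vector distribution and the undetected-error sum above can be checked exhaustively for small n. A quick sketch (names are mine); it reads the displayed sum as the probability that the noise has weight at least m:

```python
import itertools
import math

def error_prob(e0, p):
    """Prob(e = e0) = p^l (1-p)^(n-l), where l = wt(e0)."""
    l, n = sum(e0), len(e0)
    return p**l * (1 - p)**(n - l)

p, n, m = 0.1, 7, 3        # e.g. a length-7 code of minimal weight 3
vectors = list(itertools.product((0, 1), repeat=n))

# The 2^n possible noise vectors exhaust all cases.
assert abs(sum(error_prob(e, p) for e in vectors) - 1.0) < 1e-12

# Prob(wt(e) >= m) matches the closed-form sum over l = m, ..., n.
lhs = sum(error_prob(e, p) for e in vectors if sum(e) >= m)
rhs = sum(math.comb(n, l) * p**l * (1 - p)**(n - l) for l in range(m, n + 1))
assert abs(lhs - rhs) < 1e-12
```

The leading term (n choose m) p^m dominates for small p, which is why the notes summarize the sum that way.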
Note, however, that this conditional probability is dependent on the input distribution, i.e., on the individual probabilities Prob(c is sent), c ∈ C:

    Prob(d(v) is sent | v is received)
      = Prob(v is received | d(v) is sent) Prob(d(v) is sent) / Prob(v is received)
      = Prob(v is received | d(v) is sent) Prob(d(v) is sent) / Σ_{c ∈ C} Prob(v is received | c is sent) Prob(c is sent),

where we have used Bayes' Theorem for inverting the conditional probability.

Definition 1.3 (The Maximum A Posteriori Rule). The decision rule d : V → C is called a maximum a posteriori rule, or MAP rule for short, if it maximizes each conditional probability Prob(d(v) is sent | v is received), v ∈ V.

Part of the difficulty in constructing MAP decision rules is that they are based on "reverse probabilities," whose calculations involve Bayes' Theorem. As such they are dependent on the input distributions. We consider a couple of simple, but telling, examples.

Example 1. Consider the simplest possible example of a code, viz., take C = F^1 = {0, 1} = V. Assume that the BSC crossover probability is p and that the input distribution is given by Prob(0) = ϵ, Prob(1) = 1 − ϵ. Therefore, we have that

    Prob(0 is sent | 0 is received)
      = Prob(0 is received | 0 is sent) Prob(0 is sent) / Prob(0 is received)
      = Prob(0 is received | 0 is sent) Prob(0 is sent) / Σ_{i=0}^{1} Prob(0 is received | i is sent) Prob(i is sent)
      = (1 − p)ϵ / ((1 − p)ϵ + p(1 − ϵ)).

Similarly, one computes

    Prob(1 is sent | 0 is received) = p(1 − ϵ) / ((1 − p)ϵ + p(1 − ϵ)),
    Prob(0 is sent | 1 is received) = pϵ / (pϵ + (1 − p)(1 − ϵ)),
    Prob(1 is sent | 1 is received) = (1 − p)(1 − ϵ) / (pϵ + (1 − p)(1 − ϵ)).

Therefore, if 0 is received, what does an MAP decision rule tell us to "decide" as to what was sent? From the above it is evident that if (1 − p)ϵ ≤ p(1 − ϵ), which is equivalent to saying that ϵ ≤ p, then we should decide that 1 was sent.
Note that this same condition implies (since p < 1/2) that pϵ ≤ (1 − p)(1 − ϵ), which means that if 1 is received, then we should also decide that 1 was sent. That is to say, if ϵ ≤ p, then we should always decide that 1 was sent! We leave the other cases to the reader to work out.

Example 2. This time consider the one-dimensional code C = {(0, 0, 0), (1, 1, 1)} ⊆ V = F^3 = {(a1, a2, a3) | ai ∈ F}. As above, assume a crossover probability of p, and assume an input distribution Prob(0, 0, 0) = ϵ, Prob(1, 1, 1) = 1 − ϵ. Assume that the vector (1, 0, 0) was received. What is the best decision for what was sent ((0, 0, 0) or (1, 1, 1)) according to an MAP decision rule? Again, we must use Bayes' Theorem to calculate the necessary conditional probabilities:

    Prob((0, 0, 0) was sent | (1, 0, 0) was received)
      = Prob((1, 0, 0) was received | (0, 0, 0) was sent) Prob((0, 0, 0) was sent) / Prob((1, 0, 0) was received)
      = p(1 − p)^2 ϵ / (p(1 − p)^2 ϵ + p^2 (1 − p)(1 − ϵ)),

whereas

    Prob((1, 1, 1) was sent | (1, 0, 0) was received)
      = Prob((1, 0, 0) was received | (1, 1, 1) was sent) Prob((1, 1, 1) was sent) / Prob((1, 0, 0) was received)
      = p^2 (1 − p)(1 − ϵ) / (p(1 − p)^2 ϵ + p^2 (1 − p)(1 − ϵ)).

Thus, an MAP decision rule would tell us to decide that (0, 0, 0) was sent if p(1 − p)^2 ϵ ≥ p^2 (1 − p)(1 − ϵ); since p, 1 − p, ϵ > 0, we see that this condition is equivalent to

    (1 − p)/p ≥ (1 − ϵ)/ϵ.

Again, the reader can complete this analysis.

The next decision rule is much easier to implement, as it involves only "forward probabilities" and is therefore independent of the input distribution. This is the so-called maximum likelihood decision rule, as follows:

Definition 1.4 (The Maximum Likelihood Rule). The decision rule d : V → C is called a maximum likelihood rule, or ML rule for short, if it maximizes each conditional probability Prob(v is received | d(v) is sent), v ∈ V.
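The posterior computation in Example 1 is easy to mechanize. A small sketch (function and variable names are mine, not the notes'): for ϵ ≤ p the MAP rule outputs 1 no matter what is received, while with a uniform prior it simply echoes the received bit:

```python
def map_decision(received, p, eps):
    """MAP decision for the trivial code C = {0, 1} of Example 1.

    p   = crossover probability of the BSC (assumed p < 1/2)
    eps = Prob(0 is sent); Prob(1 is sent) = 1 - eps
    Returns the codeword with the larger (unnormalized) posterior.
    """
    prior = {0: eps, 1: 1 - eps}
    # Forward probability Prob(received | sent) for the BSC.
    likelihood = lambda sent: (1 - p) if sent == received else p
    posterior = {c: likelihood(c) * prior[c] for c in (0, 1)}
    return max(posterior, key=posterior.get)

p = 0.2
# eps <= p: always decide 1, regardless of the received bit.
assert map_decision(0, p, eps=0.1) == 1 and map_decision(1, p, eps=0.1) == 1
# Uniform prior: the MAP rule reduces to the ML rule and echoes the input.
assert map_decision(0, p, eps=0.5) == 0 and map_decision(1, p, eps=0.5) == 1
```

Since the denominator Prob(received) is common to both posteriors, the sketch compares only the numerators Prob(received | c) Prob(c), exactly as in the inequalities above.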
Note that for each of the above two examples, an ML decision rule is unique and is given by d(0) = 0, d(1) = 1 in Example 1; and d(0, 0, 0) = d(1, 0, 0) = d(0, 1, 0) = d(0, 0, 1) = (0, 0, 0) and d(1, 1, 0) = d(1, 0, 1) = d(0, 1, 1) = d(1, 1, 1) = (1, 1, 1) in Example 2. Since only the forward probabilities of the channel are involved, we see that an ML decision rule doesn't depend on the input distribution. As the above examples indicate, the two decision rules can differ. However, if the input distribution is uniform, then an ML decision rule is an MAP rule (and conversely):

Lemma 1.1. Assume that the input distribution is uniform, i.e., that for each c ∈ C, Prob(c is sent) = 1/|C|. Then any ML decision rule d : V → C is also an MAP rule, and conversely.

Proof. Indeed, for any vector v ∈ V, we have

    Prob(d(v) is sent | v is received)
      = Prob(v is received | d(v) is sent) Prob(d(v) is sent) / Prob(v is received)
      = Prob(v is received | d(v) is sent) / (|C| Prob(v is received)).

Therefore, we see that for a fixed vector v ∈ V, Prob(d(v) is sent | v is received) is a maximum if and only if Prob(v is received | d(v) is sent) is a maximum.

The next result shows that decision rules based on minimal distance are maximum-likelihood decision rules (and conversely).

Lemma 1.2. Let C ⊆ V be a code. A decision rule d : V → C is a maximum-likelihood decision rule if and only if for each v ∈ V, wt(v + d(v)) is chosen to be a minimum.

Proof. This is obvious, as for any pair c ∈ C, v ∈ V, we have

    Prob(v is received | c is sent) = p^wt(v+c) (1 − p)^(n−wt(v+c)),

which (since p < 1/2) is a maximum if and only if wt(v + c) is a minimum.

Definition 1.5. Let C ⊆ V be a code, and assume that d : V → C is a decision rule. If d satisfies the partial homomorphism property

    d(0) = 0,    d(v + c) = d(v) + c,    c ∈ C, v ∈ V,

then d is called a standard array decision rule.

Note that a standard array decision rule is uniquely determined by the values d(v1), d(v2), . . . , d(vr), where v1, v2, . . .
, vr is a set of coset representatives for C in V. Furthermore, a standard array decision rule d : V → C satisfies d(c) = c, for all c ∈ C.

Lemma 1.3. Let C ⊆ V be a code and let v1 = 0, v2, . . . , vr be a set of coset representatives of minimal weight, i.e., for each i = 1, 2, . . . , r, we have

    wt(vi) = min_{c ∈ C} wt(vi + c).

Then the standard array decision rule given by d(vi + c) = c, i = 1, 2, . . . , r, c ∈ C, is an ML decision rule.

Proof. This is virtually obvious, since if c′ ∈ C is closer (in the sense of weight) to vi + c than is c, then wt(vi + c + c′) < wt(vi), contrary to vi having minimal weight among elements in vi + C.

We shall call a decision rule constructed as in Lemma 1.3 a maximum likelihood standard array decision rule; Lemma 1.3 guarantees that such a decision rule really is an ML rule. Note, however, that a given coset v + C might not have a unique element of minimal weight, in which case an ML standard array decision rule is also not unique. As a simple example, consider the 4-dimensional vector space V = F^4 and take the code C to be the 2-dimensional subspace generated by (1, 0, 1, 1), (0, 1, 0, 1). The four cosets of C in V are listed below:

    (0, 0, 0, 0)  (1, 0, 1, 1)  (0, 1, 0, 1)  (1, 1, 1, 0)
    (1, 0, 0, 0)  (0, 0, 1, 1)  (1, 1, 0, 1)  (0, 1, 1, 0)
    (0, 1, 0, 0)  (1, 1, 1, 1)  (0, 0, 0, 1)  (1, 0, 1, 0)
    (0, 0, 1, 0)  (1, 0, 0, 1)  (0, 1, 1, 1)  (1, 1, 0, 0)

Note that each coset except the third contains a unique coset representative of minimal weight. Thus, there are two maximum likelihood standard array decision rules d1, d2 : V → C, determined by

    d1(0, 0, 0, 0) = d1(1, 0, 0, 0) = d1(0, 1, 0, 0) = d1(0, 0, 1, 0) = 0,

and

    d2(0, 0, 0, 0) = d2(1, 0, 0, 0) = d2(0, 0, 0, 1) = d2(0, 0, 1, 0) = 0.

We shall have occasion to refer to this example several more times in the sequel.

2 Bit Error Rate (BER)

We shall be sending k-bit (binary) messages across our noisy BSC with crossover probability p.
We regard the k bit messages as vectors in the vector space M = Fk = {(a1, a2, . . . , ak) | ai ∈F}. These messages are to be encoded as n-bit codewords via some “encoding map” M = Fk E − →Fn = V. The image C = E(M) is the code. Thus, corresponding to the k-bit message word m is the n-bit codeword E(m). Owing to the noise in the channel, the received vector will have the form \ E(m) = E(m) + e, where e is the random error vector having distribution Prob(e = e0) = pl(1 −p)n−l, where l = wt(e0). The entire encoding/transmission/decision/decoding process can be viewed thus: M E, ∼ = encode - C (transmission) V ? . . . . . . . . . . . . . . . . . d decision rule - C D = E−1, ∼ = decode - M, m - E(m) E(m) + e ? - d(E(m) + e) - D(d(E(m) + e). 2 BIT ERROR RATE (BER) 8 That is to say, the intended message m gets encoded, sent, received, decided upon, and decoded as the message D(d(E(m) + e)) ∈M, where, as usual, e is the random error vector generated by the BSC. Next, for any vector m ∈M, denote by m(i) the i-th coordinate of m, i.e., m = (m(1), . . . , m(k)) ∈M, and consider the conditional probability Prob(D(d(E(m) + e))(i) ̸= m(i) | E(m) was sent), i = 1, 2, . . . , k. This is simply the probability that the final message word disagrees with the intended message word in the i-th message bit. Put somewhat differently, we may define the random variables Xi(m) = D(d(E(m) + e))(i) + m(i), and consider Prob(Xi(m) = 1), for i = 1, 2, . . . , k. We might ask the following questions about the random variables Xi(m), i = 1, 2, . . . , k. (i) For fixed m ∈M, are the random variables Xi(m), i = 1, 2, . . . , k, identically distributed? (ii) For fixed m ∈M, are the random variables Xi(m), i = 1, 2, . . . , k, independent? (iii) For fixed i, 1 ≤i ≤k, how do the Xi(m) depend on the message vector m ∈M? We would certainly expect the answers to the above questions to depend on the decision rule d : C →V . If we average the probabilities Prob(Xi(m) = 1) over i = 1, 2, . . . 
, k, then this defines the conditional bit error rate:

BER(m) = (1/k) Σ_{i=1}^{k} Prob(Xi(m) = 1) = (1/k) Σ_{i=1}^{k} Prob(D(d(E(m) + e))(i) ≠ m(i) | E(m) was sent).

In other words, given that the message m was the intended message, k · BER(m) is the expected number of bit errors in the message that actually turns up at the receiving end (after deciding and decoding). The (unconditional) bit error rate is the weighted average over all possible messages¹:

¹This is equivalent to the symbol error rate P_symb given on page 20 of F. J. MacWilliams and N. J. A. Sloane's book, The Theory of Error-Correcting Codes, North-Holland Publishing Company, Amsterdam, 1978. While they don't explicitly say so, their definition is valid only for uniform input distributions.

BER = (1/k) Σ_{m∈M} Σ_{i=1}^{k} Prob(Xi(m) = 1) Prob(E(m) was sent)
    = (1/k) Σ_{m∈M} Σ_{i=1}^{k} Prob(D(d(E(m) + e))(i) ≠ m(i) | E(m) was sent) Prob(E(m) was sent).

Thus, k · BER is the expected number of message bit errors per transmission. The above notion of bit error rate is what one might more properly call the post-decoding bit error rate, which is an average of the error probabilities in the message bits. If instead we consider the average of the error probabilities in the code bits, we obtain what would be called the post-decision bit error rate:

BERpd = (1/n) Σ_{m∈M} Σ_{i=1}^{n} Prob((d(E(m) + e))(i) ≠ E(m)(i) | E(m) was sent) Prob(E(m) was sent).

In analogy with the above, n · BERpd is the expected number of codebit errors per transmission. Next, we show that when we use a standard array decision rule d : V → C, the computations of BER and BERpd can be simplified considerably.

Proposition 2.1. Let C ⊆ V be a code with dim C = k and dim V = n, and let d : V → C be a standard array decision rule. Then the post-decision and post-decoding bit error rates are given by

BER = (1/k) Σ_{e∈V} wt(Dd(e)) Prob(e),   and   BERpd = (1/n) Σ_{e∈V} wt(d(e)) Prob(e).

Proof. This is pretty easy.
First of all, note that

Prob(D(d(E(m) + e))(i) ≠ m(i) | E(m) was sent)
  = Prob(D(E(m) + d(e))(i) ≠ m(i) | E(m) was sent)
  = Prob((m + D(d(e)))(i) ≠ m(i) | E(m) was sent)
  = Prob(D(d(e))(i) ≠ 0 | E(m) was sent).

However, the error vector e ∈ V is generated by the BSC independently of which encoded message E(m) was sent; thus

Prob(D(d(e))(i) ≠ 0 | E(m) was sent) = Prob(D(d(e))(i) ≠ 0).

Therefore,

BER = (1/k) Σ_{m∈M} Σ_{i=1}^{k} Prob(D(d(E(m) + e))(i) ≠ m(i) | E(m) was sent) Prob(E(m) was sent)
    = (1/k) Σ_{m∈M} Σ_{i=1}^{k} Prob(D(d(e))(i) ≠ 0) Prob(E(m) was sent)
    = (1/k) Σ_{i=1}^{k} Prob(D(d(e))(i) ≠ 0)
    = (1/k) Σ_{e∈V} wt(Dd(e)) Prob(e).

The proof of the corresponding recipe for BERpd is entirely similar.

In other words, we see that k · BER is the expected weight (measured in M) of the random vector Dd(e), e ∈ V. Similarly, n · BERpd is the expected weight (measured in V) of the random vector d(e), e ∈ V.

It would be of interest to determine under what conditions BER = BERpd. In general, one wouldn't expect them to agree, if only because BER at least ostensibly depends upon the encoding E : M → C, as well as on the decision rule d : V → C, whereas BERpd depends only upon the decision rule. However, if one uses systematic encoding (to be explained below), then one might reasonably inquire as to whether one might have BER = BERpd, say under the assumption of an ML standard array decision rule. In fact, this is what initially spurred my interest in this endeavor, for in my quest for the BER of the Hamming codes, I was referred by Michele Eile to the "standard reference" by J. H. van Lint, Coding Theory, Lecture Notes in Mathematics, vol. 201, Springer-Verlag, New York, 1973, pp. 25–26. However, van Lint computes nBERpd for the Hamming codes and not kBER. I am still searching for a computation of BER (or kBER), although it appears that for the Hamming codes (under the assumption of systematic encoding), BER = BERpd. I'll give evidence for this in the next section.

Example.
We refer again to the example C ⊆ V given on page 6 and compute its post-decision BER relative to the two ML standard array decision rules d1, d2 : V → C. We have

4BERpd = Σ_{e∈V} wt(d1(e)) Prob(e) = Σ_{j=0}^{4} Σ_{e∈V, wt(e)=j} wt(d1(e)) Prob(e)
       = 2p(1 − p)^3 + 17p^2(1 − p)^2 + 10p^3(1 − p) + 3p^4.

The same result holds for the decision rule d2 : V → C.

Definition 2.1. Let C ⊆ V = F^n be a code. An encoding scheme E : M = F^k ≅ C is called systematic, or is a row echelon form encoding scheme, if and only if the k × n matrix G having rows r1, r2, . . . , rk is in row echelon form, where E(0, 0, . . . , 1, 0, . . . , 0) = ri ∈ C (the 1 being in position i). When this happens, a permutation of the columns can be applied to bring the matrix into the form

G = [ I_{k×k} | A_{k×(n−k)} ],

and a codeword (a1, a2, . . . , an) ∈ C will be decoded as D(a1, a2, . . . , an) = (a1, a2, . . . , ak) ∈ M.

To see how different encoding schemes can lead to different BERs, even with respect to the same decision rule, consider once again the example of the two-dimensional code given on page 6. Thus, C ⊆ V = F^4 has basis {r1 = (1, 0, 1, 1), r2 = (0, 1, 0, 1)} and the encoding scheme E : (a1, a2) ↦ a1 r1 + a2 r2 is systematic. With respect to this choice of encoding and with respect to the ML standard array decision rule d = d1 given on page 7, one has

2BER = Σ_{j=0}^{4} Σ_{e∈V, wt(e)=j} wt(Dd(e)) Prob(e)
     = p(1 − p)^3 + 9p^2(1 − p)^2 + 5p^3(1 − p) + p^4
     ≈ p + 6p^2, for small enough p.

On the other hand, were one to take the non-systematic encoding E′ : (a1, a2) ↦ a1 r′1 + a2 r′2, where r′1 = r1 = (1, 0, 1, 1), r′2 = r1 + r2 = (1, 1, 1, 0), then the corresponding BER (relative to d = d1) is given by

2BER = Σ_{j=0}^{4} Σ_{e∈V, wt(e)=j} wt(D′d(e)) Prob(e)
     = 2p(1 − p)^3 + 7p^2(1 − p)^2 + 6p^3(1 − p) + p^4
     ≈ 2p + p^2, for small p.
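The coefficients in these polynomials can be checked by direct enumeration of all 16 error vectors (a sketch; helper names are ours — d1 subtracts the coset leader and D reads off the systematic message bits):

```python
from itertools import product

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))
def wt(v): return sum(v)

g1, g2 = (1, 0, 1, 1), (0, 1, 0, 1)
C = [(0, 0, 0, 0), g1, g2, add(g1, g2)]
# Coset leaders chosen for the rule d1 (one per coset, minimal weight).
reps = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]

def d1(v):
    # Standard array rule: subtract the leader of v's coset.
    for r in reps:
        if add(v, r) in C:
            return add(v, r)

def D(c):
    # Systematic decoding: the first two coordinates are the message bits.
    return c[:2]

pd = [0] * 5    # pd[j]  = sum of wt(d1(e))    over errors e of weight j
ber = [0] * 5   # ber[j] = sum of wt(D(d1(e))) over errors e of weight j
for e in product((0, 1), repeat=4):
    pd[wt(e)] += wt(d1(e))
    ber[wt(e)] += wt(D(d1(e)))

print(pd)   # coefficients of 4*BERpd: [0, 2, 17, 10, 3]
print(ber)  # coefficients of 2*BER:   [0, 1, 9, 5, 1]
```

The weight-j bucket multiplies p^j(1 − p)^{4−j}, reproducing both displayed polynomials.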
Therefore, we conclude that, at least for small enough values of p (i.e., for a good enough BSC), the BER computed in terms of the ML standard array decision rule d = d1 and systematic encoding is less than the corresponding BER computed in terms of the non-systematic encoding scheme E′ : M → C.²

²The equivocation "for small enough p" turns out not to be necessary, since one can show in this example that the BER for systematic encoding E is less than the BER with respect to E′ : M → C for all p between 0 and 1/2.

Remark. Note that neither of the post-decoding bit error rates agrees with the post-decision BER given on page 10.

The following theorem would be highly desirable—I'll state it as a conjecture. It should be known, but I've not seen any relevant discussions.

Conjecture. Let C ⊆ V = F^n be a code and fix an ML standard array decision rule d : V → C. Let E, E′ : M = F^k ≅ C be encoding schemes with E systematic. If BER_E, BER_E′ are the corresponding bit error rates, then BER_E ≤ BER_E′ for small enough p.³ ⁴

³Again, this restriction on p might not be necessary.
⁴In the definition of BER in MacWilliams–Sloane, it is tacitly assumed that the encoding scheme is systematic.

3 The Hamming Codes

In principle, the Hamming codes are very easy to describe. To this end, we fix an l-dimensional vector space W over the binary field F2. Let P be the set of nonzero vectors in W and let V = F⟨P⟩ be the vector space with basis P. Thus we have the "tautological map" τ : V → W determined by v ↦ v, v ∈ P. We set H(W) to be the kernel of τ : V → W, and call it the Hamming code on W.⁵ Thus the Hamming code on W fits into an exact sequence

0 → H(W) → F⟨P⟩ →τ W → 0,

from which it follows that dim H(W) = 2^l − l − 1. It is obvious that the minimal weight of H(W) is 3, since no two (or fewer) distinct nonzero vectors in W can sum to zero. (Note that the vector space V can be identified with the Boolean group (with symmetric difference as the operation) on the set of nonzero vectors of W.)

⁵If n = 2^l − 1 and k = n − l, we sometimes call H(W) the (n, k)-Hamming code.

Definition 3.1. Let C ⊂ V be an (n, k) linear block code of minimal weight δ.
We say that C is a perfect code if there exists r < δ such that

V = ⋃_{c∈C} Bc(r), a disjoint union,

where Bc(r) = {v ∈ V | wt(v + c) ≤ r}. For such a code the ML decision rule is uniquely determined: if v ∈ V we take d(v) = c, where c ∈ C is the unique codeword lying in Bv(r), with r chosen in accordance with the above. Furthermore, as a result of Lemma 1.3, we conclude that the ML decision rule is necessarily a standard array decision rule. This is all relevant because of

Lemma 3.1. The Hamming code C = H(W) ⊆ V is a perfect code of minimal weight 3.

Proof. Since C has minimal weight 3, it already follows that for all c ≠ c′ in C we have Bc(1) ∩ Bc′(1) = ∅. Next, let dim W = l; it is clear that for all c ∈ C, |Bc(1)| = n + 1 = 2^l. Therefore,

|⋃_{c∈C} Bc(1)| = (n + 1)|C| = (2^l)(2^{n−l}) = 2^n = |V|.

4 Second-Order BER for the Hamming Codes

Let W be an l-dimensional vector space over the binary field F, and let C = H(W) ⊆ V = F⟨P⟩ be the corresponding Hamming code, where, as above, P is the set of nonzero vectors of W. Relative to the unique ML standard array decision rule d : V → C and a systematic encoding scheme E : M = F^k → C, k = 2^l − l − 1, we shall compute the second-order term (= coefficient of p^2) in the bit error rate. Indeed, this makes sense, as Proposition 2.1 shows that the bit error rate, as well as the post-decision bit error rate, are polynomials in the crossover probability p of the BSC.

We wish to give an intrinsic characterization of systematic encoding schemes E : M → C. If n = 2^l − 1, then any fixed ordering of the vectors in P gives an isomorphism F^n → V = F⟨P⟩. For convenience, if S ⊆ P is a subset, let [S] = Σ_{s∈S} s ∈ W. Next, let B = {w1, w2, . . .
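For l = 3 the perfectness claim in Lemma 3.1 can be checked by brute force (a sketch; we represent V as F_2^7 via a fixed ordering of P, and helper names are ours):

```python
from itertools import product

l = 3
# P = nonzero vectors of W = F_2^3, so n = 2^3 - 1 = 7.
P = [p for p in product((0, 1), repeat=l) if any(p)]
n = len(P)

def tau(v):
    # The tautological map: sum (in W) of the elements of P selected by v.
    s = (0,) * l
    for vi, p in zip(v, P):
        if vi:
            s = tuple((a + b) % 2 for a, b in zip(s, p))
    return s

V = list(product((0, 1), repeat=n))
H = [v for v in V if tau(v) == (0,) * l]   # the Hamming code H(W) = ker(tau)

assert len(H) == 2 ** (n - l)              # |C| = 2^(2^l - l - 1) = 16
assert min(sum(c) for c in H if any(c)) == 3   # minimal weight 3

# Perfectness: every v in V lies in exactly one ball B_c(1).
def dist(u, v): return sum(a != b for a, b in zip(u, v))
for v in V:
    assert sum(1 for c in H if dist(v, c) <= 1) == 1
print("H(W) is perfect for l = 3")
```

The same check runs (more slowly) for l = 4 by changing one constant.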
, wl} be a basis of W. For each j ≥ 2, let Sij ⊆ B, i = 1, 2, . . . , C(l, j), be the distinct subsets of size j in B. Note that

Σ_{j=2}^{l} C(l, j) = 2^l − l − 1,

and that the vectors [Sij] ∈ W are all distinct and none is in B. Next, for each nonzero vector w ∈ P, let µw : F → V = F⟨P⟩ be the w-th coordinate function. If Q ⊆ P, define the element ⟨Q⟩ ∈ V by setting

⟨Q⟩ = Σ_{q∈Q} µq(1) ∈ V.

Finally, define the vectors rij ∈ V, 2 ≤ j ≤ l, 1 ≤ i ≤ C(l, j), by setting

rij = ⟨Sij ∪ {[Sij]}⟩.

For example, consider the (7, 4)-Hamming code H(W), and so W is 3-dimensional and has basis B = {w1, w2, w3}. Take the ordering of the j-element subsets of B to be {w1, w2}, {w1, w3}, {w2, w3}, {w1, w2, w3}. Then relative to this ordering, the elements rij ∈ V are the row vectors

r12 = (1, 0, 0, 0, 1, 1, 0),
r22 = (0, 1, 0, 0, 1, 0, 1),
r32 = (0, 0, 1, 0, 0, 1, 1),
r13 = (0, 0, 0, 1, 1, 1, 1).

It is clear that these vectors form a basis of C = H(W) and that the corresponding matrix is in row echelon form. Thus, the encoding scheme E : (1, 0, 0, 0) ↦ r12, (0, 1, 0, 0) ↦ r22, (0, 0, 1, 0) ↦ r32, (0, 0, 0, 1) ↦ r13 is systematic. Finally, it is not hard to see that any systematic encoding scheme must arise in the above fashion.

We let BER(2), BERpd(2) be the second-order bit error rate and post-decision bit error rate, respectively, of the Hamming code using the unique ML standard array decision rule d : V → C and a systematic encoding scheme E : M → C. Thus,

kBER(2) = Σ_{e∈V, wt(e)=2} wt(Dd(e)) Prob(e) = Σ_{e∈V, wt(e)=2} wt(Dd(e)) p^2 (1 − p)^{n−2},

and

nBERpd(2) = Σ_{e∈V, wt(e)=2} wt(d(e)) Prob(e) = Σ_{e∈V, wt(e)=2} wt(d(e)) p^2 (1 − p)^{n−2}.

We prove below that at least the second-order bit error rates do agree:

Theorem 4.1. For the Hamming code H(W) with decision rule and encoding scheme as above,

BER(2) = BERpd(2) = (3/2)(n − 1).

Proof. Perhaps we should note first that neither BER nor BERpd contains a nonzero linear term in p.
This is because error vectors of weight 1 are closest to the zero vector in C, and hence if wt(e) = 1, then d(e) = 0, i.e., such errors get "corrected." Next, note that if wt(e) = 2, then d(e) ∈ C is necessarily a vector of weight 3. Therefore,

(1/n) Σ_{e∈V, wt(e)=2} wt(d(e)) = (3/n) C(n, 2) = (3/2)(n − 1).

On the other hand, note that each codeword c ∈ C of weight 3 is d(e) for precisely three error vectors e ∈ V of weight 2. Therefore,

(1/k) Σ_{e∈V, wt(e)=2} wt(Dd(e)) = (3/k) Σ_{c∈C, wt(c)=3} wt(D(c)).

Therefore, we have reduced the problem to that of showing

Σ_{c∈C, wt(c)=3} wt(D(c)) = (k/2)(n − 1).

We now recall the basis elements {rij | 2 ≤ j ≤ l, 1 ≤ i ≤ C(l, j)} of C = H(W). Clearly the elements of C such that wt(D(c)) = 1 are the basis elements rij. However, those of weight 3 (in C) are precisely the basis vectors ri2, 1 ≤ i ≤ C(l, 2); i.e., the number of vectors c ∈ C of weight 3 such that wt(D(c)) = 1 is C(l, 2).

Next, the vectors in C with wt(D(c)) = 2 are of the form rij + ruv, where {i, j} ≠ {u, v}. Such a vector has weight 3 in C precisely when |Sij| = |Suv| + 1 and Sij ⊇ Suv, or when |Sij| = |Suv| − 1 and Sij ⊆ Suv. The number of such subset pairs can be enumerated thus:

C(l, 3)C(3, 2) + C(l, 4)C(4, 3) + · · · + C(l, l)C(l, l − 1) = Σ_{s=3}^{l} C(l, s)C(s, s − 1).

This quantity can be computed fairly easily. We have, by the Binomial Theorem, that

(1 + x)^l = Σ_{s=0}^{l} C(l, s) x^s,

and so

l(1 + x)^{l−1} = d/dx (1 + x)^l = Σ_{s=1}^{l} C(l, s) s x^{s−1} = Σ_{s=1}^{l} C(l, s) C(s, s − 1) x^{s−1}.

Therefore,

Σ_{s=1}^{l} C(l, s) C(s, s − 1) = l(1 + 1)^{l−1} = l·2^{l−1},

from which it follows that

Σ_{s=3}^{l} C(l, s) C(s, s − 1) = l·2^{l−1} − C(l, 2)C(2, 1) − C(l, 1)C(1, 0) = l·2^{l−1} − l(l − 1) − l = l(2^{l−1} − l).

In other words, the number of codewords c ∈ C of weight 3 with wt(D(c)) = 2 is l(2^{l−1} − l). Finally, the number of codewords c ∈ C of weight 3 having wt(D(c)) = 3 is equal to

(# codewords in C of weight 3) − C(l, 2) − l(2^{l−1} − l).
Clearly, the number of codewords of weight 3 in C is equal to the number of 2-dimensional subspaces of W, which in turn is given by the Gaussian coefficient

[l choose 2]_2 = (2^l − 1)(2^l − 2) / ((2^2 − 1)(2^2 − 2)) = (1/3) C(n, 2).

Therefore, the number of codewords c ∈ C of weight 3 having wt(D(c)) = 3 is equal to

(1/3) C(n, 2) − C(l, 2) − l(2^{l−1} − l).

Putting all this together, we have

Σ_{c∈C, wt(c)=3} wt(D(c)) = C(l, 2) + 2l(2^{l−1} − l) + 3[(1/3) C(n, 2) − C(l, 2) − l(2^{l−1} − l)] = (k/2)(n − 1)

after some calculation! This completes the proof.

5 van Lint's Calculation of BERpd

The calculation of BERpd for the Hamming codes, first given by van Lint,⁶ is actually fairly easy. To this end let C = H(W) be an (n, k)-Hamming code, where n = 2^l − 1, k = 2^l − l − 1. For each integer i = 0, 1, 2, . . . , n, let Ai = #(codewords of weight i). Thus, we have already seen that A0 = 1, A1 = A2 = 0, A3 = (1/3) C(n, 2). We have

nBERpd = Σ_{e∈V} wt(d(e)) Prob(e) = Σ_{c∈C} Σ_{e∈Bc(1)} wt(d(e)) Prob(e)
       = Σ_{i=0}^{n} Σ_{c∈C, wt(c)=i} Σ_{e∈Bc(1)} wt(d(e)) Prob(e).

Next, for a fixed codevector c ∈ C of weight i, note that

Σ_{e∈Bc(1)} wt(d(e)) Prob(e) = i^2 p^{i−1}(1 − p)^{n−i+1} + i p^i (1 − p)^{n−i} + i(n − i) p^{i+1}(1 − p)^{n−i−1} = i P(p, i),

where we have set

P(p, i) = i p^{i−1}(1 − p)^{n−i+1} + p^i (1 − p)^{n−i} + (n − i) p^{i+1}(1 − p)^{n−i−1}.

Therefore, it follows that

nBERpd = Σ_{i=0}^{n} i Ai P(p, i) = Σ_{i=0}^{n} i Ai [i p^{i−1}(1 − p)^{n−i+1} + p^i (1 − p)^{n−i} + (n − i) p^{i+1}(1 − p)^{n−i−1}].

⁶Coding Theory, Lecture Notes in Mathematics, vol. 201, Springer-Verlag, New York, 1973, pp. 25–26.

Therefore, we see that for the Hamming codes, the post-decision bit error rate depends solely on the so-called weight enumerator polynomial, which is given by

A(x) = Σ_{i=0}^{n} Ai x^i.

Van Lint has also computed A(x); we shall sketch his development. We begin by setting Ai = {c ∈ C | wt(c) = i} (so that, abusing notation, |Ai| = Ai). Notice that the weight-i vectors v ∈ V come from three sources:

(1) Those that are already codevectors in Ai.
(2) Those that are at distance 1 from codevectors in A_{i+1}; note that each such codevector gives rise to C(i+1, i) = i + 1 vectors of weight i.

(3) Those that are at distance 1 from codevectors in A_{i−1}; each such codevector gives rise to n − (i − 1) vectors of weight i.

From the above, it follows that

C(n, i) = |Ai| + (i + 1)|A_{i+1}| + (n − i + 1)|A_{i−1}|,

and so

C(n, i) = Ai + (i + 1)A_{i+1} + (n − i + 1)A_{i−1}.

Multiply both sides of this equation by x^i and sum over i = 0, 1, . . . , n (with the conventions A_{−1} = A_{n+1} = 0):

Σ_{i} C(n, i) x^i = Σ_{i} Ai x^i + Σ_{i} (i + 1)A_{i+1} x^i + Σ_{i} (n − i + 1)A_{i−1} x^i
                  = A(x) + A′(x) + nxA(x) − x^2 Σ_{i} i Ai x^{i−1}.

This implies, of course, that

(1 + x)^n = A(x) + A′(x) + nxA(x) − x^2 A′(x),

which can be written as the first-order linear equation

A′(x) + ((1 + nx)/(1 − x^2)) A(x) = (1 + x)^n / (1 − x^2).

Recall that the first-order linear equation y′ + p(x)y = q(x) is solved by multiplying through by the integrating factor u(x) = e^{∫ p(x) dx}, with the solution being

y = (1/u(x)) ∫ u(x) q(x) dx.

If we apply this to the above, the final result is that

A(x) = (1/(n + 1))(1 + x)^n + (n/(n + 1))(1 + x)^{(n−1)/2}(1 − x)^{(n+1)/2}.

We return now to the computation of BERpd. If we set q = 1 − p and x = p/q, then we have

nBERpd = q^n Σ_{i=0}^{n} i Ai {i x^{i−1} + x^i + (n − i) x^{i+1}}
       = q^n {((n − 1)x^2 + x + 1) A′(x) + (x − x^3) A′′(x)}.

After some calculation, this ultimately boils down to

nBERpd = (n/(n + 1)) [(n − 1) p^2/(1 − p) + 1] [1 − (1 + (n − 1)p)(1 − 2p)^{(n−1)/2}]
       + (p(1 − 2p)/(1 − p)) · (n(n − 1)/(n + 1)) [1 + (np^2 − p^2 + 4p − 1)(1 − 2p)^{(n−3)/2}].

In order to interpret the limiting value of the above, note first that the expression α = np represents the expected number of codebit errors in unencoded transmission of n codebits. So we fix α = np and let n → ∞, p → 0 in the above expression for nBERpd and find that

nBERpd → 1 − (1 + α)e^{−α} + α(1 − e^{−α}) = α + 1 − (1 + 2α)e^{−α}.
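The closed form for A(x) can be spot-checked at n = 7, where it should reproduce the (7, 4) weight enumerator 1 + 7x^3 + 7x^4 + x^7. A sketch using exact integer polynomial arithmetic (helper names are ours):

```python
def polymul(a, b):
    # Multiply two polynomials given as coefficient lists (lowest degree first).
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polypow(p, k):
    out = [1]
    for _ in range(k):
        out = polymul(out, p)
    return out

n = 7
# A(x) = (1/(n+1))(1+x)^n + (n/(n+1))(1+x)^((n-1)/2) (1-x)^((n+1)/2)
t1 = polypow([1, 1], n)
t2 = polymul(polypow([1, 1], (n - 1) // 2), polypow([1, -1], (n + 1) // 2))
A = [(a + n * b) // (n + 1) for a, b in zip(t1, t2)]
print(A)  # [1, 0, 0, 7, 7, 0, 0, 1]
```

All eight coefficients come out as exact integers, as they must for a weight enumerator.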
In particular, if (1 + 2α)e^{−α} < 1 (which is roughly saying that α > 1.2564), we see that asymptotically nBERpd is greater than α, i.e., the Hamming codes do worse than with no coding at all!

6 BER = BERpd for the (7, 4)-Hamming Code

In the previous section, we saw that if A(x) = Σ_{i=0}^{n} Ai x^i is the weight enumerator for the Hamming code C, then the post-decision bit error rate is given by

BERpd = (1/n) Σ_{i=0}^{n} i Ai P(p, i),

where

P(p, i) = i p^{i−1}(1 − p)^{n−i+1} + p^i(1 − p)^{n−i} + (n − i) p^{i+1}(1 − p)^{n−i−1}.

On the other hand, relative to decoding D : C → M, we have that the post-decoding bit error rate is

BER = (1/k) Σ_{e∈V} wt(D(d(e))) Prob(e)
    = (1/k) Σ_{c∈C} Σ_{e∈Bc(1)} wt(D(d(e))) Prob(e)
    = (1/k) Σ_{i=0}^{n} Σ_{c∈C, wt(c)=i} Σ_{e∈Bc(1)} wt(D(d(e))) Prob(e)
    = (1/k) Σ_{i=0}^{n} Σ_{c∈C, wt(c)=i} wt(D(c)) P(p, i)
    = (1/k) Σ_{i=0}^{n} P(p, i) Σ_{c∈C, wt(c)=i} wt(D(c)).

Therefore, we see that BERpd = BER provided that we can show

Σ_{c∈C, wt(c)=i} wt(D(c)) = (ki/n) Ai, for each i = 0, 1, . . . , n.

For the (7, 4)-Hamming code, the weight enumerator polynomial is

A(x) = 1 + 7x^3 + 7x^4 + x^7.

Verifying that

Σ_{c∈C, wt(c)=3} wt(D(c)) = 12,   Σ_{c∈C, wt(c)=4} wt(D(c)) = 16,   and   Σ_{c∈C, wt(c)=7} wt(D(c)) = 4

is entirely routine and can be left to the reader.

7 Performance of the Hamming Codes

This final section can best be thought of as an "engineering appendix," as it documents how electrical engineers view the performance of codes. First of all we need a way to compute the crossover probability of error for a BSC. This depends on a number of factors, including the mode of binary signaling, the mode of reception, the energy Eb per sent bit, and the white noise spectral density N0.
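The three sums left to the reader can be verified mechanically from the systematic generators r12, r22, r32, r13 exhibited on page 14, with D reading off the first four coordinates (a sketch; helper names are ours):

```python
from itertools import product

G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]   # systematic generators r12, r22, r32, r13

def encode(m):
    c = (0,) * 7
    for mi, row in zip(m, G):
        if mi:
            c = tuple((a + b) % 2 for a, b in zip(c, row))
    return c

# sums[w] = sum of wt(D(c)) over codewords c of weight w,
# where D(c) is the message part c[:4] of a systematic codeword.
sums = {}
for m in product((0, 1), repeat=4):
    c = encode(m)
    w = sum(c)
    sums[w] = sums.get(w, 0) + sum(c[:4])

print(sorted(sums.items()))  # [(0, 0), (3, 12), (4, 16), (7, 4)]
```

Each value matches (ki/n)·Ai: for instance (4·3/7)·7 = 12 at i = 3, so BER = BERpd for this code.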
A common assumption is to use what is called "bipolar signaling," and invoke a theorem of electrical engineering which asserts that the probability of error (= crossover probability) is minimized precisely when the "matched filter" reception design⁷ is used, with the resulting error probability given by

p = Q(√(2Eb/N0)),

where the Q function is defined by

Q(z) = (1/√(2π)) ∫_z^∞ e^{−λ²/2} dλ.

In applying this to, say, the (7, 4)-Hamming code, we must realize that the above calculation is predicated on having invested Eb joules in one bit. However, in the (7, 4)-Hamming code there is redundancy to the extent that seven actual electronic "bits" are sent for every four message bits. Since the BER is reflective of the message bits, an energy budget of Eb joules per message bit leaves only (4/7)Eb joules for each codebit (the physically transmitted bit). Therefore, if we write the bit error rate as a polynomial in the crossover probability p, BER = B(p), then in terms of Eb/N0 (the "engineering standard") we would use

BER = B(Q(√(8Eb/7N0)))

for the computation. In the graph below, the second-order approximation

BER ≈ 9p² = 9 [Q(√(8Eb/7N0))]²,

taken from Theorem 4.1, has been used. The resulting performance graph, with the comparison taken against unencoded transmission, is as below:

⁷Such a filter is characterized by the fact that its impulse response is matched to a (reverse copy of) the known input signal.

[Figure: Performance of (7, 4)-Hamming Code vs. Unencoded Transmission (· · · ). Vertical axis: BER, from 1 down to 10^−9 (log scale); horizontal axis: Eb/N0, from −1 to 12 dB.]

The way engineers read this graph is by saying, for example, that if one wants a bit error rate of no worse than 10^−8, then one must arrange to send signals that are at least 11.5 dB over the background white noise for the Hamming code and at least 12 dB over the background white noise for unencoded transmission.
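These curves can be regenerated numerically: the standard library's `math.erfc` gives Q via Q(z) = erfc(z/√2)/2 (a sketch; function names are ours, and the coded curve uses the second-order approximation only):

```python
from math import erfc, sqrt

def Q(z):
    # Gaussian tail probability: Q(z) = (1/2) erfc(z / sqrt(2)).
    return 0.5 * erfc(z / sqrt(2))

def ber_hamming74(ebn0_db):
    # BER ~ 9 p^2 with p = Q(sqrt(8 Eb / 7 N0)), the (4/7)-rate energy scaling.
    ebn0 = 10 ** (ebn0_db / 10)
    p = Q(sqrt(8 * ebn0 / 7))
    return 9 * p * p

def ber_uncoded(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return Q(sqrt(2 * ebn0))

for db in (4, 6, 8, 10):
    print(db, ber_hamming74(db), ber_uncoded(db))
```

At high Eb/N0 the coded curve falls below the uncoded one, reproducing the fraction-of-a-dB gain read off the graph.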
Put differently, at a BER of 10^−8 this is roughly a 1/2 dB gain over unencoded transmission. Below, we have given the performance graph for the (15, 11)-Hamming code. In this case, at a BER of 10^−8 we see approximately a 1.5 dB gain as compared with unencoded transmission.

[Figure: Performance of (15, 11)-Hamming Code vs. Unencoded Transmission (· · · ). Vertical axis: BER, from 1 down to 10^−12 (log scale); horizontal axis: Eb/N0, from −1 to 12 dB.]
190624
https://www.chegg.com/homework-help/questions-and-answers/constant-term-expansion-binomial-1-x-2x-6-20-160-256-1-q133091061
Solved: The constant term in the expansion of the binomial | Chegg.com

Question: The constant term in the expansion of the binomial (1/x + 2x)^6 is
(a) 20  (b) 160  (c) 256  (d) 1

Step 1: Use the binomial expansion theorem to find each term. The binomial theorem states

(a + b)^n = Σ_{k=0}^{n} C(n, k) a^{n−k} b^k.

Here,

(1/x + 2x)^6 = Σ_{k=0}^{6} [6! / ((6 − k)! k!)] (1/x)^{6−k} (2x)^k.
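The remaining steps reduce to locating the term whose power of x is zero; a quick check (names are ours):

```python
from math import comb

# General term: C(6, k) * (1/x)^(6-k) * (2x)^k carries x-exponent (2k - 6),
# so the constant term occurs at k = 3.
terms = {2 * k - 6: comb(6, k) * 2 ** k for k in range(7)}
print(terms[0])  # 160
```

The exponent 2k − 6 vanishes at k = 3, giving C(6, 3) · 2³ = 20 · 8 = 160, i.e., option (b).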
190625
https://intuendi.com/resource-center/inventory-carrying-costs/
Inventory Carrying Costs: Analysis, Calculation, and Reduction

There are many expenses incurred when holding inventory in storage – the inventory carrying costs. These are made up of many costs such as capital, storage, administrative, security, and more. With so many things costing a business money, calculating these costs accurately is essential for optimal utilization of a company's financial resources. This article will provide you with the need-to-know basics of inventory carrying costs: what these costs are, how to calculate them, why they are important, and the many ways to help reduce them.

What is Inventory Carrying Cost

Inventory carrying cost, also known as inventory holding cost, is the total cost of owning and storing inventory over any span of time. It includes the capital tied up in inventory, the costs involved in maintaining the inventory, and the risks associated with storing it such as theft and spoilage. When it comes to inventory management, inventory carrying costs are a key factor affecting order quantity and optimal inventory levels. Holding excess inventory is wasteful if the inventory carrying costs are high, so the goal is always to minimize these costs without affecting turnover time. However, low inventory carrying costs mean a company can hold extra inventory to meet demand and avoid stockouts, which is beneficial.

Detailed Analysis of Inventory Carrying Costs

There are five main components that make up inventory carrying costs, with each being influenced and affected by different variables and factors. Let's take a look at each one in more detail.

Capital Cost

Capital cost is the largest component of the total inventory carrying cost, making up roughly 40% to 60% of the total. It is the cost of the money invested in inventory, also known as the opportunity cost of inventory.
This is because that capital could be making money if it were invested in stocks or bonds, instead of being tied up in inventory. Capital cost can be calculated by multiplying the average inventory value by the annual interest rate or the cost of capital. The average inventory value can be estimated by dividing the sum of the beginning and ending inventory values by two, or by using the economic order quantity (EOQ) formula. The annual interest rate or the cost of capital can be obtained from the market or the company’s financial statements. For example, suppose a company has an average inventory value of $100,000 and an annual interest rate of 10%. The capital cost of inventory is: ``` Capital Cost = Average Inventory Value x Annual Interest Rate Capital Cost = $100,000 x 0.10 Capital Cost = $10,000 ``` This means that the company is losing $10,000 per year by holding inventory instead of investing the money elsewhere. Storage Costs Storage costs make up an estimated 10% to 25% of the total inventory carrying cost. These costs are made up of how much you pay for security, rent, utilities, etc, of the premises you store your inventory in. This cost varies according to the size, location, etc, of the premises, as well as the type, quantity, etc, of your inventory. Storage costs can be calculated by multiplying the storage cost per unit of space by the space occupied by inventory. The storage cost per unit of space can be obtained from the lease agreement or the market price. The space occupied by inventory can be estimated by multiplying the number of units by the volume or weight per unit, or by using the cube per order index (COI) formula. For example, suppose a company pays $5 per square foot per month for renting a warehouse and has 10,000 units of inventory that occupy 0.5 cubic feet each. 
The storage cost of inventory is: ``` Storage Cost = Storage Cost per Unit of Space x Space Occupied by Inventory Storage Cost = $5 x 10,000 x 0.5 Storage Cost = $25,000 ``` This means that the company is spending $25,000 per month for storing inventory. Inventory Service Costs Inventory service costs make up about 5% to 15% of the total inventory carrying cost. Inventory service costs are made up of labor, equipment, handling, labeling, packaging, etc. These costs can be affected by the complexity, frequency, and quality standards of the inventory. Inventory service costs can be calculated by multiplying the service cost per unit of inventory by the number of units. The service cost per unit of inventory can be derived from the labor and overhead rates, or from the activity-based costing (ABC) method. The number of units can be obtained from the inventory records or the inventory turnover ratio. For example, suppose a company spends $0.50 per unit of inventory for handling, packaging, labeling, inspecting, testing, and audits, and has 10,000 units of inventory. The inventory service cost is: ``` Inventory Service Cost = Service Cost per Unit of Inventory x Number of Units Inventory Service Cost = $0.50 x 10,000 Inventory Service Cost = $5,000 ``` This means that the company is spending $5,000 to provide services to inventory. Inventory Risk Costs Inventory risk costs make up between 5% to 10% of the total inventory carrying cost. These costs include any potential losses of inventory, whether due to fire, theft, spoilage, or obsolescence. It also includes insurance, markdowns, write-offs, etc. Inventory risk costs can vary according to the shelf life or value of the inventory, as more valuable inventory requires more security. Inventory risk costs can be calculated by multiplying the risk cost per unit of inventory by the number of units. The risk cost per unit of inventory can be estimated by using historical data, the expected loss rate, or the insurance rate. 
The number of units can be obtained from the inventory records or the inventory turnover ratio. For example, suppose a company has a risk cost of $0.10 per unit of inventory due to obsolescence, deterioration, spoilage, shrinkage, theft, fire, flood, or other disasters, and has 10,000 units of inventory. The inventory risk cost is: ``` Inventory Risk Cost = Risk Cost per Unit of Inventory x Number of Units Inventory Risk Cost = $0.10 x 10,000 Inventory Risk Cost = $1,000 ``` This means that the company is losing $1,000 due to inventory risks. Operational and Administrative Costs Operational and administrative costs make up about 5% to 10% of the total inventory carrying cost. These costs include all the costs of managing the inventory, such as ordering, receiving, analyzing, etc, as well as the costs of labor, transportation, and software. The costs are all influenced by the efficiency, accuracy, and frequency of the inventory process. Operational and administrative costs can be calculated by multiplying the operational and administrative cost per unit of inventory by the number of units. The operational and administrative cost per unit of inventory can be derived from the labor and overhead rates, or from the activity-based costing (ABC) method. The number of units can be obtained from the inventory records or the inventory turnover ratio. For example, suppose a company spends $0.20 per unit of inventory for ordering, receiving, recording, tracking, reporting, and analyzing, and has 10,000 units of inventory. The operational and administrative cost is: ``` Operational and Administrative Cost = Operational and Administrative Cost per Unit of Inventory x Number of Units Operational and Administrative Cost = $0.20 x 10,000 Operational and Administrative Cost = $2,000 ``` This means that the company is spending $2,000 on managing and administering inventory. 
Calculating Inventory Carrying Costs

To calculate the total inventory carrying cost, we need to add up all the components discussed above. The formula for inventory carrying cost is:

Inventory Carrying Cost = Capital Cost + Storage Cost + Inventory Service Cost + Inventory Risk Cost + Operational and Administrative Cost

Using the example values from the previous sections, the inventory carrying cost is:

```
Inventory Carrying Cost = $10,000 + $25,000 + $5,000 + $1,000 + $2,000
Inventory Carrying Cost = $43,000
```

This means that the company is spending $43,000 per year for holding and maintaining inventory. To express the inventory carrying cost as a percentage of the average inventory value, we divide the inventory carrying cost by the average inventory value and multiply by 100. The formula for inventory carrying cost percentage is:

Inventory Carrying Cost Percentage = (Inventory Carrying Cost / Average Inventory Value) x 100

Using the example values from the previous sections, the inventory carrying cost percentage is:

```
Inventory Carrying Cost Percentage = ($43,000 / $100,000) x 100
Inventory Carrying Cost Percentage = 43%
```

This means that the company is spending 43% of the average inventory value on holding and maintaining inventory. Keeping the inventory carrying cost percentage as low as possible is critical, because the higher the cost, the lower the profits and liquidity of a business will be. While the costs can vary due to market conditions, the product, or the industry, they can always be kept lower with close monitoring and control.

Financial Impact of Carrying Costs

We have already established that inventory carrying costs have a huge effect on the financial health of a business, so let's delve deeper into the different situations in which this can occur.

Income Statement: Inventory carrying costs reduce the gross profit and the net income of a company.
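The two formulas above can be wrapped in a small helper (a sketch; the function name and signature are ours):

```python
def carrying_cost(capital, storage, service, risk, admin, avg_inventory_value):
    """Return (total carrying cost, carrying cost as % of average inventory value)."""
    total = capital + storage + service + risk + admin
    pct = 100 * total / avg_inventory_value
    return total, pct

# The article's running example: $10k + $25k + $5k + $1k + $2k against $100k inventory.
total, pct = carrying_cost(10_000, 25_000, 5_000, 1_000, 2_000, 100_000)
print(total, pct)  # 43000 43.0
```

Feeding in fresh component estimates each period makes the percentage easy to track over time.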
They are part of the cost of goods sold (COGS), which is the direct cost of producing or purchasing the goods sold by a company. The higher the inventory carrying costs, the lower the gross profit margin and the net profit margin. For example, suppose a company has a revenue of $200,000, a COGS of $100,000 (including $43,000 of inventory carrying costs), and an operating expense of $50,000. The gross profit is:

```
Gross Profit = Revenue - COGS
Gross Profit = $200,000 - $100,000
Gross Profit = $100,000
```

The gross profit margin is:

```
Gross Profit Margin = (Gross Profit / Revenue) x 100
Gross Profit Margin = ($100,000 / $200,000) x 100
Gross Profit Margin = 50%
```

The net income is:

```
Net Income = Gross Profit - Operating Expense
Net Income = $100,000 - $50,000
Net Income = $50,000
```

The net profit margin is:

```
Net Profit Margin = (Net Income / Revenue) x 100
Net Profit Margin = ($50,000 / $200,000) x 100
Net Profit Margin = 25%
```

If the company could reduce its inventory carrying costs by 10% (a saving of $4,300), the COGS would be $95,700, the gross profit would be $104,300, the gross profit margin would be 52.15%, the net income would be $54,300, and the net profit margin would be 27.15%. This shows how inventory carrying costs can affect the profitability of a company.

Balance Sheet: Inventory carrying costs increase the inventory value and the total assets of a company. They are part of the current assets, which are the assets that can be converted into cash within a year. The higher the inventory carrying costs, the higher the current ratio and the working capital. However, this does not necessarily mean that the company is more liquid or solvent. Inventory carrying costs also affect the inventory turnover and the days sales of inventory (DSI), which are measures of how efficiently a company manages its inventory: the higher the inventory turnover and the lower the DSI, the better.

Cash Flow Statement: Inventory carrying costs decrease the cash flow from operations and the free cash flow of a company.
They are part of the changes in working capital, which are the changes in the current assets and liabilities that affect the cash flow. The higher the inventory carrying costs, the lower the cash flow from operations and the free cash flow. For example, suppose a company has a net income of $50,000, a depreciation of $10,000, an increase in inventory of $10,000, and a capital expenditure of $20,000. The cash flow from operations is:

```
Cash Flow from Operations = Net Income + Depreciation - Changes in Working Capital
Cash Flow from Operations = $50,000 + $10,000 - $10,000
Cash Flow from Operations = $50,000
```

The free cash flow is:

```
Free Cash Flow = Cash Flow from Operations - Capital Expenditure
Free Cash Flow = $50,000 - $20,000
Free Cash Flow = $30,000
```

If the company could reduce its inventory build-up (and thus its carrying costs) by 10%, the increase in inventory would be $9,000, the cash flow from operations would be $51,000, and the free cash flow would be $31,000. This shows how inventory carrying costs can affect the cash flow of a company.

Intangible Costs of Carrying Costs

Not all inventory-related costs can be measured, yet they can still have a negative effect on a company's reputation, customer satisfaction, and competitive advantage. Loss of sales is one type of intangible cost. If you hold excess inventory you may struggle to sell all the items, simply because the items may have become stale, or a newer product may have been launched that the customers might prefer to purchase. This can negatively influence your revenue. A second type of unquantifiable cost is customer dissatisfaction. Holding inventory can result in damaged or obsolete products. If a customer were to receive an old or expired product it could damage the image of the business. A third example of a cost that is difficult to measure is the lack of flexibility.
When a business holds too much inventory, it may struggle to adapt to market trends and technological advancements, which will reduce its flexibility and innovation. All of these would result in the loss of customers, thereby affecting profits.

Why is Calculating Carrying Costs Crucial?

Given the significant ways that inventory carrying costs can impact the financial health of a business, it is crucial to monitor and calculate these costs regularly. In doing so you can achieve three very important things. Firstly, you can optimize inventory levels, because knowing the carrying costs will help you determine the optimal inventory level and order quantity, thereby minimizing the total inventory cost. Secondly, you can improve inventory management. Knowing the inventory carrying cost will help you improve the various aspects of inventory management such as inventory replenishment, inventory classification, and inventory valuation. Thirdly, knowing your inventory carrying costs can significantly enhance financial performance. It will show you where you can reduce costs, which will enhance efficiency, profitability, liquidity, and cash flow, which are all financial performance indicators. This will make the company more competitive in the market.

Factors Influencing Carrying Costs

There are several internal and external factors that can influence inventory carrying costs. Let's discuss the four largest factors in more detail.

Safety Stock

Safety stock is the reserve or buffer inventory a company keeps to prevent stockouts in times of unexpected higher demand. A business needs to balance the trade-off between the extra costs of carrying more inventory and the risk of losing sales and customers if they run out of stock. This will help them determine an optimal safety stock level, while also minimizing total inventory costs. The second factor is cyclical or seasonal demand.
Cyclical or Seasonal Demand

Demand fluctuates considerably due to the weather, the holidays, and similar seasonal effects. During these times inventory levels need to be adjusted to accommodate the increased demand, which causes increased inventory carrying costs. During quieter periods there is less inventory demand, which means decreased carrying costs. It is important to forecast the seasonal or cyclical demand accurately to avoid unnecessary holding costs, as well as stockouts.

In-transit Inventory

The third factor is in-transit inventory. Also known as pipeline inventory, this refers to any inventory that is in the process of being transported, for example from the supplier to the warehouse, or from the warehouse to the customer. A company needs to balance the risk and the administrative and operational costs of more frequent shipments against the benefit of a shorter lead time. It must also select the optimal frequency, transportation mode, and route in order to keep costs to a minimum. And lastly, we have dead or obsolete inventory.

Dead or Obsolete Inventory

Also known as unsalable inventory, this refers to inventory that is no longer sellable, due to product defects, changes in customer preferences, or technological advancements. This inventory takes up valuable space while tying up capital, and still requires administration, all of which increases costs. It also reduces inventory turnover, so it is important for a company to write off obsolete inventory and maintain a fresh and updated stock.

Strategies for Reducing Carrying Costs

The thought of all these costs can be quite intimidating, but fret not, as there are many available strategies that a company can implement to reduce inventory carrying costs, by minimizing inventory and increasing inventory turnover!
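The techniques that follow all balance ordering effort against holding cost in some way. The classic formal version of that trade-off is the economic order quantity (EOQ) model, which this article mentions later in connection with inventory software. A minimal Python sketch of the textbook formula follows; the demand and cost figures are made up for illustration:

```python
import math

def eoq(annual_demand: float, cost_per_order: float, holding_cost_per_unit: float) -> float:
    """Classic economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit)

# Hypothetical figures: 12,000 units/year demand, $50 per order,
# $4.30 holding cost per unit per year.
q = eoq(12_000, 50, 4.30)
print(round(q))  # 528
```

Ordering roughly this quantity per order minimizes the sum of annual ordering and holding costs under the model's (strong) assumptions of constant demand and costs.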
Let us first cover the tips and techniques one can employ to help minimize inventory. Just-in-Time Inventory (JIT) is a technique that helps reduce inventory carrying costs by only ordering and receiving inventory when it's needed. While this reduces operational costs and uses less space, it also increases the risk of stockouts and supply uncertainties. For this reason, the JIT method would only be suitable if you have a really good level of coordination and cooperation with your suppliers. Your forecasting and inventory records would also need to be very accurate. Vendor-managed Inventory (VMI) is a technique where the duty of managing inventory is transferred from the buyer to the supplier. VMI helps reduce inventory carrying costs for the buyer by eliminating ordering, receiving, and tracking costs. The buyer has to be able to trust that the supplier will optimize the inventory level as well as the replenishment frequency. This technique requires clear information sharing as well as consistent performance measures. Drop-shipping is a technique that differs completely from the others, in that the buyer holds no inventory at all. This is because the supplier ships directly to the customer on the buyer's behalf. Having no inventory carrying costs is a huge saving, but the downside of this technique is that the buyer has very little control over the product's quality and delivery. This means that the buyer will have to rely heavily on the service and performance of the supplier. However, in order to lower one's inventory carrying costs, reducing the amount of one's inventory is not the only mechanism one can put to use. Speeding up inventory turnover time is another strategy used to reduce inventory carrying costs, with several ways to increase the turnover speed! Demand Management is a method of deliberately shifting the demand for a product or service.
A company can use promotions or pricing to stimulate demand, thereby increasing sales volumes and speeding up inventory turnover. It can also use aggregation techniques to reduce the demand for its products if there are supply shortages, which helps it avoid disappointing customers. Supply Chain Integration can speed up inventory turnover by aligning all the partners in the supply chain through an integrated flow of information, materials, and finances. Using radio frequency identification (RFID) could improve the information flow, cross-docking consolidation could improve the material flow, and payment terms or discounts could improve the financial flow. Read more on integrated supply chains. Lean Manufacturing is a philosophy aimed at reducing waste and increasing value in the production process. This is achieved by reducing defects, transportation, processing, and overproduction. Lean manufacturing can speed up inventory turnover in three ways: improving quality using the 5S method, improving efficiency using value stream mapping, and improving flexibility using rapid prototyping. And, of course, this section would not be complete without covering the redesign and reuse of warehousing space in order to optimize the space used. Warehousing is a big contributor to inventory carrying costs, so it makes sense to optimize the space in the warehouse. There are two main ways to do this. Warehouse Layout can affect the productivity of a warehouse, as it influences the accessibility and utilization of the space. The layout of everything from the aisles and racks to the forklift routes is very important. Accessibility can be improved by using the S-shaped or fishbone layout, utilization can be optimized by using fixed or dynamic slotting, and productivity can be improved by implementing class-based storage or ABC analysis.
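ABC analysis, mentioned above as a slotting aid, ranks items by their share of total annual value so that the highest-value items get the most attention and the most accessible locations. A toy Python sketch follows; the item data is invented, and the 80%/95% cut points are a common convention rather than a fixed rule:

```python
# Toy ABC classification: rank items by annual value, then bucket them
# by cumulative share of total value. Item data is made up; the
# 80%/95% thresholds are a common convention, not a standard.

def abc_classify(annual_value: dict[str, float]) -> dict[str, str]:
    total = sum(annual_value.values())
    ranked = sorted(annual_value, key=annual_value.get, reverse=True)
    classes, cumulative = {}, 0.0
    for item in ranked:
        cumulative += annual_value[item] / total
        classes[item] = "A" if cumulative <= 0.80 else "B" if cumulative <= 0.95 else "C"
    return classes

items = {"widget": 70_000, "gadget": 16_000, "gizmo": 8_000, "doohickey": 6_000}
print(abc_classify(items))
```

Class-based storage then places A items in the fastest-to-reach slots, B items next, and C items in the least accessible space.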
Warehouse Automation can reduce costs by reducing staff, error costs, and equipment costs. With the use of technology and software, all operations and processes can be automated. Unfortunately, this technology comes with a hefty price tag covering the initial investment, maintenance, and integration costs. After weighing up the costs and benefits, a company that wants to invest could use systems such as an automated storage and retrieval system (ASRS) or robotic process automation (RPA). In our modern age, technology has proven its use and ability in many facets of business management, and inventory carrying cost is no exception. Below are two examples of software applications that companies can employ to aid in their inventory management. Inventory Management Systems make it possible to manage and control everything from order quantity to safety stock, while simultaneously optimizing inventory decisions and reducing errors. Together, these reduce inventory carrying costs and improve visibility and accuracy. Some examples of inventory management systems are enterprise resource planning (ERP) and material requirements planning (MRP) systems. Warehouse Management Systems are software focused on controlling every aspect of the warehouse itself, managing everything from receiving, packing, and shipping to optimizing the layout and improving efficiency. Examples of warehouse management systems are the warehouse control system (WCS) and the warehouse execution system (WES). Suppliers provide a business with everything it needs, including raw materials and finished goods. Negotiating the terms of the supply contract can reduce inventory carrying costs: a business could negotiate to lower the price, shorten the delivery time, or extend the payment period. Customers are any person or business that purchases your goods, and negotiating with customers can also reduce your inventory carrying costs.
Examples of this would be selling slightly damaged goods at a reduced price, as customers are often happy to buy something of reduced quality if it costs them less money. A business could also ask a customer to pay over a shorter period of time by offering a slight discount in return. Demand Planning software helps ensure you only hold the amount of stock needed to effectively meet customer demand. These systems use advanced AI and machine learning techniques to determine demand from sales history. Read more on these systems in our AI demand forecasting article.

Common Mistakes in Carrying Cost Management

Due to the complexity involved in inventory carrying cost management, there are several mistakes that can be made. These mistakes end up increasing your inventory carrying costs and reducing the overall performance of your inventory. One of the most common mistakes arises from obsolete methods and improper tool use. Examples of this include using paper records and doing manual calculations. Using these types of methods and tools can result in poor inventory decisions and high inventory carrying costs, simply because they are prone to unnecessary errors and inaccuracies. Thankfully, it is easy to avoid this mistake by using modern methods and tools that can provide accurate and timely inventory data. Software applications and barcode scanners are two examples that can help you reduce your inventory carrying costs. Our next example relates to incorrect demand forecasting. If a company uses forecasting techniques that rely solely on historical data or guesswork, its projections of future demand will be unrealistic and inaccurate. Having unreliable forecasts leads a company to make inventory decisions that result in stockouts or excess inventory, which increases inventory carrying costs. To prevent your company from suffering this fate, you need to ensure that appropriate demand forecasting techniques are being used.
Statistical methods, market research, and customer feedback are all techniques that can help reduce inventory carrying costs through reliable forecasts. Misunderstanding market trends is an additional mistake to look out for. Market trends and customer preferences are unpredictable, and any rapid changes that occur can result in obsolete inventory and high inventory carrying costs. Things that influence customer demand can range from the economic cycle to the weather. Therefore, it is vital that a company does not misunderstand how market trends impact demand and the value of its product. By conducting regular market analysis and customer surveys, a company can gain valuable insights to guide its inventory decisions and help it reduce its inventory carrying costs. Furthermore, it is important to look out for errors involving inefficient inventory management and order fulfillment processes. If a company has low service levels, long lead times, poor quality control, and high order sizes, then there will undoubtedly be low inventory turnover and high inventory carrying costs. Customer dissatisfaction will also become a major concern. For this reason, a company must endeavor to improve its inventory management and order fulfillment processes through creative problem-solving and the use of inventory management software. Once inventory turnover and performance have been corrected, inventory carrying costs will be lowered. But how exactly does inventory management software help to lower inventory carrying costs?

The Role of Inventory Management Software in Reducing Costs

Inventory management software is an incredibly beneficial tool that can be used to improve inventory performance, thereby reducing inventory carrying costs. To be specific, it provides three main advantages to inventory management. Firstly, it enhances data accuracy and visibility. It can collect, store, and analyze any inventory-related data.
This includes the level of inventory, its current location, and its value. The software can also be linked to other locations and systems, such as a POS or ERP system. By compiling and sharing this inventory data, accuracy and visibility are increased, enabling a company to avoid errors that could result in over- or understocking or misplacement of inventory. Secondly, it can provide support that optimizes decision-making. The software has access to many models and methods, including ABC analysis, the FIFO method, and the EOQ model. It can employ them to calculate things like the optimal inventory level, safety stock, reorder point, and inventory value. A company can then use these results to make decisions that maximize inventory performance and profitability while reducing inventory carrying costs. Thirdly, the software can improve the efficiency of inventory management by automating various processes. For example, it could be used to automate the inventory ordering, receiving, and storing processes. Aside from automation, it can be used to boost efficiency by communicating with inventory equipment (e.g., barcode scanners or RFID tags) to expedite inventory operations. When software is used to automate these processes it reduces error costs, the need for human labor, and the amount of space required by a business. Thus, software improves the productivity of the business and the inventory performance. Who would not be satisfied with all these benefits?

Conclusions and Final Thoughts

It is of utmost importance to identify where you might be incurring additional inventory carrying costs, and how they arise, so you can nip the issue in the bud. There are various techniques to minimize inventory (and thus its carrying costs), to improve warehousing and inventory management systems, and to optimize, and therefore speed up, inventory turnover time.
Familiarise yourself with the common mistakes that lead companies to create higher inventory carrying costs for themselves, so you are able to identify these errors before they become a major issue. Learn how AI-driven demand planning software like Intuendi can help reduce inventory carrying costs.

Written by Tanique Allers, Content Marketing Specialist
https://www.ennonline.net/fex/70/en/maternal-mid-upper-arm-circumference-still-relevant-identify-adverse-birth-outcomes
Maternal mid-upper arm circumference: Still relevant to identify adverse birth outcomes in humanitarian contexts? | ENN

Published: 11 September 2023
By: Sonia Kapil, Mija Ververs

Read this article in French here

Sonia Kapil is a former Graduate Researcher at Emory University, Rollins School of Public Health. Mija Ververs is a Senior Associate at the Center for Humanitarian Health, Johns Hopkins Bloomberg School of Public Health.

What we know: Maternal mid-upper-arm circumference (MUAC) was found to be a reliable indicator for risk of subsequent low birth weight (LBW) following a comprehensive review of anthropometric indicators in 2013. However, optimal MUAC cut-off thresholds to identify poor birth and maternal outcomes have remained contentious, with different thresholds being recommended.

What this adds: This scoping review analyses evidence after 2012 to determine whether more recent data sheds further light on optimal MUAC cut-off thresholds to identify those at risk of negative outcomes. The findings highlight that a MUAC cut-off threshold of <23 cm is predictive for identifying pregnant women at risk of adverse birth outcomes, particularly LBW.
Background

Establishing a practical anthropometric measurement, with an appropriate cut-off threshold, to identify pregnant women as undernourished in humanitarian settings can assist in the implementation of necessary interventions to avoid unfavourable maternal and birth outcomes. A key gap in maternal nutrition is that there is currently no agreed-upon standard set in the Sphere Handbook that defines maternal acute undernutrition through an optimal, context-specific MUAC cut-off point (Sphere Association, 2018). In 2013, Médecins Sans Frontières Switzerland undertook an extensive literature review (Ververs et al, 2013) of articles published between January 1995 and September 2012 exploring anthropometric indicators that are able to identify pregnant women as acutely undernourished and at risk for adverse outcomes – including maternal mortality, low birth weight (LBW), intra-uterine growth restriction (IUGR), pre-term birth (PTB), small-for-gestational-age (SGA), and stunting at birth. The review concluded that maternal MUAC can be used as a reliable indicator of risk of LBW. Maternal MUAC was identified as the preferential indicator, as opposed to body mass index, maternal weight for gestational age, maternal weight gain, or maternal stature. Maternal MUAC has a strong association with birth weight, is a simple measurement to take, particularly in humanitarian contexts, and is independent of gestational age. The proposed conservative cut-off value to enrol pregnant women in nutritional programmes, most frequently supplementary feeding programmes, was a MUAC of <23 cm. This scoping review analyses studies published after September 2012, focusing on the specific MUAC cut-off thresholds used to identify adverse birth and maternal outcomes, in order to understand whether a MUAC of <23 cm should be used rather than the MUAC of <21 cm applied in some humanitarian nutrition programmes.
Methodology

Data were abstracted from a comprehensive literature search conducted primarily in the PubMed and Embase electronic databases on literature published between September 2012 and October 2022. Additional eligible studies were sought after reviewing the reference lists of identified articles. The focus was on MUAC cut-off thresholds to identify risk of adverse birth and/or maternal outcomes (outcomes are listed in Table 1). PRISMA guidelines facilitated the preparation of this research protocol. Inclusion criteria were: availability in full text, peer reviewed, in English, and focused on adult maternal anthropometry. This review was not specifically restricted to studies conducted in low- and middle-income countries or protracted humanitarian settings. Since individual studies were not comparable and different approaches were taken for study analyses, a meta-analysis was not conducted. Data were synthesised based on the results of each individual study, and quantitative results were extracted and organised in thematic tables. Duplicate publications and studies analysing the same study populations for similar outcomes were excluded. Additional exclusions consisted of results involving: twins, triplets, adolescents, substance abuse, anaemia, cigarette smoking, in-vitro fertilisation, drugs and hormones, disease, and obesity. The quality of studies was assessed using an adaptation of the Newcastle-Ottawa Quality Assessment Scale and the Joanna Briggs Institute Critical Appraisal Checklist.

Table 1: Definitions of outcome measures used

Findings

A total of 5,099 articles were initially identified. This was narrowed down to seven suitable articles – which were categorised as either 'good' or 'fair' quality – after multiple stages of review (Figure 1).
The studies were conducted in Bangladesh, Cambodia, Ethiopia, Kenya, and India and included women delivering in hospitals (three studies), attending antenatal services (two studies), or part of nutrition interventions (two studies). Three studies were cross-sectional in nature, two were cohort studies, one was a randomised control trial, and one was an unmatched case-control study. According to the adapted quality assessments based on specific study type, all included studies were deemed of good or fair quality, with none being categorised as poor quality.

Figure 1: Article identification and inclusion flowchart

The seven studies (Table 2) demonstrate the specific maternal MUAC cut-off threshold values and the corresponding birth outcomes of LBW, IUGR, SGA, and stunted at birth; sufficient data were not provided on maternal outcomes. Five of these studies indicate a MUAC of <23 cm as strongly predictive for identifying pregnant women as at risk for at least one of these adverse birth outcomes, while one study uses a MUAC cut-off value of ≤23 cm and another study used <22 cm. None of these cut-off values are found to be associated with gestational age. Five of the studies looked at the adverse birth outcome of LBW, with evidence suggesting that a MUAC <23 cm was significantly associated with LBW. One study explored the birth outcome IUGR, and found a MUAC <23 cm measured at delivery to be associated with IUGR. Another study explored SGA and found no significant association between a MUAC <23 cm and SGA. Being stunted at birth was explored in two studies and a MUAC of <23 cm measured during the third trimester was associated with being stunted at birth, although this result was not statistically significant. No studies looked at outcomes such as PTB, maternal morbidity, and maternal mortality.
Table 2: Studies post-September 2012 using maternal MUAC to identify adverse birth outcomes (statistically significant values are in bold, p<0.05)

Discussion

This scoping review explored recently published literature concerning appropriate MUAC cut-off thresholds that can identify pregnant women as undernourished and at risk for adverse birth outcomes. Most of the studies identified utilised a MUAC threshold of <23 cm to identify pregnant women at risk for the following birth outcomes: LBW, IUGR, SGA, and stunted at birth. Results on maternal morbidity and maternal mortality were not sufficiently available. All studies in Table 2 determined maternal MUAC cut-off values to be independent of gestational age, which is particularly important for humanitarian contexts since gestational age is often unknown for pregnant women in such emergency contexts. A more recent cross-sectional study from Ethiopia found MUAC <23 cm to be significantly associated with adverse birth outcomes (adjusted OR= 5.93, 95% CI: 3.49, 10.08) (Degno et al, 2021). However, this study was not included in this scoping review because it broadly references 'adverse birth outcomes' among study participants, rather than specifying MUAC to be associated with individual birth outcomes. The study from India (Vasundhara et al, 2020) that used a maternal MUAC cut-off value of ≤23 cm did not demonstrate any significant associations with LBW or SGA. Here, the usage of the 'less than or equal to' symbol (≤) is unclear as it leaves the specific threshold open for interpretation. Since measurements were not completed in millimetres but rather in centimetres, it is unclear whether values between 23.1 cm and 23.9 cm (e.g., 23.4 cm) were included in this threshold criterion.
The Sphere Handbook states that a MUAC <21 cm should be considered an appropriate cut-off for the selection of pregnant women at risk during emergencies (it also states that a MUAC of <23 cm indicates a moderate risk among pregnant women, although moderate risk is not defined (Sphere Association, 2018)). The findings from this review do not support that statement: since a clear risk of low birth weight has been demonstrated at MUAC <23 cm, with potential associations with other birth outcomes, we do not agree that <21 cm is an appropriate cut-off value to identify pregnant women at risk during emergencies. Though both the previous Médecins Sans Frontières study from 2013 and this scoping review suggest using MUAC <23 cm for pregnant women as an indicator of risk for adverse birth outcomes in humanitarian contexts, we do not have sufficient information to determine whether MUAC can also be used as an indicator predicting the potential benefit (e.g., improved foetal growth) of a certain nutritional intervention. Additional information is required on which nutritional interventions with enrolment based on MUAC can sufficiently avert adverse risks. Lastly, there is a substantial focus in the studies identified on MUAC and adverse infant or foetal outcomes. However, there is a significant need for more information on adverse maternal outcomes.

Limitations

Limitations of the current literature include the lack of research on maternal outcomes. Additionally, these studies lack focus on humanitarian emergencies or conflict settings, although their findings are still applicable to such contexts. Furthermore, only seven studies were included in the final results, meaning that conclusions need to be met with a certain degree of caution. This further highlights the need for more research on this important topic.
Limitations of this scoping review include potential bias due to having only one quality assessment reviewer; lack of comparability between studies due to variations in sample size, methodology, and context; exclusion of studies not available in English that may contain valuable results; and the omission of non-peer-reviewed grey literature, which could have provided noteworthy data.

Recommendation

A recommendation for future research would be to study the enrolment of pregnant women in nutritional interventions based on the use of MUAC <23 cm in efforts to reduce the risk of adverse outcomes. Additionally, future studies should focus not solely on adverse birth outcomes but also on maternal outcomes.

Conclusion

The currently available research supports maternal MUAC as the most appropriate anthropometric measurement and rapid assessment tool for identifying pregnant women as acutely undernourished and potentially in need of nutritional intervention to prevent adverse birth outcomes. This is particularly noteworthy in resource-limited settings, such as protracted humanitarian settings or emergencies. An advantage of measuring MUAC is that it requires minimal training and is reliable in identifying nutritional status. To date, no universal absolute cut-off value has been identified; however, this review supports <23 cm as the specific cut-off threshold for maternal MUAC in this context.

For more information, please contact Sonia Kapil at [email protected]

Editor's note

The importance of developing guidance to treat maternal undernutrition in humanitarian contexts for both improved birth and maternal outcomes is currently being explored within a Women's Nutrition Taskforce established within the Global Nutrition Cluster Technical Alliance. The results of this review are being used by the Taskforce to inform the development of operational guidance for women's nutrition in humanitarian contexts.
References Adane T & Dachew BA (2018) Low birth weight and associated factors among singleton neonates born at Felege Hiwot referral hospital, North West Ethiopia. African Health Sciences, 18, 4, 1204–1213. Degno S, Lencha B, Aman R et al. (2021) Adverse birth outcomes and associated factors among mothers who delivered in Bale zone hospitals, Oromia Region, Southeast Ethiopia. The Journal of International Medical Research, 49, 5. Haque MA, Choudhury N, Farzana FD et al. (2021) Determinants of maternal low mid-upper arm circumference and its association with child nutritional status among poor and very poor households in rural Bangladesh. Maternal & Child Nutrition, 17, 4, e13217. Kpewou DE, Poirot E, Berger J et al. (2020) Maternal mid-upper arm circumference during pregnancy and linear growth among Cambodian infants during the first months of life. Maternal & Child Nutrition, 16, S2, e12951. Nyamasege CK, Kimani-Murage EW, Wanjohi M et al. (2019) Determinants of low birth weight in the context of maternal nutrition education in urban informal settlements, Kenya. Journal of Developmental Origins of Health and Disease, 10, 2, 237–245. Siyoum M & Melese T (2019) Factors associated with low birth weight among babies born at Hawassa University Comprehensive Specialized Hospital, Hawassa, Ethiopia. Italian Journal of Pediatrics, 45, 1, 48. Sphere Association (2018) The Sphere Handbook: Humanitarian Charter and Minimum Standards in Humanitarian Response (4th ed.). Geneva, Switzerland. Tesfa D, Tadege M, Digssie A et al. (2020) Intrauterine growth restriction and its associated factors in South Gondar zone hospitals, Northwest Ethiopia. Archives of Public Health, 78, 89. Vasundhara D, Hemalatha R, Sharma S et al. (2020) Maternal MUAC and fetal outcome in an Indian tertiary care hospital: A prospective observational study. Maternal & Child Nutrition, 16, 2, e12902. Ververs MT, Antierens A, Sackl A et al. 
(2013) Which anthropometric indicators identify a pregnant woman as acutely malnourished and predict adverse birth outcomes in the humanitarian context? PLoS Currents, 5.

Published 11 September 2023 by Sonia Kapil and Mija Ververs. Field Exchange 70, Original articles.
190627
https://en.wikipedia.org/wiki/Zonal_spherical_harmonics
Zonal spherical harmonics

From Wikipedia, the free encyclopedia

In the mathematical study of rotational symmetry, the zonal spherical harmonics are special spherical harmonics that are invariant under rotation about a particular fixed axis. The zonal spherical functions are a broad extension of the notion of zonal spherical harmonics to allow for a more general symmetry group.

On the two-dimensional sphere, the unique zonal spherical harmonic of degree ℓ invariant under rotations fixing the north pole is represented in spherical coordinates by

Z^{(\ell)}(\theta,\phi) = \frac{2\ell+1}{4\pi} P_\ell(\cos\theta)

where P_ℓ is the Legendre polynomial of degree ℓ, normalized so that P_ℓ(1) = 1.
The generic zonal spherical harmonic of degree ℓ is denoted by Z_x^{(ℓ)}(y), where x is a point on the sphere representing the fixed axis and y is the variable of the function. This can be obtained by rotation of the basic zonal harmonic Z^{(ℓ)}(θ,φ).

In n-dimensional Euclidean space, zonal spherical harmonics are defined as follows. Let x be a point on the (n−1)-sphere. Define Z_x^{(ℓ)} to be the dual representation of the linear functional

P \mapsto P(\mathbf{x})

in the finite-dimensional Hilbert space H_ℓ of spherical harmonics of degree ℓ, with respect to the uniform measure on the sphere S^{n−1}. In other words, we have a reproducing kernel:

Y(\mathbf{x}) = \int_{S^{n-1}} Z_{\mathbf{x}}^{(\ell)}(\mathbf{y})\, Y(\mathbf{y})\, d\Omega(\mathbf{y}), \quad \forall\, Y \in \mathcal{H}_\ell

where Ω is the uniform measure on S^{n−1}.

Relationship with harmonic potentials

The zonal harmonics appear naturally as coefficients of the Poisson kernel for the unit ball in R^n: for x and y unit vectors,

\frac{1}{\omega_{n-1}} \frac{1-r^2}{|\mathbf{x} - r\mathbf{y}|^n} = \sum_{k=0}^{\infty} r^k Z_{\mathbf{x}}^{(k)}(\mathbf{y}),

where ω_{n−1} is the surface area of the (n−1)-dimensional sphere.
They are also related to the Newton kernel via

\frac{1}{|\mathbf{x} - \mathbf{y}|^{n-2}} = \sum_{k=0}^{\infty} c_{n,k} \frac{|\mathbf{x}|^k}{|\mathbf{y}|^{n+k-2}} Z_{\mathbf{x}/|\mathbf{x}|}^{(k)}(\mathbf{y}/|\mathbf{y}|)

where x, y ∈ R^n and the constants c_{n,k} are given by

c_{n,k} = \frac{1}{\omega_{n-1}} \frac{2k+n-2}{n-2}.

The coefficients of the Taylor series of the Newton kernel (with suitable normalization) are precisely the ultraspherical polynomials. Thus, the zonal spherical harmonics can be expressed as follows. If α = (n−2)/2, then

Z_{\mathbf{x}}^{(\ell)}(\mathbf{y}) = \frac{n+2\ell-2}{n-2} C_\ell^{(\alpha)}(\mathbf{x} \cdot \mathbf{y})

where C_ℓ^{(α)} is the ultraspherical polynomial of degree ℓ. The two-dimensional case

Z^{(\ell)}(\theta,\phi) = \frac{2\ell+1}{4\pi} P_\ell(\cos\theta)

is a special case of this, since the Legendre polynomials are the ultraspherical polynomials with α = 1/2.

Properties

The zonal spherical harmonics are rotationally invariant, meaning that

Z_{R\mathbf{x}}^{(\ell)}(R\mathbf{y}) = Z_{\mathbf{x}}^{(\ell)}(\mathbf{y})

for every orthogonal transformation R. Conversely, any function f(x, y) on S^{n−1} × S^{n−1} that is a spherical harmonic in y for each fixed x, and that satisfies this invariance property, is a constant multiple of the degree-ℓ zonal harmonic.
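Since the 2-sphere zonal harmonic is just a scaled Legendre polynomial, the reproducing-kernel property can be checked numerically: integrating Z^{(ℓ)} against itself over the sphere must return its value at the pole, (2ℓ+1)/(4π). A minimal sketch in Python (the recurrence and quadrature choices are mine, not from the article):

```python
import math

def legendre(l, x):
    """Legendre polynomial P_l(x) via the Bonnet recurrence."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, l):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def zonal(l, theta):
    """Zonal spherical harmonic Z^(l) on the 2-sphere (depends only on theta)."""
    return (2 * l + 1) / (4 * math.pi) * legendre(l, math.cos(theta))

def integrate_sphere(f, n=2000):
    """Integrate f(theta) over S^2, dOmega = 2*pi*sin(theta)*dtheta (Simpson's rule)."""
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        theta = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * f(theta) * 2 * math.pi * math.sin(theta)
    return total * h / 3

l = 3
# Reproducing-kernel check: taking Y = Z^(l) in the reproducing formula
# should return Z^(l) at the pole, i.e. (2l+1)/(4*pi) since P_l(1) = 1.
self_integral = integrate_sphere(lambda t: zonal(l, t) ** 2)
print(self_integral, (2 * l + 1) / (4 * math.pi))
```

The same routine also confirms that the integral of Z^{(ℓ)} alone vanishes for ℓ ≥ 1, as it must, since Z^{(ℓ)} is orthogonal to the constant harmonic.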
If Y_1, ..., Y_d is an orthonormal basis of H_ℓ, then

Z_{\mathbf{x}}^{(\ell)}(\mathbf{y}) = \sum_{k=1}^{d} Y_k(\mathbf{x})\, \overline{Y_k(\mathbf{y})}.

Evaluating at x = y gives

Z_{\mathbf{x}}^{(\ell)}(\mathbf{x}) = \omega_{n-1}^{-1} \dim \mathcal{H}_\ell.

This page was last edited on 5 March 2025, at 06:35 (UTC).
190628
https://www.youtube.com/watch?v=u1qqVuYd3a0
Problem 9.12 - E&M Waves in Vacuum, Energy & Momentum in E&M Waves: Introduction to Electrodynamics. Curious About Science, 4440 subscribers, 18 likes, 1372 views. Posted: 31 Aug 2023.

◉ Note that this only works if the two waves have the same k and ω, but they need not have the same amplitude or phase. See how the trig product identities work their way in? Messy, but the averages make this work. ◉ Nice results that will be used later.

Reference: Griffiths, David J. "Chapter 9 Electromagnetic Waves." Introduction to Electrodynamics, Pearson, Boston, MA, 2014, pp. 382–435.

Transcript: All right, we've had a couple of long questions, so let's get a quick one in. The statement: in complex notation there is a clever device for finding the time average of a product. Suppose that f(r, t) = A cos(k·r − ωt + δa) and g(r, t) = B cos(k·r − ωt + δb). Show that the time average ⟨fg⟩ is equal to one half of the real part of f̃ g̃*, where the star denotes the complex conjugate. We see that the time average is 1/T times the integral from 0 to T of the product. Taking the product, the amplitudes A and B come out front; we use trig sum-and-difference identities to combine the cosines and then split up the integrals. Notice that the integral with respect to t of the oscillating cosine term averages to zero, and we are left with cos(δa − δb), which has no time dependence, so its integral is just T − 0. The T from that term
cancels with the 1/T out front, and we see that the time average is (AB/2) cos(δa − δb): the one half, the AB from the product, and the cosine of the difference of the phases. So we're set up pretty well; now we just have to verify the complex-notation side. In complex notation we have f̃ = A e^{i(k·r − ωt + δa)} and g̃ = B e^{i(k·r − ωt + δb)}. We need to verify that one half the real part of f̃ g̃* equals (AB/2) cos(δa − δb). Take the complex conjugate of g̃, so the positive i becomes negative i; A and B multiply together, and we can combine the exponentials. Separating out the imaginary exponent, the i k·r cancels with the −i k·r, and the −iωt cancels with the +iωt; all that's left is the common factor i times (δa − δb). Breaking this complex exponential into sine and cosine, and since we only want the real part, we keep only the cosine. Sure enough, the product is (AB/2) cos(δa − δb). That's a really cool trick, and it definitely will be used again in later problems.
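The identity being verified, ⟨fg⟩ = ½ Re(f̃ g̃*), is easy to check numerically. The sketch below uses arbitrary amplitudes, phases and frequency (my choices, not values from the video), averages the real product over one period at a fixed point (taking k·r = 0), and compares with the complex shortcut:

```python
import cmath
import math

# Arbitrary illustrative parameters (not from the video)
a, b = 2.0, 3.0          # amplitudes A, B
da, db = 0.7, -0.4       # phase constants delta_a, delta_b
omega = 5.0              # common angular frequency
T = 2 * math.pi / omega  # one period

def f(t):
    return a * math.cos(-omega * t + da)  # k.r taken as 0

def g(t):
    return b * math.cos(-omega * t + db)

# Time average of the real product over one period (midpoint rule)
n = 20000
avg = sum(f((i + 0.5) * T / n) * g((i + 0.5) * T / n) for i in range(n)) / n

# Complex-notation shortcut: <fg> = (1/2) Re(f~ g~*)
f_tilde = a * cmath.exp(1j * da)
g_tilde = b * cmath.exp(1j * db)
shortcut = 0.5 * (f_tilde * g_tilde.conjugate()).real

print(avg, shortcut, 0.5 * a * b * math.cos(da - db))
```

All three numbers agree: the brute-force time average, the complex shortcut, and the closed form (AB/2) cos(δa − δb). Note that the e^{±iωt} factors cancel in f̃ g̃*, which is why only the amplitudes and phase constants are needed.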
190629
https://www.youtube.com/watch?v=bLhxQIdbWW8
What is Integration by Parts - How to do Integration by Parts. Learn Math Tutorials, 139000 subscribers, 13044 likes, 1157704 views. Posted: 20 Feb 2013. This tutorial demonstrates how to do integration by parts.

Transcript: Hey everybody, this is Paul. In this tutorial I'm going to be doing an example of integration by parts. Integration by parts is just another way that we can use to evaluate an integral, and this method is typically taught around the beginning of a second-semester calculus course. This is the general form of integration by parts, and if you don't know what it means, don't worry about it; I'll be translating it through an example. For our example we're going to look at the integral of x e^x with respect to x, and we want to use integration by parts to evaluate it. We have to label part of the integrand u and the other part dv. Usually we want to pick, for our u, the part that will turn into a constant if we take the derivative enough times. The derivative of x with respect to x is the constant 1, so we'll go ahead and pick that: u = x. Since we chose x to be our u, the rest of the integrand has to be the dv, so dv = e^x dx. Now that we've got our u and our dv figured out, we also need to figure out what du is and what v is. We can find du by taking the derivative of u with respect to x: du/dx is simply 1, and if we multiply both sides by dx, this becomes du = dx. So we've defined what du is; now let's go figure
out what v is going to be. If we integrate both sides of dv = e^x dx, we have the integral of dv equal to the integral of e^x dx. The left side is simply v, and the right side is e^x plus some constant, but we don't need to worry about the constant right now, so v = e^x. Now we have all the parts we need. The integral equals uv, with u = x and v = e^x, minus the integral of v du, where v was e^x and du was dx. So this becomes x e^x minus the integral of e^x with respect to x, which is simply e^x, and then we add our constant term. We can simplify a little by factoring an e^x out of the first two terms, so this becomes e^x (x − 1) plus some constant. So we just solved the integral of x e^x with respect to x, and it turns out to be e^x multiplied by the quantity (x − 1), plus a constant. This is the solution, and we got it by using integration by parts. I hope this example helped you understand how to use integration by parts. I appreciate you watching, I hope you have an excellent day, and if you haven't already, don't forget to subscribe.
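The antiderivative found in the video, ∫ x e^x dx = e^x (x − 1) + C, can be sanity-checked by comparing a numerical integral against the difference of antiderivative values. A small sketch (the interval [0, 2] and the Simpson quadrature are my choices, not from the video):

```python
import math

def integrand(x):
    return x * math.exp(x)

def antiderivative(x):
    # Result of integration by parts: e^x * (x - 1) (constant of integration omitted)
    return math.exp(x) * (x - 1)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Fundamental theorem of calculus: both numbers should match.
numeric = simpson(integrand, 0.0, 2.0)
exact = antiderivative(2.0) - antiderivative(0.0)
print(numeric, exact)
```

For this interval the exact value is e^2(2 − 1) − e^0(0 − 1) = e^2 + 1, which the quadrature reproduces to many digits.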
190630
https://www.thermopedia.com/cn/content/990/
NUCLEATE BOILING

Kenning, D. B. R.

DOI: 10.1615/AtoZ.n.nucleate_boiling. Created: 2 February 2011. Last modified: 4 February 2011.

When boiling occurs on a solid surface at low superheat, bubbles can be seen to form repeatedly at preferred positions called nucleation sites. Nucleate boiling can occur in Pool Boiling and in Forced-Convective Boiling. The heat transfer coefficients are very high but, despite many years of research, empirical correlations for the coefficients have large error bands. Much of the difficulty arises from the sensitivity of nucleate boiling to the microgeometry of the surface on a micron length scale and to its wettability; it is difficult to find appropriate ways of quantifying these characteristics. There is still disagreement about the physical mechanisms by which the heat is transferred, so phenomenological models for nucleate boiling at present do no better, and often worse, than the empirical correlations. An empirical correlation of wide application has been given by Gorenflo (1991), based on the general scaling of fluid thermal and transport properties with reduced pressure p/pc and reduced temperature T/Tc (see Reduced Properties). Recent reviews of the voluminous research literature on mechanisms in boiling include those by Dhir (1990) and Fujita (1992). This article describes the features of nucleate boiling on which there is broad agreement and indicates the areas of disagreement and further development. The approach to modeling of nucleate boiling at low wall superheats has been to try to understand separately how many nucleation sites are active at a specified superheat, how bubbles grow and depart, and how they influence heat transfer.
We shall see that the processes are in fact linked, that wall superheat cannot be specified by a single value and that the flow conditions of the bulk liquid in pool boiling or convective boiling have some influence when nucleation sites are widely spaced in the so-called 'isolated-bubble' regime. First, however, we consider an idealized situation: the conditions for equilibrium of a small spherical vapor bubble of radius re in pure, uniformly superheated liquid and the consequences of departures from equilibrium. 'Superheat' and 'subcooling', which occur so frequently in the descriptions of boiling, are defined relative to the saturation temperature Tsat(p0) corresponding to the system pressure p0, being the condition for equilibrium between liquid and vapor at an interface of zero curvature, Figure 1. A spherical bubble of finite radius r has an interface of curvature 2/r and this has two effects: (1) for mechanical equilibrium of the bubble interface there must be an excess internal pressure of 2σ/re to resist the collapsing membrane stress caused by the surface tension σ; (2) the vapor pressure for a given interfacial temperature is decreased (Kelvin equation):

pg = psat(T) exp(−2σvM/(RTre))     (1)

where σ is surface tension, v the specific volume of the liquid, M the molecular weight and R the universal gas constant. Figure 1. Equilibrium at plane and curved interfaces. There is a similar effect with exponent of opposite sign for the vapor pressure in equilibrium with a droplet of liquid. The effect is negligible for radii greater than about 10 nm. From (1) and (2) the vapor pressure must be greater than p0 by 2σ/re and the interface must be superheated, Figures 1 and 2(a). In a uniformly superheated liquid maintained at constant pressure the equilibrium of the bubble is unstable against any disturbance. A decrease in radius leads to a requirement for a higher vapor pressure for equilibrium; this cannot be provided, so collapse continues.
An increase in radius leads to a requirement for a lower vapor pressure and therefore a lower interfacial superheat. The resulting temperature gradient from the bulk liquid to the interface drives the heat flow that provides the latent heat for continued growth, Figure 2(b). A radial pressure difference is required to drive the motion of the liquid but this declines as growth proceeds, and the interfacial temperature approaches the saturation temperature as 2σ/re becomes negligible. Then the rate of growth of the bubble is controlled by the rate of heat transfer, which can be modeled approximately by transient conduction in the liquid:

dr/dt ≈ λ1ΔT/(h1gρg√(πκ1t))     (2)

r(t) ≈ 2Ja√(κ1t/π),  Ja = ρ1c1ΔT/(ρgh1g)     (3)

where ΔT is the liquid superheat, λ1 is thermal conductivity of the liquid, κ1 thermal diffusivity of the liquid, h1g latent heat of evaporation, ρg density of the vapor, ρ1 density of the liquid, c1 specific heat capacity of the liquid, and Ja the Jakob number. Figure 2. Unstable equilibrium and growth of a bubble nucleus. In homogeneous nucleation the unstable nuclei from which growth commences are supposed to be formed by the random fluctuations in local energy in the superheated liquid. The number distribution of clusters of high-energy molecules (i.e., bubbles) depends on their work of formation, including a contribution by surface free energy (surface tension). Some of the clusters will be above the critical size for unstable equilibrium, some below. By combining the expression for cluster size distribution with a model for rate of growth or collapse, the net rate of bubble nucleation can be predicted, Skripov (1974), Blander and Katz (1975). The rate is extremely sensitive to temperature, increasing by many orders of magnitude over a very small range of temperature, so that an effective homogeneous nucleation temperature Tn can be calculated.
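To put numbers on the equilibrium condition, the excess pressure 2σ/re can be converted into an equilibrium superheat via the Clausius–Clapeyron relation, ΔT ≈ 2σTsat/(re ρg h1g). A sketch for water at atmospheric pressure (the property values are standard handbook figures, not taken from this article):

```python
# Water at 1 atm (approximate handbook values, not from the article)
sigma = 0.059    # surface tension at saturation, N/m
T_sat = 373.15   # saturation temperature, K
rho_g = 0.598    # vapour density, kg/m^3
h_lg = 2.257e6   # latent heat of evaporation, J/kg

def equilibrium_superheat(r_e):
    """Superheat sustaining a bubble of radius r_e in unstable equilibrium:
    Clausius-Clapeyron applied to the excess pressure 2*sigma/r_e."""
    return 2.0 * sigma * T_sat / (r_e * rho_g * h_lg)

for r_um in (1.0, 3.0, 10.0):
    dT = equilibrium_superheat(r_um * 1e-6)
    print(f"r_e = {r_um:4.1f} um -> equilibrium superheat ~ {dT:5.1f} K")
```

Cavities a few microns across give superheats of order 10 K, consistent with the inception superheats observed on real heated walls and far below the homogeneous nucleation values.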
Lienhard (1976) obtained an approximate generalization of the analyses in the form

Tn/Tc ≈ 0.905 + 0.095(Tsat/Tc)^8     (4)

For system pressures well below the critical pressure, the homogeneous nucleation temperature is approximately 0.91 Tc, whatever the value of p0, corresponding to very high superheats Tn − Tsat. These superheats can be achieved in very carefully controlled experiments, but generally bubbles nucleate at superheats far smaller than those predicted by homogeneous nucleation models, particularly at solid walls. When water boils at atmospheric pressure on a heated metal wall, bubbles appear at wall superheats of around 10 K, compared to the superheat of 216 K required for homogeneous nucleation. Similarly low superheats are required for the boiling of most other liquids on heated solid walls. Superheats approaching the homogeneous nucleation values can be achieved only by subjecting the liquid-wall system to prolonged periods of high pressure at low temperature (subcooling) or sometimes in the initiation of boiling of extremely well-wetting liquids such as fluorocarbons [Bar-Cohen (1992)]. Wetting can be quantified by measurement of the Contact Angle θ between the liquid-vapor interface and the surface of the solid, but the contact angle can exhibit hysteresis, its value depending on the direction and rate of motion of the contact line between liquid, vapor and solid, Figure 3.
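Lienhard's approximation is commonly quoted in the form Tn/Tc ≈ 0.905 + 0.095(Tsat/Tc)^8. A quick sketch for water (critical temperature 647.1 K, a handbook value) reproduces, to within a few kelvin, the 216 K homogeneous-nucleation superheat quoted above:

```python
# Homogeneous nucleation temperature estimate for water at 1 atm,
# using the commonly quoted Lienhard-type form (assumed here):
#   T_n / T_c ~ 0.905 + 0.095 * (T_sat / T_c)**8
T_c = 647.1     # critical temperature of water, K (handbook value)
T_sat = 373.15  # saturation temperature at 1 atm, K

T_n = T_c * (0.905 + 0.095 * (T_sat / T_c) ** 8)
superheat = T_n - T_sat
print(f"T_n ~ {T_n:.0f} K, superheat T_n - T_sat ~ {superheat:.0f} K")
```

Because (Tsat/Tc)^8 is small at pressures well below critical, Tn stays close to 0.905 Tc, which is the "approximately 0.91 Tc" behaviour described in the text.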
Instead bubbles form repeatedly from tiny reservoirs of continuously-maintained vapor. The processes inside such small cavities cannot be observed directly. Figure 3. Definition of contact angle θ; hysteresis. The supposed mechanisms by which liquid-vapor interfaces are stabilized are summarized in Figure 4. Most systems in which boiling occurs are initially filled with cold liquid, so the liquid-vapor interface must be stabilized when subcooled liquid first enters the cavity, i.e., when the vapor pressure pg is less than the system pressure p0. This requires reversal of the curvature of the interface. This could occur at a region in the cavity that is so poorly wetted that the local contact angle greatly exceeds 90° (Figure 4a). However, contact angles measured on large plane surfaces generally range from nearly zero for cryogenic and fluorocarbon liquids on clean metals to around 70° for water on poorly cleaned stainless steel. Reversal of interfacial curvature when θ is much less than 90° requires a re-entrant geometry (Figure 4b). The presence of trapped or dissolved noncondensible gas can have a large effect on the stability of the interface under subcooled conditions by increasing the total pressure in the reservoir of gas plus vapor so that it exceeds the system pressure; the effect may be time-dependent as soluble gas diffuses between the interface and the interior of the liquid. When the temperature of the wall surrounding the cavity is increased, the vapor pressure increases until the curvature of the liquid-vapor interface reverses again and the trapped vapor becomes unstable to growth by evaporation (Figure 4c). The excess pressure and the corresponding superheat for equilibrium may go through several local maxima before the vapor finally emerges from the cavity and 'nucleates' the growth of a visible bubble (Figure 4d). The highest of these superheats determines the wall superheat for the inception of bubble production.
Maintaining production may then be possible at a lower superheat that only has to overcome the local maximum at the mouth of the cavity, position 6 in Figure 4d. This model explains the sensitivity of the onset of boiling of well-wetting fluids to pre-boiling conditions and the hysteresis between boiling curves for increasing and decreasing heat flux (Figure 5). Figure 4. Nucleation at a wall cavity. Figure 5. Boiling curve hysteresis. As an embryo bubble emerges from a cavity it encounters a large negative temperature gradient in the liquid surrounding it, resulting from the efficient heat transfer driven by the motion of previous bubbles produced by the cavity itself or by adjacent nucleation sites. This gradient has been modeled by transient or steady conduction into the liquid. It reduces the effective superheat at the interface of a spherical bubble at the mouth of a cavity (Figure 6) and limits the size range of cavities that can be active. When combined with information about the size distribution of cavities actually present on a surface (which may be difficult to obtain) and the further assumption that the wall superheat is uniform, this model should define the number of nucleation sites active at any superheat. Increasing the superheat should activate progressively smaller cavities, causing the steep gradient of the nucleate boiling curve. However, the model is oversimplified and does not take into account the inherent patchiness of nucleate boiling heat transfer which, in some circumstances, can lead to large local variations from the mean value of the wall superheat [Kenning (1992)]. Figure 6. Active size range of nucleation sites. Cavities which are stable traps for subcooled vapor prior to boiling may not be the only nucleation sites for bubbles once boiling has been established. Rather shallow cavities which are poor vapor traps may be 'seeded' with vapor from bubbles growing at more stable sites [Judd and Chopra (1993)].
Small bubbles bursting through the liquid layers under larger bubbles may produce clouds of tiny bubbles that act as secondary nucleation sites that are not associated with surface cavities [Mesler (1992)], by a process for which there is as yet no quantifiable model. Because of the various mechanisms by which nucleation sites can be created and interact, it is not possible to specify the number of active nucleation sites without also considering bubble motion and the localized processes of heat transfer. The mechanism of growth of a bubble in uniformly superheated liquid, described previously, is modified when nucleation occurs at a solid wall. Growth as a perfect hemisphere (Figure 7a) is prevented by the difficulty of displacing liquid from the solid boundary, so a microlayer of liquid is left under the base of the bubble (Figure 7b). The curvature at the periphery of the bubble depends on the local viscous and inertial stresses. It is sometimes sharp enough to give the appearance of a contact angle between the bubble and the wall, but there is no triple contact line so the properties of the wall can exert no influence. The thickness of the microlayer at the bubble boundary can be estimated from viscous boundary layer theory without detailed consideration of the bubble shape. As it grows, the bubble displaces liquid, so by the time it reaches a point at distance R from the nucleation site in time t the liquid at R has been in motion for time t, and the boundary layer of slow-moving liquid that is overtaken by the bubble is of thickness δR0, where

δR0 ≈ C√(ν1t)     (5)

with C a constant of order unity, and where ν1 is the kinematic viscosity of the liquid. Figure 7. Bubble growth and detachment.
The bubble grows by transient conduction of heat to its interface, as in Equations (2) and (3) but modified by the temperature gradient in the liquid near the wall, and by additional conduction through the microlayer, so approximately (6) From Equations (5) and (6) the initial thickness of the microlayer under a growing bubble increases approximately linearly with radius, to a thickness ranging from a few microns for small, fast-growing bubbles to tens of microns under slow-growing bubbles in pool boiling at low wall superheats. Once formed, the microlayer decreases in thickness by evaporating into the bubble as heat is conducted from the superheated wall across the thin microlayer. As the bubble protrudes farther from the wall into liquid that is less superheated, or even subcooled, its rate of growth decreases and it starts to move away from the wall under the combined influence of hydrodynamic and hydrostatic forces. In saturated pool boiling on a horizontal wall the bubble lifts off vertically and the periphery of the base of the bubble moves back towards the nucleation site (Figure 7c). Initially it moves over wall that is still covered by the microlayer, but at small radii it may encounter a region where the microlayer has evaporated to dryness, and then it would be appropriate to refer to a dynamic advancing contact angle at the base of the bubble. Cooper and Chandratilleke (1981) have presented nondimensional functions to describe the evolution from near-hemispherical to near-spherical shape during the growth of bubbles under various idealized conditions, but analytical models for bubble growth often make the inaccurate assumption that the bubble is a truncated sphere.
Correlation of the departure size by a balance between buoyancy and surface tension forces with a static contact angle θ (Figure 7d) gives no more than the right order of magnitude for the radius:

Rd ≈ Cθ[σ/g(ρ1 − ρg)]^1/2     (7)

where σ is the surface tension, ρ1 and ρg are the liquid and vapor densities, and C is an empirical constant. Improvements in the understanding of bubble departure are to be expected from numerical modeling that takes account of changes in bubble shape and the associated liquid inertia, which can drive bubbles away from the wall even against a hydrostatic buoyancy force. In subcooled boiling the bubbles recondense, either after moving away from the superheated wall at low subcooling, or in close proximity to the wall at large subcooling of the bulk liquid. (See also Bubble Growth.)

The overall mechanism of heat transfer must involve heat removal from the wall, followed by transport into the interior of the bulk liquid. In nucleate boiling, the bubbles somehow greatly reduce the thermal resistance that occurs close to the wall in heat transfer to a single-phase liquid. The mechanisms of heat removal from the wall, summarized in Figure 8, are generally supposed to be: (1) conduction across the very thin microlayers under growing bubbles; (2) quenching by relatively cold bulk liquid moving towards the wall as bubbles round off and detach, modeled by transient conduction into the liquid from 'areas of influence' on the wall about four times the maximum contact area of the bubbles; (3) further localized convective cooling by the motion of bulk liquid in the wakes of departing bubbles; (4) a general increase in turbulence in the liquid close to the wall.

Figure 8. Mechanisms of heat transfer in nucleate boiling.

Heat is transferred into the bulk liquid by the motion of bubbles away from the wall (latent heat transport), which may also carry some superheated liquid round each bubble, or by turbulent transport in the liquid.
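A classical closure of the buoyancy/surface-tension balance in Equation (7) is the Fritz correlation; the 0.0208 coefficient and the water properties used below are standard textbook values, not taken from this article, and as the text warns the result is only an order-of-magnitude estimate.

```python
import math

def fritz_departure_diameter(theta_deg, sigma, rho_l, rho_g, g=9.81):
    """Fritz-type departure diameter (m) from a buoyancy/surface-tension
    balance with static contact angle theta in degrees:
        D_d = 0.0208 * theta_deg * sqrt(sigma / (g*(rho_l - rho_g)))
    The 0.0208 coefficient is the classical empirical value."""
    return 0.0208 * theta_deg * math.sqrt(sigma / (g * (rho_l - rho_g)))

# Saturated water at 1 atm, contact angle ~45 degrees (illustrative):
D = fritz_departure_diameter(45.0, sigma=0.059, rho_l=958.0, rho_g=0.60)
print(f"departure diameter ~ {D*1e3:.1f} mm")
```

The predicted millimetre-scale diameter is the right order of magnitude for water bubbles in pool boiling, which is all the correlation claims.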
In subcooled boiling there may be a 'heat-pipe' effect of vapor evaporating at the base of bubbles and recondensing where the bubbles are in contact with the subcooled bulk liquid (Figure 8v). Mechanisms (1), (2) and (3) are concentrated round the nucleation sites and fluctuate as bubbles grow and depart, so there must be some unsteady lateral conduction of heat in the wall (Figure 9). Only a wall made of a material with infinite thermal diffusivity can have a uniform, steady superheat. In experiments using very thin, electrically heated walls the local variations in temperature are accentuated and can be measured by observing a layer of thermochromic liquid crystal on the back of the wall [Kenning (1992); Kenning and Yan (1995)]. In pool boiling of water at low heat fluxes such measurements confirm that there is strong cooling by microlayer evaporation (1); they show that mechanisms (2) and (3) are less effective than the transient conduction 'quenching' model suggests, and that they operate on a wall area of influence no bigger than the maximum projected areas of the bubbles (Figure 10); the general level of convective cooling (4) is several times the level expected for single-phase convection. The localized cooling round the nucleation sites interacts with the processes of bubble nucleation and growth (Figure 11). The waiting time between bubbles depends on the rate of recovery of the local wall superheat after the departure of a bubble. This recovery may be interrupted by cooling by bubbles growing at adjacent sites or by fluctuations in the general convective cooling, so that sites produce bubbles intermittently. The spatial variations in wall superheat affect the rate of microlayer evaporation that helps to drive bubble growth. The model for nucleation site activity based only on the mean wall superheat, summarized in Figure 6, cannot represent the intricacies of the real nucleation processes on thin walls.
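The transient conduction 'quenching' model referred to above treats the cold liquid that replaces a departing bubble as a semi-infinite medium suddenly brought into contact with the hot wall, so the surface heat flux decays as 1/√t. The sketch below applies the standard one-dimensional transient-conduction result q(t) = ΔT·sqrt(kρc_p/πt); the property values are approximate figures for saturated water at 1 atm, used purely for illustration.

```python
import math

def quenching_flux(delta_T, t, k=0.68, rho=958.0, cp=4216.0):
    """Transient-conduction heat flux (W/m^2) into semi-infinite liquid
    suddenly contacting a wall delta_T hotter, a time t after contact:
        q(t) = delta_T * sqrt(k*rho*cp / (pi*t))
    Defaults are approximate properties of saturated water at 1 atm."""
    return delta_T * math.sqrt(k * rho * cp / (math.pi * t))

# The flux decays as 1/sqrt(t) after each bubble departure (delta_T = 10 K):
for t in (1e-3, 1e-2, 1e-1):
    print(f"t = {t*1e3:6.1f} ms -> q ~ {quenching_flux(10.0, t)/1e3:.0f} kW/m^2")
```

The model's strong early-time cooling is what the thin-wall liquid-crystal measurements show to be an overestimate for mechanisms (2) and (3).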
On thicker walls the variations in superheat should be smaller, but they can only be measured at a few locations by microthermometers. However, the variations can be modeled numerically on a supercomputer, and preliminary studies suggest that they influence nucleate boiling even on a wall of high thermal conductivity such as copper [Sadasivan et al (1994)]. This sort of study should improve our understanding of nucleate boiling, but the fundamental difficulties of specifying the microgeometry and internal wetting characteristics of the nucleation sites will remain.

Figure 9. Lateral conduction in the wall.

Figure 10. Wall cooling during bubble growth. 50–60: growth to maximum radius; 60–70: detachment; 70–80: rising bubble.

Figure 11. Interaction between nucleation sites; influence of sites C, D, E on site A.

As the heat flux and the mean wall superheat are increased in saturated pool boiling, the active nucleation sites become so numerous that their bubbles start to coalesce a short distance from the wall. There is a transition to 'fully developed' nucleate boiling, in which the wall is covered by a liquid-rich 'macrolayer' less than 1 mm thick, through which thin stems of vapor are connected to an overlying cloud of large 'mushroom' bubbles (Figure 12). Heat transfer is assumed to occur by conduction across the unsteady macrolayer, causing evaporation at the bases of the mushroom bubbles, and by evaporation at the wall into the vapor stems feeding the bubbles. The fluctuating solid-liquid-vapor contact lines at the bases of the stems may be zones of efficient heat transfer. Wayner (1992) has described the processes of flow and heat transfer in liquid films so thin that they are influenced by van der Waals forces. There is still debate about the mechanisms of heat transfer in fully-developed nucleate boiling. The boiling curve (heat flux vs.
mean wall superheat) loses the sensitivity to the orientation of the wall that is evident at lower heat fluxes [Nishikawa et al (1984)], but it is still sensitive to the surface condition of the wall, and there is no discontinuity in the curve. It is unclear what role is played by the individual nucleation sites as the heat flux is increased and the macrolayer gets thinner. Nucleate boiling breaks down when the macrolayer can no longer be replenished with liquid at a sufficient rate, or when local dry spots are stabilized by the resulting local increase in wall superheat (see Burnout in Pool Boiling).

Figure 12. Transition from partial nucleate boiling to fully-developed nucleate boiling.

In forced-convective boiling the heated walls form confining channels through which liquid is forced by an externally-applied pressure gradient. The conditions that have received most experimental attention are flow inside vertical and horizontal tubes and flow outside bundles of horizontal tubes. Most experiments involve uniform electrical heating, which does not always represent well the boundary conditions for boiling in heat exchangers, where the source of heat is a hot fluid. The liquid is usually subcooled when it enters the heated region. Vapor is first generated by nucleate boiling; the wall must be superheated to a value that depends on its microgeometry and wettability in order to activate nucleation sites, as in pool boiling. This superheat may be generated by increasing the heat flux, by decreasing the liquid flow rate or by decreasing the system pressure (or perhaps by a combination of all three in industrial systems). In uniformly heated systems boiling is initiated near the downstream end of the heated channel, where the wall temperature is highest and the pressure is lowest, giving the highest wall superheat.
With further increases in heat flux, for instance, the initiation point moves upstream, and flow boiling develops on its downstream side: first through regions of subcooled boiling in which the vapor bubbles condense at or close to the wall, then bubbly flow in which bubbles move into the bulk flow (even if it is still slightly subcooled, i.e., the averaged thermodynamic quality is still negative), and then through the flow regimes corresponding to higher vapor fractions described in the articles on Forced Convection Boiling and Two-phase Flow. A typical wall temperature profile along a uniformly heated channel is shown in Figure 13. The wall superheat is approximately constant in the subcooled nucleate boiling region; as the quantity of flowing vapor increases, the wall superheat decreases as other mechanisms of heat transfer come into effect. (Wall superheats generally refer to time-space average values, since little work has been done in flow boiling on the local variations in superheat that have been shown to be important in pool boiling.) Like pool boiling, flow nucleate boiling of well-wetting liquids can exhibit hysteresis, which can modify the axial distributions of wall superheat [Wadekar (1993)].

Figure 13. Temperature changes along a uniformly-heated channel.

Design correlations usually treat flow boiling as a combination of nucleate boiling and convection. Care is required because nucleate boiling is expected to be a nonlinear function of the wall superheat, and convection may be driven either by the wall-to-bulk temperature difference in subcooled flows or by the wall superheat in saturated flows; it is safer to combine heat fluxes at a given wall superheat, rather than heat transfer coefficients. The nucleate boiling contribution is often based on information obtained from pool boiling experiments, so it is subject to the usual difficulties of specifying surface conditions; this makes it difficult to obtain accurate correlations or even to choose between correlation schemes.
One such scheme is the simple addition of the nucleate boiling and liquid convective heat fluxes, which seems to work reasonably well for large nucleate boiling fluxes at large liquid subcooling [e.g., del Valle and Kenning (1985)], although the nucleate boiling flux depends on the subcooling. This is not surprising because subcooling has a large effect on bubble behavior, reducing size but increasing frequency. This simple scheme does not work when the mass fraction x of vapor becomes significant in saturated flow boiling. The presence of the vapor increases the convective heat flux by mechanisms that depend on the flow regime. The velocity and turbulence of the liquid near the wall may be increased, sliding bubbles may continuously create thin liquid microlayers analogous to the transient microlayers in pool boiling [Cornwell (1990)], the flow may oscillate in the slug-churn flow regime, or liquid may flow on the wall as a thin but highly disturbed film in the annular flow regime. These effects may be represented approximately by multiplying the single-phase liquid heat transfer coefficient by an enhancement factor F, a function of the local quality x and the fluid properties, to obtain the convective heat flux. The changes in flow conditions at the wall should have an effect on the nucleate boiling heat flux at a given wall superheat. The nuclei are exposed to larger temperature gradients, which from Figure 6 may suppress the activity of some sites, and hydrodynamic forces cause bubbles to detach at smaller sizes than in pool boiling. On the other hand, nucleation may be aided by the seeding of unstable sites as bubbles slide along the wall or by the entrainment of microbubbles created at liquid-vapor interfaces in the interior of the two-phase flow.
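The combination rules, simple addition for subcooled flow, a Chen-type superposition with the enhancement factor F and a suppression factor S (introduced in the next paragraph), and taking the larger contribution, can be compared with a few lines of arithmetic. The Python sketch below uses an assumed cubic nucleate-boiling law and hypothetical values for h_l, F and S, purely for illustration; none of the numbers come from this article. It also shows why fluxes, not coefficients, should be combined: the nucleate-boiling term is nonlinear in the wall superheat.

```python
def q_nucleate(dT_sat, A=1.0e2):
    """Assumed cubic nucleate-boiling law, q_nb = A*dT_sat**3 (illustrative)."""
    return A * dT_sat**3

def q_convective(dT_drive, h_l=5.0e3, F=1.0):
    """Single-phase convective flux enhanced by a factor F: q = F*h_l*dT."""
    return F * h_l * dT_drive

dT_sat, dT_sub = 10.0, 30.0  # wall superheat and bulk subcooling, K

# Simple addition (subcooled flow; convection driven by wall-to-bulk dT):
q_add = q_nucleate(dT_sat) + q_convective(dT_sat + dT_sub)

# Chen-type superposition in saturated flow (convection driven by dT_sat),
# with hypothetical suppression S = 0.4 and enhancement F = 2.5:
S, F = 0.4, 2.5
q_chen = S * q_nucleate(dT_sat) + q_convective(dT_sat, F=F)

# Larger-of scheme, taking whichever contribution dominates:
q_max = max(S * q_nucleate(dT_sat), q_convective(dT_sat, F=F))

print(q_add, q_chen, q_max)
```

With these illustrative numbers the superposition and larger-of schemes give different totals at the same wall superheat, which is why experimental data are needed to choose between them.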
The details of these processes are not understood, but it is assumed that they can be combined in a suppression factor S that is a function only of the local flow conditions and which reduces the basic nucleate boiling heat flux. Chen (1966) introduced a correlation scheme for the heat flux of the form

q = S q_nb + q_conv     (8)

of which there have been many subsequent developments (see article on Boiling). However, some experimental data are better represented by setting q equal to whichever is larger of S q_nb or q_conv [Kenning and Cooper (1989)].

This article has so far dealt with nucleate boiling on surfaces that are nominally smooth. We have seen that nucleation depends on microscopic cavities that are accidental consequences of the method of manufacture of the surface. Prolonged service may modify the nucleation characteristics by corrosion or by deposition of corrosion products or dissolved solids. For non-fouling service, tubing is commercially available with 'enhanced' surfaces designed to provide large but stable nucleation sites, sometimes combined with short fins to extend the surface area. Thome (1990) provides a detailed description of boiling on enhanced surfaces (see Augmentation of Heat Transfer, Two Phase).

REFERENCES

Bar-Cohen, A. (1992) Hysteresis phenomena at the onset of nucleate boiling, Proc. Engineering Foundation Conf. on Pool and External Flow Boiling, Santa Barbara, 1–14.

Blander, M. and Katz, J. L. (1975) Bubble nucleation in liquids, AIChE Journal 21, 833–848.

Chen, J. C. (1966) Correlation for boiling heat transfer to saturated fluids in convective flow, Ind. Eng. Chem. Process Design and Development 5, 322–329.

Cooper, M. G. and Chandratilleke, T. (1981) Growth of diffusion-controlled vapor bubbles at a wall in a known temperature gradient, Int. J. Heat Mass Transfer 24, 1475–1492. DOI: 10.1016/0017-9310(81)90215-5.

Cornwell, K. (1990) The influence of bubbly flow on boiling from a tube in a bundle, Int. J. Heat Mass Transfer 33, 2579–2584. DOI: 10.1016/0017-9310(90)90193-X.
del Valle, V. H. and Kenning, D. B. R. (1985) Subcooled boiling at high heat flux, Int. J. Heat Mass Transfer 28, 1907–1920. DOI: 10.1016/0017-9310(85)90213-3.

Dhir, V. K. (1990) Nucleate and transition boiling under pool and external flow conditions, Proc. 9th Int. Heat Transfer Conf., Jerusalem, 1, 129–156. DOI: 10.1016/0142-727X(91)90018-Q.

Fujita, Y. (1992) The state-of-the-art nucleate boiling mechanism, Proc. Engineering Foundation Conf. on Pool and External Flow Boiling, Santa Barbara, 83–98.

Gorenflo, D. (1991) Behältersieden, VDI-Wärmeatlas, 6th edn., VDI-Verlag, Düsseldorf.

Judd, R. L. and Chopra, A. (1993) Interaction of the nucleation processes occurring at adjacent nucleation sites, J. Heat Transfer 115, 955–962. DOI: 10.1115/1.2911392.

Kenning, D. B. R. and Cooper, M. G. (1989) Saturated flow boiling of water in vertical tubes, Int. J. Heat Mass Transfer 32, 445–458. DOI: 10.1016/0017-9310(89)90132-4.

Kenning, D. B. R. (1992) Wall temperature patterns in nucleate boiling, Int. J. Heat Mass Transfer 35, 73–86. DOI: 10.1016/0017-9310(92)90009-H.

Kenning, D. B. R. and Yan, Y. (1996) Pool boiling heat transfer on a thin plate: features revealed by liquid crystal thermography, Int. J. Heat Mass Transfer 39, 3117–3137. DOI: 10.1016/0017-9310(96)00006-3.

Lienhard, J. H. (1976) Correlation for the limiting liquid superheat, Chem. Eng. Science 31, 847–849. DOI: 10.1016/0009-2509(76)80063-2.

Mesler, R. B. (1992) Improving nucleate boiling using secondary nucleation, Proc. Engineering Foundation Conf. on Pool and External Flow Boiling, Santa Barbara, 43–48.

Nishikawa, K., Fujita, Y. and Ohta, H. (1984) Effect of surface configuration on nucleate boiling heat transfer, Int. J. Heat Mass Transfer 27, 1559–1571. DOI: 10.1016/0017-9310(84)90268-0.

Sadasivan, P., Unal, C. and Nelson, R. A. (1994) Nonlinear aspects of high heat flux nucleate boiling heat transfer, Los Alamos National Laboratory Reports TSA-6-94-R105, R106.

Skripov, V. P.
(1974) Metastable Liquids, Wiley, New York.

Thome, J. R. (1990) Enhanced Boiling Heat Transfer, Hemisphere, New York.

Wadekar, V. (1993) Onset of boiling in vertical upflow, Heat Transfer-Atlanta, AIChE Symposium Series 295, 89, 293–299.

Wayner, P. C. (1992) Evaporation and stress in the contact line region, Proc. Engineering Foundation Conf. on Pool and External Flow Boiling, Santa Barbara, 251–256.
190631
https://openstax.org/books/university-physics-volume-1/pages/12-problems
Ch. 12 Problems - University Physics Volume 1 | OpenStax
Problems

12.1 Conditions for Static Equilibrium
--------------------------------------

When tightening a bolt, you push perpendicularly on a wrench with a force of 165 N at a distance of 0.140 m from the center of the bolt. How much torque are you exerting relative to the center of the bolt?

25. When opening a door, you push on it perpendicularly with a force of 55.0 N at a distance of 0.850 m from the hinges. What torque are you exerting relative to the hinges?

Find the magnitude of the tension in each supporting cable shown below. In each case, the weight of the suspended body is 100.0 N and the masses of the cables are negligible.

27. What force must be applied at point P to keep the structure shown in equilibrium? The weight of the structure is negligible.

Is it possible to apply a force at P to keep the structure shown in equilibrium?
The weight of the structure is negligible.

29. Two children push on opposite sides of a door during play. Both push horizontally and perpendicular to the door. One child pushes with a force of 17.5 N at a distance of 0.600 m from the hinges, and the second child pushes at a distance of 0.450 m. What force must the second child exert to keep the door from moving? Assume friction is negligible.

A small 1000-kg SUV has a wheel base of 3.0 m. If 60% of its weight rests on the front wheels, how far behind the front wheels is the SUV's center of mass?

31. The uniform seesaw is balanced at its center of mass, as seen below. The smaller boy on the right has a mass of 40.0 kg. What is the mass of his friend?

12.2 Examples of Static Equilibrium
-----------------------------------

A uniform plank rests on a level surface as shown below. The plank has a mass of 30 kg and is 6.0 m long. How much mass can be placed at its right end before it tips? (Hint: When the board is about to tip over, it makes contact with the surface only along the edge that becomes a momentary axis of rotation.)

33. The uniform seesaw shown below is balanced on a fulcrum located 3.0 m from the left end. The smaller boy on the right has a mass of 40 kg and the bigger boy on the left has a mass of 80 kg. What is the mass of the board?

In order to get his car out of the mud, a man ties one end of a rope to the front bumper and the other end to a tree 15 m away, as shown below. He then pulls on the center of the rope with a force of 400 N, which causes its center to be displaced 0.30 m, as shown. What is the force of the rope on the car?

35. A uniform 40.0-kg scaffold of length 6.0 m is supported by two light cables, as shown below. An 80.0-kg painter stands 1.0 m from the left end of the scaffold, and his painting equipment is 1.5 m from the right end. If the tension in the left cable is twice that in the right cable, find the tensions in the cables and the mass of the equipment.
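Several of the problems above (25, 29, 31, 33) reduce to a torque balance about a pivot. A quick numeric check of problem 29 in Python (my own sketch, not part of the textbook):

```python
# Problem 29 is a torque balance about the door hinges: with the door
# in equilibrium, the two torques cancel, so F1 * d1 = F2 * d2.
F1, d1 = 17.5, 0.600   # first child's push (N) and lever arm (m)
d2 = 0.450             # second child's lever arm (m)

F2 = F1 * d1 / d2      # force the second child must exert
print(round(F2, 1))    # 23.3 (N)
```

The same pattern (sum of torques about a convenient pivot equals zero) solves the seesaw problems as well.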
When the structure shown below is supported at point P, it is in equilibrium. Find the magnitude of force F and the force applied at P. The weight of the structure is negligible.

37. To get up on the roof, a person (mass 70.0 kg) places a 6.00-m aluminum ladder (mass 10.0 kg) against the house on a concrete pad with the base of the ladder 2.00 m from the house. The ladder rests against a plastic rain gutter, which we can assume to be frictionless. The center of mass of the ladder is 2.00 m from the bottom. The person is standing 3.00 m from the bottom. Find the normal reaction and friction forces on the ladder at its base.

A uniform horizontal strut weighs 400.0 N. One end of the strut is attached to a hinged support at the wall, and the other end of the strut is attached to a sign that weighs 200.0 N. The strut is also supported by a cable attached between the end of the strut and the wall. Assuming that the entire weight of the sign is attached at the very end of the strut, find the tension in the cable and the force at the hinge of the strut.

39. The forearm shown below is positioned at an angle θ with respect to the upper arm, and a 5.0-kg mass is held in the hand. The total mass of the forearm and hand is 3.0 kg, and their center of mass is 15.0 cm from the elbow. (a) What is the magnitude of the force that the biceps muscle exerts on the forearm for θ = 60°? (b) What is the magnitude of the force on the elbow joint for the same angle? (c) How do these forces depend on the angle θ?

The uniform boom shown below weighs 3000 N. It is supported by the horizontal guy wire and by the hinged support at point A. What are the forces on the boom due to the wire and due to the support at A? Does the force at A act along the boom?

41. The uniform boom shown below weighs 700 N, and the object hanging from its right end weighs 400 N. The boom is supported by a light cable and by a hinge at the wall.
Calculate the tension in the cable and the force of the hinge on the boom. Does the force on the hinge act along the boom?

A 12.0-m boom, AB, of a crane lifting a 3000-kg load is shown below. The center of mass of the boom is at its geometric center, and the mass of the boom is 1000 kg. For the position shown, calculate tension T in the cable and the force at the axle A.

43. A uniform trapdoor shown below is 1.0 m by 1.5 m and weighs 300 N. It is supported by a single hinge (H), and by a light rope tied between the middle of the door and the floor. The door is held at the position shown, where its slab makes a 30° angle with the horizontal floor and the rope makes a 20° angle with the floor. Find the tension in the rope and the force at the hinge.

A 90-kg man walks on a sawhorse, as shown below. The sawhorse is 2.0 m long and 1.0 m high, and its mass is 25.0 kg. Calculate the normal reaction force on each leg at the contact point with the floor when the man is 0.5 m from the far end of the sawhorse. (Hint: At each end, find the total reaction force first. This reaction force is the vector sum of two reaction forces, each acting along one leg. The normal reaction force at the contact point with the floor is the normal (with respect to the floor) component of this force.)

12.3 Stress, Strain, and Elastic Modulus
----------------------------------------

45. The “lead” in pencils is a graphite composition with a Young’s modulus of approximately 1.0 × 10⁹ N/m². Calculate the change in length of the lead in an automatic pencil if you tap it straight into the pencil with a force of 4.0 N. The lead is 0.50 mm in diameter and 60 mm long.

TV broadcast antennas are the tallest artificial structures on Earth. In 1987, a 72.0-kg physicist placed himself and 400 kg of equipment at the top of a 610-m-high antenna to perform gravity experiments.
By how much was the antenna compressed, if we consider it to be equivalent to a steel cylinder 0.150 m in radius?

47. By how much does a 65.0-kg mountain climber stretch her 0.800-cm diameter nylon rope when she hangs 35.0 m below a rock outcropping? (For nylon, Y = 1.35 × 10⁹ Pa.)

When water freezes, its volume increases by 9.05%. What force per unit area is water capable of exerting on a container when it freezes?

49. A farmer making grape juice fills a glass bottle to the brim and caps it tightly. The juice expands more than the glass when it warms up, in such a way that the volume increases by 0.2%. Calculate the force exerted by the juice per square centimeter if its bulk modulus is 1.8 × 10⁹ N/m², assuming the bottle does not break.

A disk between vertebrae in the spine is subjected to a shearing force of 600.0 N. Find its shear deformation, using a shear modulus of 1.0 × 10⁹ N/m². The disk is equivalent to a solid cylinder 0.700 cm high and 4.00 cm in diameter.

51. A vertebra is subjected to a shearing force of 500.0 N. Find the shear deformation, taking the vertebra to be a cylinder 3.00 cm high and 4.00 cm in diameter.

Calculate the force a piano tuner applies to stretch a steel piano wire by 8.00 mm, if the wire is originally 1.35 m long and its diameter is 0.850 mm.

53. A 20.0-m-tall hollow aluminum flagpole is equivalent in strength to a solid cylinder 4.00 cm in diameter. A strong wind bends the pole as much as a horizontal 900.0-N force on the top would do. How far to the side does the top of the pole flex?

A copper wire of diameter 1.0 cm stretches 1.0% when it is used to lift a load upward with an acceleration of 2.0 m/s². What is the weight of the load?

55. As an oil well is drilled, each new section of drill pipe supports its own weight and the weight of the pipe and the drill bit beneath it.
Calculate the stretch in a new 6.00-m-long steel pipe that supports a 100-kg drill bit and a 3.00-km length of pipe with a linear mass density of 20.0 kg/m. Treat the pipe as a solid cylinder with a 5.00-cm diameter.

A large uniform cylindrical steel rod of density ρ = 7.8 g/cm³ is 2.0 m long and has a diameter of 5.0 cm. The rod is fastened to a concrete floor with its long axis vertical. What is the normal stress in the rod at the cross-section located at (a) 1.0 m from its lower end? (b) 1.5 m from the lower end?

57. A 90-kg mountain climber hangs from a nylon rope and stretches it by 25.0 cm. If the rope was originally 30.0 m long and its diameter is 1.0 cm, what is Young’s modulus for the nylon?

A suspender rod of a suspension bridge is 25.0 m long. If the rod is made of steel, what must its diameter be so that it does not stretch more than 1.0 cm when a 2.5 × 10⁴-kg truck passes by it? Assume that the rod supports all of the weight of the truck.

59. A copper wire is 1.0 m long and its diameter is 1.0 mm. If the wire hangs vertically, how much weight must be added to its free end in order to stretch it 3.0 mm?

A 100-N weight is attached to a free end of a metallic wire that hangs from the ceiling. When a second 100-N weight is added to the wire, it stretches 3.0 mm. The diameter and the length of the wire are 1.0 mm and 2.0 m, respectively. What is Young’s modulus of the metal used to manufacture the wire?

61. The bulk modulus of a material is 1.0 × 10¹¹ N/m². What fractional change in volume does a piece of this material undergo when it is subjected to a bulk stress increase of 10⁷ N/m²? Assume that the force is applied uniformly over the surface.

Normal forces of magnitude 1.0 × 10⁶ N are applied uniformly to a spherical surface enclosing a volume of a liquid.
This causes the radius of the surface to decrease from 50.000 cm to 49.995 cm. What is the bulk modulus of the liquid?

63. During a walk on a rope, a tightrope walker creates a tension of 3.94 × 10³ N in a wire that is stretched between two supporting poles that are 15.0 m apart. The wire has a diameter of 0.50 cm when it is not stretched. When the walker is on the wire in the middle between the poles, the wire makes an angle of 5.0° below the horizontal. How much does this tension stretch the steel wire when the walker is in this position?

When using a pencil eraser, you exert a vertical force of 6.00 N at a distance of 2.00 cm from the hardwood–eraser joint. The pencil is 6.00 mm in diameter and is held at an angle of 20.0° to the horizontal. (a) By how much does the wood flex perpendicular to its length? (b) How much is it compressed lengthwise?

65. Normal forces are applied uniformly over the surface of a spherical volume of water whose radius is 20.0 cm. If the pressure on the surface is increased by 200 MPa, by how much does the radius of the sphere decrease?

12.4 Elasticity and Plasticity
------------------------------

A uniform rope of cross-sectional area 0.50 cm² breaks when the tensile stress in it reaches 6.00 × 10⁶ N/m². (a) What is the maximum load that can be lifted slowly at a constant speed by the rope? (b) What is the maximum load that can be lifted by the rope with an acceleration of 4.00 m/s²?

One end of a vertical metallic wire of length 2.0 m and diameter 1.0 mm is attached to a ceiling, and the other end is attached to a 5.0-N weight pan, as shown below. The position of the pointer before the pan is loaded is 4.000 cm. Different weights are then added to the pan, and the position of the pointer is recorded in the table shown.
Plot stress versus strain for this wire, then use the resulting curve to determine Young’s modulus and the proportionality limit of the metal. What metal is this most likely to be?

| Added load (including pan) (N) | Scale reading (cm) |
| --- | --- |
| 0 | 4.000 |
| 15 | 4.036 |
| 25 | 4.073 |
| 35 | 4.109 |
| 45 | 4.146 |
| 55 | 4.181 |
| 65 | 4.221 |
| 75 | 4.266 |
| 85 | 4.316 |

An aluminum (ρ = 2.7 g/cm³) wire is suspended from the ceiling and hangs vertically. How long must the wire be before the stress at its upper end reaches the proportionality limit, which is 8.0 × 10⁷ N/m²?

Citation information: Authors: William Moebs, Samuel J. Ling, Jeff Sanny. Publisher/website: OpenStax. Book title: University Physics Volume 1. Publication date: Sep 19, 2016. Location: Houston, Texas. © Jul 8, 2025 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License.
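The stress-versus-strain exercise above can also be reduced numerically. As a rough cross-check (my own data reduction, not part of the textbook; treating the 15 N to 55 N rows as the linear region is an assumption):

```python
import math

# Wire from the exercise: length 2.0 m, diameter 1.0 mm.
L = 2.0                                   # m
A = math.pi * (0.5e-3) ** 2               # cross-sectional area, m^2

# Linear-looking region of the table (load in N, scale reading in cm).
loads = [15, 25, 35, 45, 55]
readings = [4.036, 4.073, 4.109, 4.146, 4.181]

dF = loads[-1] - loads[0]                       # 40 N
d_ext = (readings[-1] - readings[0]) * 1e-2     # elongation change, m
Y = (dF / A) / (d_ext / L)                      # stress slope / strain slope
print(f"Y ~ {Y:.1e} Pa")                        # ~7e10 Pa
```

A slope near 7 × 10¹⁰ Pa would point toward an aluminum-like metal, which is the kind of conclusion the exercise asks students to draw from the plot.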
https://mrsperkeysclassroom.weebly.com/unit-5---ratios-rates-and-percents.html
Unit 5 - Ratios, Rates and Percents - Mrs. Perkey's Classroom

Ratios and Rates
Khan Academy - Video - Ratios Introduction
Math is Fun - What is a Ratio?
Khan Academy - Video - Introduction to Rates
Virtual Nerd - What are Rates and Unit Rates?
Softschools - Ratios Coloring Game
Arcademic Skill Builders

Unit Rates
AAA Math - Unit Rates
Math is Fun - Unit Price Game
Math Help - Unit Rates
Shmoop Video - Unit Rates
IXL - Unit Rate Practice
Khan Academy - Solving Unit Rates

Work with Percent
Khan Academy - Fraction to Decimal
Math is Fun - Converting Fractions to Decimals
Math is Fun - Converting Fractions to Percents
Math is Fun - Converting Percents to Fractions
Math is Fun - Converting Decimals to Fractions (www.mathsisfun.com/converting-decimals-fractions.html)
Math is Fun - Converting Decimals to Percents (www.mathsisfun.com/converting-decimals-percents.html)

Percent Increase or Decrease
Math is Fun - Percentage Change
Skills You Need - Percentage Change

Percent Error
Math is Fun - Percentage Error
Science Notes - Calculate Percent Error

Multi-Step Percent Problems
Khan Academy - Solving Percent Problems
IXL - Multi-step Percent Problems Practice
Sofa Tutor - Multi-Step Problems
Math Games - Percents with Multi-Step Problems

Simple Interest
Virtual Nerd - What is the Formula for Simple Interest
Math Boot Camps - Simple Interest Formula and Examples
Study.com - How to Find Simple Interest Rates: Definition, Formula and Examples
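Three of the unit's topics, percent change, percent error, and simple interest, can be sketched as small formulas. These are the standard textbook definitions (percent change = (new − old)/old × 100, percent error = |measured − actual|/|actual| × 100, and I = Prt), not taken from any of the linked pages:

```python
def percent_change(old, new):
    # Positive result = percent increase; negative = percent decrease.
    return (new - old) / old * 100

def percent_error(measured, actual):
    return abs(measured - actual) / abs(actual) * 100

def simple_interest(principal, rate, years):
    # I = P * r * t, with rate as a decimal (e.g. 0.05 for 5%)
    return principal * rate * years

print(percent_change(40, 50))          # 25.0  (a 25% increase)
print(percent_error(9.5, 10.0))        # 5.0   (percent)
print(simple_interest(1000, 0.05, 2))  # 100.0 (interest earned)
```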
https://www.acs.org/content/dam/acsorg/msc/downloads/chapter-3/lesson-1/ch3-l1-lesson-plan.pdf
www.acs.org/middleschoolchemistry ©2023 American Chemical Society

Chapter 3, Lesson 1: What is Density?

Key Concepts
• Density is a characteristic property of a substance.
• The density of a substance is the relationship between the mass of the substance and how much space it takes up (volume).
• The mass of atoms, their size, and how they are arranged determine the density of a substance.
• Density equals the mass of the substance divided by its volume; D = m/v.
• Objects with the same volume but different mass have different densities.

Summary
Students will observe a copper and an aluminum cube of the same volume placed on a balance. They will see that the copper has a greater mass. Students will try to develop an explanation, on the molecular level, for how this can be. Students are then given cubes of different materials that all have the same volume. Students determine the density of each cube and identify the substance the cube is made from.

Objective
Students will be able to calculate the density of different cubes and use these values to identify the substance each cube is made of. Students will be able to explain that the size, mass, and arrangement of the atoms or molecules of a substance determines its density.

Evaluation
The activity sheet will serve as the “Evaluate” component of each 5-E lesson plan. The activity sheets are formative assessments of student progress and understanding. A more formal summative assessment is included at the end of each chapter.

Safety
Make sure you and your students wear properly fitting goggles.

Materials for Each Group
• Cubes marked A–H that you will share with other groups
• Balance that can measure in grams
• Calculator

Materials for the Demonstration
• Copper cube and aluminum cube of the same volume
• Balance

Notes about the materials

Cubes
For this lesson, you will need a set of cubes of different materials that are all the same volume.
These sets of cubes are available from a variety of suppliers. You will want a set of cubes that contains Copper, Brass, Steel, Aluminum, PVC, Nylon, Oak, and Pine or Poplar. In the activity, each group will need to measure the mass of each of the eight cubes. Groups will need to measure and record their data for a cube and pass it along to another group until each group has used each of the cubes.

Balances
Use a simple, plastic, two-sided balance that looks like a seesaw for the demonstration. Have students use any balance that can measure in grams.

Metric ruler
Students will use a metric ruler in the Engage portion of the activity when they measure the length, width, and height of a cube along with you.

About this Lesson
This is the first lesson in which students see models of molecules that are more complex than a water molecule. Some of these molecules may look a little intimidating. Let students know that they do not need to memorize or draw these molecules. For the purpose of this chapter, students only need to think about the size and mass of the atoms that make up the molecule and how they are arranged in the substance.

ENGAGE

1. Do a demonstration to show that cubes of the same volume but made of different metals have different masses.

Question to investigate
Do cubes of exactly the same size and shape have the same mass?

Materials for the demonstration
• Copper cube and aluminum cube of the same volume
• Balance

Procedure
Place the copper and aluminum cube on opposite sides of a simple balance.

Expected results
The copper cube will have a greater mass than the aluminum cube.

2. Lead a discussion about why the copper cube has a greater mass than the aluminum cube.

Tell students that both cubes are exactly the same size, and both are solid with no hollow spots. Explain that the aluminum cube is made of only aluminum atoms and the copper cube is made of only copper atoms.
Ask students:
• How can two objects, which are exactly the same size and shape, have a different mass?

Help students understand that the difference in mass must have something to do with the atoms in each cube. There are three possible explanations about the copper and aluminum atoms in the cubes that could explain the difference in mass:
• Copper atoms might have more mass than aluminum atoms.
• Copper atoms might be smaller, so more can fit in the same volume.
• Copper and aluminum atoms might be arranged differently, so more copper atoms fit in the same size cube.

Explain that any one of these explanations alone, or two or three together, could be the reason why the copper cube has more mass.

Give each student an activity sheet. Students will record their observations and answer questions about the activity on the activity sheet. The Explain It with Atoms & Molecules and Take It Further sections of the activity sheet will either be completed as a class, in groups, or individually, depending on your instructions. Look at the teacher version of the activity sheet to find the questions and answers.

3. Project an illustration and use the pictures of the copper and aluminum atoms to introduce the concept of density.

Have students turn to the illustration of copper and aluminum cubes and their atoms on their activity sheet. Show students the image Aluminum and Copper Atoms: www.acs.org/middleschoolchemistry/simulations/chapter3/lesson1.html

Explain to students that the copper and aluminum atoms are arranged in the same way in their cubes. Copper atoms are a little larger than aluminum atoms. This means there are fewer copper atoms in the copper cube than aluminum atoms in the aluminum cube. But copper atoms have much more mass than aluminum atoms.
So even though there might not be as many copper atoms, their extra mass makes up for it and makes the copper cube heavier than the aluminum cube of the same size and shape (volume).

Note: There are different ways of measuring the size of atoms, and in close cases the results are not always in agreement. This is true with copper and aluminum. Some sources report copper as larger by some measures and some report aluminum as larger. For the purposes of this lesson, we will treat copper as the larger atom.

Explain to students that this idea of how heavy something is compared to the amount of space it takes up is called density. The density of an object is the mass of the object compared to its volume. The equation for density is: Density = mass/volume, or D = m/v. Each substance has its own characteristic density because of the size, mass, and arrangement of its atoms or molecules.

4. Show animations and demonstrate how to measure volume and mass of a cube.

Explain to students that volume is a measure of the amount of space an object takes up. It is always in three dimensions. To find the volume of an object like a cube or a box, you measure the length, width, and height and then multiply them (V = l × w × h). If measured in centimeters, the answer will be in cubic centimeters (cm³).

Note: Students often confuse volume and area. Check their understanding to make sure they know the difference. Make sure they understand that area is measured in two dimensions (length × width) with an answer in cm². Area is a measure of the amount of surface. But volume is measured in three dimensions (length × width × height) with an answer in cm³. Volume is a measure of the entire object, including the surface and all the space the object takes up.

Show the animation Cube: www.acs.org/middleschoolchemistry/simulations/chapter3/lesson1.html

While the animation is playing, you can demonstrate the measuring process with a cube and ruler.
Have students measure along with you to confirm the volume of the cubes.

Volume
The cubes are 2.5 centimeters on each side. Show students that in order to calculate the volume, you multiply the length (2.5 cm) × width (2.5 cm) × height (2.5 cm) to get 15.625 cm³. Rounding this number to 15.6 cm³ is accurate enough and will make the density calculations easier. Record the volume of the cube in cubic centimeters (cm³).

Mass
Demonstrate how to use the balance that students will be using to measure the mass of the cube. Record the mass of the cube in grams (g).

Density
Show students how to calculate density by dividing the mass by the volume. Point out that the answer will be in grams per cubic centimeter (g/cm³).

EXPLORE

5. Have students calculate the density of eight different cubes and use the characteristic property of density to correctly identify them.

Student groups will not need to measure the volume of the cubes. The volume of each cube is the same, 15.6 cm³, and is given in their chart on the activity sheet. They will need to measure the mass of each of the eight different cubes and calculate their densities. Students will use their values for density to identify each cube.

Note: The densities students calculate may not be exactly the same as the given densities in this chart. However, their calculations will be close enough that they should be able to identify most of the cubes.

Question to investigate
Can you use density to identify eight cubes made of different materials?

Materials for the class
• Set of eight cubes of equal volume
• Calculator

Teacher preparation
Use a piece of masking tape and a permanent marker to mark the eight cubes with the letters A–H.

Materials for each group
• Cubes marked A–H that you will share with other groups
• Balance that can measure in grams
• Calculator

Procedure
1. The volume of each cube is given in the chart. It is 15.6 cm³.
2. Find the mass in grams of each cube using a scale or balance. Record this mass in the chart.
3. Trade cubes with other groups until you have measured the mass of all eight cubes.
4. Calculate the density using the formula D = m/v and record it in the chart.

Sample   Volume (cm³)   Mass (g)   Density (g/cm³)   Material
A        15.6
B        15.6
C        15.6
D        15.6
E        15.6
F        15.6
G        15.6
H        15.6

5. Compare the value you found for density with the given value in the chart below to identify which cube is made out of which material. Write the name of the material in your chart for cubes A–H.

Material         Approximate density (g/cm³)
Aluminum         2.9
Brass            8.8
Copper           9.3
Steel            8.2
PVC              1.3
Nylon            1.2
Oak              0.7–0.9
Pine or poplar   0.4–0.6

Expected results
Student values for density for each cube will not be exact, but will be close enough that they should be able to identify each of the cubes. You may notice that the approximate densities given for each cube in this lesson are slightly different than those given in the cube set. Most of this difference is probably due to the value for the volume of each cube. Since it is likely that these are 1-inch cubes, each side should be 2.54 cm. We rounded to 2.5 cm because students can make this measurement more easily.

EXPLAIN

6. Discuss how the mass, size, and arrangement of atoms and molecules affect the densities of metal, plastic, and wood.

Explain to students that each substance has its own density because of the atoms and molecules it is made from. The metal, plastic, and wood cubes that students measured each have their own unique density. In general, the density of metal, plastic, and wood can be explained by looking at the size and mass of the atoms and how they are arranged.

Project the image Metal: www.acs.org/middleschoolchemistry/simulations/chapter3/lesson1.html

Most common metals like aluminum, copper, and iron are more dense than plastic or wood. The atoms that make up metals are generally heavier than the atoms in plastic and wood, and they are packed closer together. The difference in density between different metals is usually based on the size and the mass of the atoms, but the arrangement of the atoms in most metals is mostly the same.

Project the image Plastic: www.acs.org/middleschoolchemistry/simulations/chapter3/lesson1.html

Most plastics are less dense than metal but can have a similar density to wood. Plastics are made from individual molecules bonded together into long chains called polymers. These polymer chains are arranged and packed together to make the plastic. One common plastic, polyethylene, is made up of many individual molecules called ethylene, which bond together to make the long polymer chains. Like most plastics, the polymers in polyethylene are made of carbon and hydrogen atoms (see the ethylene molecule). The carbon and hydrogen atoms are very light, which helps give plastics their relatively low density. Plastics can have different densities because different atoms can be attached to the carbon–hydrogen chains. The density of different plastics also depends on the closeness of packing of these polymer chains.

Project the image Wood: www.acs.org/middleschoolchemistry/simulations/chapter3/lesson1.html

Wood is made mostly from carbon, hydrogen, and oxygen atoms bonded together into a molecule called glucose. These glucose molecules are bonded together to form long chains called cellulose. Many cellulose molecules stacked together give wood its structure and density. In general, the densities of wood and plastic are similar because they are made of similar atoms arranged in long chains. The difference in density is mostly based on the arrangement and packing of the polymer chains. Also, since wood is from a living thing, its density is affected by the structure of plant cells and other substances that make up wood.

Ask students: The size, mass, and arrangement of atoms affect the density of a substance.
• How might these factors work together to cause a substance to have a high density? A substance with smaller, more massive atoms that are close together is going to have a higher density.
• How might these factors work together to cause a substance to have a low density? A substance with larger, lighter atoms that are farther apart is going to have a lower density.

EXTEND

7. Have students explain on the molecular level why two blocks of different materials that have the same mass can have different densities.

Remind students that they looked at cubes that had the same volume but different masses. Point out that their activity sheet has drawings of two blocks (Sample A and Sample B) made of different substances that both have the same mass, but different volumes.

Ask students:
• What is the density of Sample A? Volume = 5 × 5 × 4 = 100 cm³. Mass = 200 g. Density = 200 g / 100 cm³ = 2 g/cm³.
• What is the density of Sample B? Volume = 5 × 5 × 2 = 50 cm³. Mass = 200 g. Density = 200 g / 50 cm³ = 4 g/cm³.

Give two possible explanations for why one sample is more dense than the other. Hint: The size, mass, and arrangement of molecules affect the density of a substance. There are several possible answers for why Sample B is more dense than Sample A.
• Sample B atoms might have more mass than Sample A atoms.
• Sample B atoms might be smaller than Sample A atoms, so more can fit in the same volume.
• Sample B atoms might be arranged differently, so more Sample B atoms than Sample A atoms fit in the same size cube.
Any one of these explanations alone, or any combination, could be the reason why Sample B is more dense than Sample A.
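The density calculations for the two activity-sheet samples can be written out directly; this is a small sketch (the function name and argument order are my own choices, not part of the lesson):

```python
# Density of the two hypothetical blocks (Sample A and Sample B) from the
# activity sheet: same mass (200 g), different volumes.
def density(mass_g, length_cm, width_cm, height_cm):
    """Return density in g/cm^3 from a block's mass and dimensions."""
    volume_cm3 = length_cm * width_cm * height_cm
    return mass_g / volume_cm3

print(density(200, 5, 5, 4))  # Sample A: 200 g / 100 cm^3 = 2.0 g/cm^3
print(density(200, 5, 5, 2))  # Sample B: 200 g / 50 cm^3 = 4.0 g/cm^3
```

Sample B comes out twice as dense as Sample A even though both have the same mass, which is the point of the exercise.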
https://www.wyzant.com/resources/answers/801739/find-the-area-of-the-region-bounded-by-the-parabola-y-4x2-the-tangent-line-
Find the area of the region bounded by the parabola y = 4x², the tangent line to this parabola at (4, 64), and the x-axis. | Wyzant Ask An Expert

Calculus
Mohammad N. asked • 12/05/20
Find the area of the region bounded by the parabola y = 4x², the tangent line to this parabola at (4, 64), and the x-axis.

2 Answers By Expert Tutors

William W. answered • 12/05/20
Experienced Tutor and Retired Engineer

Area under y = 4x² = ∫₀⁴ 4x² dx = (4/3)x³ evaluated between 0 and 4 = (4/3)(4³) − (4/3)(0³) = 256/3

The tangent line has a slope found by taking the derivative: y′ = 8x, and y′(4) = 8(4) = 32. Therefore the equation of the tangent line at (4, 64), using the point-slope form of a line, is y − 64 = 32(x − 4), or y = 32x − 64. When y = 0, x = 2.
Area under the tangent line is ∫₂⁴ (32x − 64) dx = 16x² − 64x evaluated between 2 and 4 = 16(4²) − 64(4) − (16(2²) − 64(2)) = 256 − 256 − 64 + 128 = 64

Area between the curve and the tangent line = 256/3 − 64 = 256/3 − 192/3 = 64/3

Tom K. answered • 12/05/20
Knowledgeable and Friendly Math and Statistics Tutor

The tangent at (4, 64) of y = 4x² has slope 8x = 8(4) = 32. Thus, y = mx + b yields 64 = 32(4) + b, so b = −64. Then 0 = 32x − 64, so x = 2: the tangent line crosses the x-axis at x = 2. Thus, the area between the curve and the tangent line is the area under the curve minus the area of the triangle with vertices at (2, 0), (4, 0), and (4, 64).

∫₀⁴ 4x² dx = (4/3)x³ |₀⁴ = (4/3)(4³) − 0 = 256/3

The area of the triangle is (1/2)(2)(64) = 64

256/3 − 64 = 64/3
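The arithmetic in both answers can be verified mechanically. A quick check in Python using exact fractions (the variable names are mine, chosen for readability):

```python
from fractions import Fraction

# Area under the parabola y = 4x^2 from x = 0 to 4: antiderivative (4/3)x^3
area_parabola = Fraction(4, 3) * 4**3 - Fraction(4, 3) * 0**3   # 256/3

# Tangent line at (4, 64): slope y' = 8x evaluated at x = 4 gives 32,
# so the line is y = 32x - 64, which meets the x-axis at x = 64/32 = 2
slope = 8 * 4
x_intercept = Fraction(64, 32)

# Area under the tangent from x = 2 to 4: antiderivative 16x^2 - 64x
area_tangent = (16 * 4**2 - 64 * 4) - (16 * 2**2 - 64 * 2)      # 64

print(area_parabola - area_tangent)   # 64/3
```

Both tutors' routes (integral minus integral, or integral minus triangle) agree with this result of 64/3.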
https://www.bhu.ac.in/Content/Syllabus/Syllabus_300620200523101219.pdf
INJURY RAMKRISHNA MISHRA RESEARCH SCHOLAR DEPARTMENT OF FORENSIC MEDICINE INSTITUTE OF MEDICAL SCIENCES BANARAS HINDU UNIVERSITY VARANASI -221005 . • Injury – Any harm whatever illegally caused to any person, in body, mind, reputation or property (S.44 IPC). • Wound – Any breach in the natural continuity of any tissues of the living body. • Trauma – Any physical or psychological injury. • Torture – Infliction of intense pain to punish, coerce, or afford sadistic pleasure. Classifications of Injury: Mechanical injuries – Due to blunt force: Abrasions, Contusions, Lacerations, Fracture & Dislocation; Due to sharp force: Incised wounds, Chop wounds, Stab wounds; Firearm injury. Abrasion: Scratch, Graze, Pressure, Impact. Contusions: Intradermal, Subcutaneous, Deep. Lacerations: Split, Stretch, Cut, Avulsion (tangential force, perpendicular force, or a mixture of both). Thermal injuries – Due to cold: with tissue freezing (Frostbite, Frostnip); without tissue freezing (Trench foot, Chilblains); Due to heat: Burns, Scalds. Chemical injuries: Corrosive acids, Corrosive alkalis, Corrosive salts. Physical agents: Electricity, Lightning, X-rays, Radioactive substances, Explosions. Legal: Hurt, Grievous Hurt. Medico-legal: Suicidal, Homicidal, Accidental, Self-inflicted, Defense, Iatrogenic. Grievous Hurt (S.320 IPC, 8 clauses) ▪Emasculation ▪Permanent privation of the sight of either eye ▪Permanent privation of the hearing of either ear ▪Privation of any member or joint ▪Destruction or permanent impairing of the powers of any member or joint ▪Permanent disfiguration of the head or face ▪Fracture or dislocation of a bone or tooth ▪Any hurt: • Which endangers life • Which causes the sufferer to be, during the space of 20 days, in severe bodily pain, or • Unable to follow his ordinary pursuits. Mechanical injuries ❑Injuries caused by application of physical violence to the body are known as mechanical injuries.
❑Produced by blunt, sharp, or firearm weapons ❑Severity and extent depend upon – • Amount of force delivered • Time period • Region struck • Extent of body surface • Nature of weapon used. Abrasion • It is a superficial injury involving only the epidermal layers of skin. • The outer layers of skin are scratched or removed, leaving a bare area with little or no bleeding. • These heal rapidly in a few days and leave no scar. • If the injury extends to the sub-epidermal region below the dermal papillae, it results in superficial scarring, termed a superficial laceration. • Becomes more prominent when the skin dries (dark brown/black). Scratch Abrasion • It is a linear injury caused by a sharp pointed object such as a pin or fingernail running across the skin, having appreciable length but no significant width. • Point scratches are produced by the tip of a knife, pin, or razor. • Nails produce curved, semilunar abrasions. • The direction of the scratch is indicated by the sharp edge initially and heaped-up epithelium/epidermis at the end. GRAZES / SLIDING / FRICTIONAL BURNS / GRAVEL RASH • Produced when a broad surface of skin slides against a rough surface. • It is essentially a collection of innumerable scratch abrasions; epithelium heaped up at the end indicates the direction. • Identification of the scene of incident by the presence of foreign bodies (dirt or grit) in the graze, which can be compared with the scene. • Road traffic accidents (RTA). • Dragging of a body on the ground. • Glancing kick with a boot. Pressure Abrasion • Result of a more or less perpendicular application of relatively small force for long time periods on the body surface; causes crushing of the epithelium.
• Ligature marks • Nail abrasions • Teeth bite marks • Shoe bite marks • Contact lens on cornea • Nappy rashes. Impact/Patterned Abrasion • When a relatively large force is applied perpendicular to the skin for a short duration of time, it causes crushing of the epithelium and leaves a pattern/imprint. • May be slightly depressed unless there is local edema. • There is an underlying bruise. • Headlamp rim and radiator grill marks • Tire marks • Bicycle chain marks • Cat-o'-nine-tails • Muzzle impression or recoil impression. Age of Abrasions (Observation – Time): Bright red – Fresh; Red scab (dried blood/serum) – 12–24 hours; Reddish brown scab – 2–3 days; Dark brown scab (healing from periphery) – 4–7 days; Scab falls off (complete healing) – 10–14 days. Difference between Antemortem and Postmortem Abrasion (Feature – Antemortem – Postmortem): Site – Anywhere on body – Usually over bony prominences; Color – Brownish on drying – Yellow on drying; Margins – Blurred due to vasoconstriction – Sharp; Exudation – More, scab slightly raised – Less, below skin level; Vital reaction – Present, congestion – Absent, no congestion; Sign of repair – Present – Absent; Bleeding – Present – Absent. MEDICO-LEGAL SIGNIFICANCE • They indicate the site of impact and direction of force. • They may be the only external sign of serious internal injury. • Patterned abrasions are helpful in connecting the crime with the object which produced them. • The age of the injury can be known. • In open wounds, dirt, dust, grease or grit are usually present, which may connect the injuries to the scene of crime. • The manner of injury may be known from its distribution. • In throttling, curved abrasions due to fingernails are found on the neck. • In smothering, abrasions may be seen around the mouth and the nose. • In sexual assaults, abrasions may be found on the breasts, genitals, inside of the thighs and around the anus. • Abrasions on the face or body of the assailant indicate a struggle. . Scratch Abrasions Graze Abrasions .
Pressure Abrasions Imprint Abrasion Bruise (Contusion) • A bruise is hemorrhage into the tissues underneath the skin, due to rupture of vessels (arterioles, venules and veins) by the application of blunt force, without breach of the covering tissue (skin or capsule). • When the effusion of blood is in other tissues and organs (muscles, lung, heart, brain, spleen etc.), it is called a contusion. • A bruise is usually situated in the dermis, subcutaneous tissue and sometimes in the fat layer. Causes ❑Spontaneous – Diseases of blood or blood vessels, scurvy, leukemia etc. ❑Traumatic – Bruises are caused by blunt force, e.g. iron rod, lathi, stone, cricket bat, hockey stick, whip, boot; accidental; RTA; firm gripping of a weak person etc. ❖Painful, tender; crushing and tearing of subcutaneous tissues; usually no destruction of tissues. ❖If abrasion and contusion occur together, it is called an abraded contusion or contused abrasion. . ❑A petechial hemorrhage is a pinpoint red or blue spot due to rupture of small capillaries • 0.1–2 mm in size ❑Ecchymosis occurs when a large number of petechiae increase in size and fuse together. • 2–5 mm in size ❑A bruise is an effusion of blood under the skin, discolored reddish or bluish in appearance, with a flat surface. • More than 5 mm in size ❑A hematoma is appreciable bleeding due to damage of large blood vessels. • More than 5 mm in size • Lesion raised above the surface of the skin ❑A Mongolian spot is a hyper-pigmented spot in the lumbosacral region.
Diagnostic features • Shape may not correspond to the shape of the causative object • Reddened area when fresh • Margins blurred • Pain • Swelling • Size varies from a pinhead to an extensive hematoma. Classification ❑3 types depending on location: ▪Intradermal bruise • Bleeding occurs in the dermis • Extravasated blood is small in amount • Due to its superficial position the pattern is distinct • Vehicle tire pattern and stomping ▪Subcutaneous bruise • Bleeding occurs in subcutaneous tissue • Most common type; appears soon after injury ▪Deep bruise • Bleeding in deep subcutaneous tissues, just above muscle or between muscle bundles • Appears 2–3 days after injury (delayed bruising) • IR photography • Examination after 48 hours. FACTORS MODIFYING APPEARANCE OF BRUISE • Site of injury • Vascularity of area • Age • Sex • Color of skin • Obesity • Embalming • Nature of disease • Clothing. Gravity shifting of blood ▪The extravasated blood may move along tissue planes under the influence of gravity and get collected at a distant place (gravity shifting). ▪A bruise around the tissues of the eyes & eyelids, called a Spectacle Hematoma, may result. It occurs due to: • A blow to the orbit • Fractured orbital roof • A blow to the forehead • A fall on the vertex . ▪BATTLE'S SIGN • A bruise behind the ear, called Battle's Sign, may result from a fall on the vertex or a fracture of the base of the skull rather than a direct blow behind the ear. • Blow on forehead or fall on vertex – black eye / raccoon eye. • Fracture of head of femur – bruise on lateral aspect of lower thigh. • Blow on outer part of thigh – bruise around knee.
Patterned Bruising • A patterned bruise is one which reflects the pattern of the striking object • Also known as railway track bruise / tram line bruise • Intradermal bruises display a distinct pattern • May tell about the striking object • Vehicular accidents • Tire marks • Muzzle impression • Love bites. Mechanism: Weapon hits the pliable surface → edge of the weapon drags the skin downward → tearing of marginal blood vessels → weapon is removed → oozing of blood gives the railway track appearance; the centre remains undamaged. Age of bruise (changes are seen from periphery to centre) (Colour – Time of injury – Pigment): Red – Fresh – Oxygenated hemoglobin; Red to blue – One day – Deoxygenated hemoglobin; Bluish black to brown – 2–4 days – Hemosiderin; Green – 5–7 days – Hematoidin; Yellow – 7–10 days – Bilirubin; Completely disappears – 14 days / 2 weeks – Normal. Age of Bruise (Microscopic) ❑Microscopic examination (blood pigments) ❑Mechanism: Blood, due to disintegration of RBCs by haemolysis, releases hemoglobin that breaks down into hemosiderin, hematoidin & bilirubin by the action of histiocytes & tissue enzymes. • Hemosiderin within macrophages: not less than 24–48 hours. • Hematoidin within macrophages: not less than 3 days. • Bilirubin, extracellular: not less than 7 days. Antemortem vs. postmortem bruise (Trait – AM Bruise – PM Bruise): Time – During life – Within 2–3 hrs after death; Swelling – Present – Absent; Damage to epithelium – Present – Absent; Size – Proportional to force used – Small; Extravasation of blood – More – Less; Site – Anywhere – Bony prominence; Tissue underlying bruise – Tissue stains are permanent – Stains not permanent; Histological findings – Inflammatory reaction present – Absent; Color changes – Seen – Always dull bluish; Histochemical findings – Enzymatic reactions present – Absent. True bruise vs.
Artificial bruise (Finding – True Bruise – Artificial Bruise): Cause – Blunt force – Juice of a plant; Situation – Any part – On accessible parts; Color – Changes of color – Dark brown; Margins – Less defined – Well defined; Shape – Shape of weapon – Irregular; Swelling, redness & ecchymosis – Present, with slight swelling – Not present; Contents – Blood – Serum; Itching – Absent – Present; Chemical tests – Negative – Positive. Hypostasis vs. Bruise (Feature – Hypostasis – Bruise): Cause – PM changes – Blunt force injury; Collection of blood – Within the vessels – Outside the vessels; Extent – Extensive – Localized; Site – Most dependent part – Anywhere on body; Blanching – Blanching if not fixed – No blanching; Incision – No staining of s/c tissues – Blood-stained s/c tissues; Swelling – Absent – May be present; Epidermis – Not damaged – May be abraded; Area – Merges with surrounding – Clearly defined; Microscopic – RBCs within vessels – RBCs outside vessels. MEDICO-LEGAL SIGNIFICANCE • Indicates offending object (blunt) • Gives an idea about the degree of violence • Time of injury • Motive/purpose of injury • In throttling, pressure of the pads of the fingers indicates homicide • Bruises on back of fingers, hands & forearms (defensive act) • Multiple small bruises on arms just below shoulders (forceful grasping during struggle) . • Suction petechiae: bruising on the cheeks or breasts (sexual intercourse & love bites) • Bruises on the medial aspect of thigh, vulva & around anus indicate forceful sexual intercourse • Bruising of the cervix indicates dilatation of the cervix • Bruising of the buttocks indicates torture • Self-inflicted: artificial bruised area produced by rubbing marking-nut juice, Calotropis, or root of Plumbago over the skin . Bruise Spectacle Bruise Pattern Bruise LACERATION ❑A laceration is a rupture or tear or split in the skin, mucous membrane, muscle, any internal organ or underlying tissues as a result of application of blunt force. ❑CAUSATIVE WEAPONS: Blows with clubs, stones, bricks, punches, kicks, iron bars.
❑Besides blows from blunt objects, lacerations are also caused by: • Falls on hard surfaces • Machinery • Traffic accidents ❑Hemorrhage is less in lacerations. Diagnostic Features • Margins – frequently abraded, irregular, ragged. • Edges – irregular, ragged, inverted, swollen, bruised. • Angles – torn, irregular. • Depth (base) – uneven, non-uniform depth; strands of tissue found bridging/crossing over at varying depths indicate blunt force was used. • Hair bulbs – crushed. • Blood vessels – crushed. • Skin – flapping. • Foreign material usually found. . Classification: Tear – heavy blunt instruments, e.g. hockey stick, cricket bat; skin having a sufficient amount of underlying fat & muscle; ragged and bruised margins. Split / incised-looking lacerated wound – force over bony prominences with less fat & muscle; crushing of the affected tissue between two hard objects, that is, bone & blunt instrument. Stretch – blunt tangential impact causing overstretching of the skin to produce a flap, which remains attached. Cut/Chop – produced by heavy cutting weapons (axe, hatchet, chopper, bush knife etc.) which crush and bruise the margins. Avulsion/flaying – heavy vehicles causing a large area of skin to be avulsed and lost. Incised vs. incised-looking lacerated wound (Criteria – Incised – Incised-looking lacerated): Margins – Clean cut – Ragged and bruised; Edges – No/minimal bruising – Heavily bruised; Blood vessels – Clean cut – Crushed; Tissues at the base – Muscles and nerves clean cut – Continuity observed; Hair – Clean cut – Crushed. True incised vs.
True lacerated (Criteria – True incised – True lacerated): Edges – Clean cut – Ragged; Margins – Bruising absent – Bruised; Injuries to blood vessels and tissues – Clean cut – Crushed; Hair – Clean cut – Crushed; Bleeding – More – Less. Medico-Legal Importance ▪Abrasions, bruises and lacerations are found together ▪Manner of production ▪Shape & size – may not correspond to the weapon used ❖Linear – long thin object ❖Irregular or Y-shaped – object with flat surface ❖Curved – convexity to the direction of force ❖Stellate – blunt round object ❖Crescentic – blunt object with edge ❖Semi-circular – head against hard object ❖Patterned laceration ❖Swallow tail at one end – tearing at an angle . Lacerated Wounds Incised wound ❑An incised wound (cut, slash, slice) is an open wound resulting from a cut or an incision in the skin or underlying tissues, caused by a weapon with a sharp cutting edge when it is drawn across the skin. ❑Causative weapons: • Light cutting weapons – knives, razors, blades, scissors, broken glass pieces • Heavy cutting weapons – daggers, swords, axes, choppers . ❑Characteristics – ❖Length – greatest dimension; no relation to the weapon ❖Width – may be greater than the edge of the weapon due to: • Retraction of tissue • Shaking of the blade ❖Margins – clean cut, well defined, everted ❖Shape – usually spindle shaped; depends on weapon ❖Direction – deep at beginning, shallow at end (tailing) ❖Beveling cut – weapon applied at an angle ❖Hemorrhage – more, because vessels are cleanly cut; spurting on arterial cut . ❑Suicidal: found at certain elective sites ▪Sides + front of neck ▪Front of wrist (radial artery) ▪Front of thigh (femoral) ▪Front of chest (heart).
❑Features: ▪Multiple ▪Superimposed ▪Parallel, of varying depth ▪Found on the side opposite the working hand ▪Hesitation or tentative cuts. Hesitation cuts or tentative cuts • Refers to the preliminary cuts made by a person intending to commit suicide by a cutting instrument before gathering sufficient courage to make the final deep incision. • These are generally small, multiple, superficial cuts found at the commencement of the incised wound & merging with the main incision. Suicidal cut throat vs. homicidal cut throat (Feature – Suicidal – Homicidal): Place – Segregated, lonely – Not necessarily; Scene – Undisturbed – Disturbed; Selection of weapon – Light, sharp edge – Heavy with sharp edge; Presence of weapon at the scene – Present – Usually absent; Clothes – Orderly; blood stains on anterior portions of clothes – Deranged, suggesting scuffle; blood stains on back of neck; Farewell letters – Mostly present – Absent; if present, compare handwriting; Personality trait – Depressed – Normal. . Cadaveric spasm – Hands clenched, holding the instrument – Hands may be clenched, containing belongings of the assailant; Defense wounds – Absent – Present; Distribution of injuries – Confined to certain elective sites (neck) – Additional injuries over the body; wound complex; Site – Left side of neck in a right-handed person, or vice versa – Both sides & midline; Level – Higher level, above the thyroid cartilage – Lower level, below the thyroid cartilage; Tentative cuts – Present at the commencement – Nil; Direction of wound – Obliquely downwards & medially – Transverse, upwards & laterally; Depth of wound – Gradual deepening, shallowing with tailing – Bold deep cut without tailing .
Incised Wound Hesitation Cuts Chop wounds • Chop wounds are deep gaping wounds caused by the sharp splitting edge of a heavy weapon like an axe, sword, or meat cleaver • Margins moderately sharp, with abrasions and bruises • Destruction of underlying tissues and organs • Depth may be the same throughout • Head, face, neck, shoulders and extremities are the most attacked areas • Injuries to underlying bones • Majority homicidal in nature. Stab wound ❑A wound caused by a sharp pointed weapon driven into the body, or when the body is pressed or falls against the instrument, the depth of the wound being its greatest dimension. ❑Weapons such as an arrow, dagger, knife, nail, needle, screwdriver, spear etc. ❑Puncture wound – when a weapon enters the tissues or muscle and terminates; no exit wound. ❑Penetrating wound – when a weapon enters a body cavity and terminates; no exit wound. ❑Perforating wound – when a weapon enters the body (or cavity) and exits (large, inverted & small, everted margins respectively). . ❑Characteristics – ▪Length – may correspond to the width of the weapon ▪Width – may correspond to the thickness of the weapon ▪Depth – may be less than, equal to, or more than the corresponding length of the weapon; greatest dimension ▪Margins – • Sharp-edged weapon – clean-cut margins – incised stab wound • Round-edged weapon – contused & lacerated margins – lacerated stab wound . Weapon – Shape of wound: Single sharp-edged weapon – wedge shape; Double sharp-edged weapon – elliptical shape; Rounded pointed – circular; Pointed square – cruciate; Double-edged blunt – circular with bruising; Instrument twisted before withdrawal – triangular, cruciate, or fish-tail injury. Medico-Legal Importance • A concealed puncture wound indicates homicide • Depth of wound indicates intensity of force • Direction and dimensions of the wound indicate relative positions of assailant and victim • Manner of production can be estimated • Multiplicity of wounds • Shape of wound • Time of attack .
• Hara-kiri is a suicidal stab wound of the abdomen, at one time used as a form of capital punishment. • The victim inflicts a single large wound on the abdomen with a tanto or wakizashi while in a sitting position, or falls forward upon it, and pulls out the intestines. • Excessive bleeding and sudden decrease in intra-abdominal pressure → decreased cardiac return → sudden cardiac collapse → death. • A defense wound is caused as a result of the immediate reaction of the victim to save himself/herself from the attacking weapon, either by raising the arm or by grasping the weapon. • Indicates homicide; the victim was alive & conscious. (Weapon – Type of wound – Active defense wound – Passive defense wound): Blunt – Abrasion/bruise/laceration – Palm of hand, forearm – Dorsum of hand, ulnar border of forearm; Sharp-edged – Incised wound – Palm of hand, forearm – Dorsum of hand, ulnar border of forearm. Fabricated (self-inflicted) injury ❑A wound inflicted on the body, by the person himself or by another, to misguide the investigators with some mala fide intention. ❑Motives: • To bring a charge against a person or to implicate an innocent person in a false charge • To accuse police of maltreatment during custody • A murderer misguiding the investigators that the killing was in self-defense • False charge of rape • To make a simple injury appear more serious • To get leave from service . ❑Elective sites: Accessible/non-vital areas – top of head/forehead, outer side of left arm, front of left forearm, front of chest/abdomen, front & outer part of thigh. ❑Weapons used: Sharp-edged light cutting weapons, chemicals; {blunt weapons, firearms (shotgun) – rarely}. ❑The wound: • Superficial, multiple, made half-heartedly. • Seen on accessible, non-vital, less functional areas. • Caused by light cutting instruments. • Shotguns are used. • A cartridge discharging small pellets is used. • The wound may be incised & pellets placed in manually.
• After X-ray (for the certificate) they are removed. Sources and suggested reading: • Textbook of Forensic Medicine and Toxicology, Anil Aggrawal, APC Publication • Review of Forensic Medicine and Toxicology, Gautam Biswas, Jaypee Publication • The Essentials of Forensic Medicine and Toxicology, Dr. K. S. Narayan Reddy and Dr. O. P. Murthy, The Health Sciences Publisher • Textbook of Forensic Medicine and Toxicology, P. C. Dikshit, Peepee Publications • Research papers • Google images • e-PG Pathshala/INFLIBNET
https://www.youtube.com/watch?v=tUFzOLDuvaE
Approximating Square Roots w/ Newton's Method 0612 TV w/ NERDfirst 61600 subscribers 296 likes Description 23345 views Posted: 27 Jun 2018 (Expand description for Errata!) Computing the Square Root of a number is something that turns out to not be straight forward - School teaches us to do it out of memory, without considering the actual algorithm or steps to achieve it. Today, we look at one algorithm to approximate the square root of a number - Newton's Method! We'll first look at the math behind the technique, before going on to code it in Python! ERRATA: At 10:30, there is a mistake in the equation shown in the yellow box. The sign in the middle is "-" not "+". Thanks to Gil Shapira for pointing out the error! = CODE DOWNLOAD = To view and download the code written in this video, check out the following Bitbucket repository: To download, first click on "Downloads" in the left sidebar. Then, in the subsequent page, click "Download Repository". = Contents = 00:00 Introduction 01:31 Contents Page 02:11 Explanation of the Math → 02:20 Forming the main function f(x) → 03:00 How to use the function representation → 03:23 Overview of what Newton's Method does → 04:10 Injecting our function into Newton's Method → 04:54 Drawing the Tangent Line → 06:58 Using the Tangent Line to find the next guess → 08:16 General Formulation of Newton's Method → 09:35 Applying the Method to Square Roots → 11:07 Conclusion / Summary 11:21 Writing The Code → 11:42 Starting the Function → 12:03 Discussion of Terminating Condition → 12:46 Making the First guess → 13:21 Difference variable for termination → 13:34 Building the main loop / applying Newton's Method → 14:23 Building terminating mechanism → 15:32 Brief Trace of completed algorithm → 16:18 First test → 17:04 Comparing with "ground truth" → 17:34 Changing Error value → 17:50 Adding a counter to see the number of iterations → 19:12 Discussion of verbose version of the code = 0612 TV = 0612 TV, a sub-project of NERDfirst.net, is an 
educational YouTube channel. Started in 2008, we have now covered a wide range of topics, from areas such as Programming, Algorithms and Computing Theories, Computer Graphics, Photography, and Specialized Guides for using software such as FFMPEG, Deshaker, GIMP and more! Enjoy your stay, and don't hesitate to drop me a comment or a personal message to my inbox =) If you like my work, don't forget to subscribe! = NERDfirst = NERDfirst is a project allowing me to go above and beyond YouTube videos into areas like app and game development. It will also contain the official 0612 TV blog and other resources. Watch this space, and keep your eyes peeled on this channel for more updates! Disclaimer: Please note that any information is provided on this channel in good faith, but I cannot guarantee 100% accuracy / correctness on all content. Contributors to this channel are not to be held responsible for any possible outcomes from your use of the information.
Transcript: Introduction Square roots: an overwhelmingly common mathematical operation that, as it turns out, is a little bit hard to actually implement programmatically. We've seen a hint of this previously when we looked at the fast inverse square root function. We saw that it was actually implemented in the Quake engine, and it used a bunch of hacks to achieve this. We concluded then that there wasn't really a straightforward way to achieve the correct result for a square root, and so we had to use some form of approximation. Today we're going to take a look at an approximation technique that was used at the very end of that algorithm, and we'll see how we can build a square root function using just that. You're watching a Random Wednesday episode on 0612 TV. Hello and welcome back to another Random Wednesday episode. Now to start off with, I have to tell you that the first half of this video, give or take, is kind of messy. Yes, we will be looking into the math behind this method, which is known as Newton's method, but ultimately this is still a programming kind of tutorial, so our goal at the end of the day, despite all the math, is to be able to build a function that can help us approximate square roots to whatever level of error we want. So with that, we can have a quick overview of the entire video. Contents Page This video comes to you in two parts. The first half will be, as I've said, pretty messy: we'll try to understand square roots a little bit deeper, and try to understand how Newton's method actually works, what its approach to approximating a square root is, and how we can actually do so. With this in mind, in our second half we'll shift over to writing some code. Essentially we'll be implementing what we've discussed before, and we'll see how we can get it to give us pretty good answers for the square root operation. So with that in mind, let us jump into our first part, where we take a look at the
math behind it. Explanation of the Math At the end of today we want to get the square root of a number. Let's keep things simple: let's say the number is nine, so we'll use Forming the main function f(x) nine throughout, and when we have an idea of where we're going with this, we'll replace it with an unknown to make it more general, more dynamic. Now let the answer we're looking for be x, so x is the square root of 9. We can treat this as equivalent to x squared equals 9, and if we rearrange the equation we end up with a quadratic equation. Getting things in this form is important, because then we can think of it as a function: we could plot it out, and essentially the roots of the function would be the square roots of the number we're looking for. In this case, if we were to just plot out this graph, How to use the function representation you will see that we have an intersection here at position 3. That is the answer: the square root of 9 is 3. So what we've just done is we've taken a basic square operation and we've pushed and prodded until it became a function, and once we have things in this form we can start to apply Newton's method to it. Overview of what Newton's Method does Newton's method allows you to make an approximation for the roots of an equation by essentially letting you make a guess. It doesn't have to be accurate at all, but what you can do is look for the tangent line at that point: that is, a straight line describing the slope of that part of the curve. You take that tangent line, draw it out, and extend it until it cuts the x-axis. That point of intersection will be closer to the root than your previous guess, so you can use that x value as your new guess and repeat this process. Every time you do this, every time you draw that tangent line and let it intersect the x-axis, you get closer and closer to the actual root. That is the
idea. Injecting our function into Newton's Method Now, the good thing about what we've just done is that we have the function itself already. All we have to do next is to make a guess. Once we've made a guess, we need to find the tangent at that point. This is where things get a little bit more complex, because to get a tangent you need the gradient at the point, that is, the slope at the current position, and in order to get that you have to differentiate the original equation. Now, I get it, calculus is scary, but this is just a simple quadratic equation with a constant subtracted, so hopefully the differentiation isn't too difficult: essentially you just end up with 2x, and you discard the constant. Drawing the Tangent Line That derivative gives us the gradient of the tangent line, which is something that's very important to us. In fact, with this gradient we are on our way to actually drawing out our tangent line. Before we go on, let's give a value to the guess that we've made up to this point. Let's say our guess was 4. Given the derivative we have here, plugging in the number 4 tells us that the slope, the gradient of our tangent line, must be 8. This puts us on our way to drawing our tangent line, but if we just say y equals 8x, that is going to be a line with the correct gradient but positioned wrongly. That is the green line in this picture, and as you can see, it's sort of just hanging out in the middle. We want to get it to stick to our actual guess point on the curve, so we need to do some things to push it into position. First, let's push it towards the right by subtracting a value from x. That's just how graphs work: if you want to push a graph forward by 4 positions, you have to subtract 4. You can think of this as essentially delaying the zero point of the line. The reason why we use 4 and not any other number is because that is our guess. We guessed 4,
so obviously the gradient we're looking at must be at position x equals 4. So we push it to the right. It's still not aligned to the actual curve itself, and that's not surprising, because we need to push it up as well. The question is how far up. Luckily that's not too difficult: it needs to go just far enough to touch the curve, and we happen to know the equation of the curve. So we substitute the value of 4 in here, and this gives us a vertical bias of 7. We've integrated that into our equation as well, and as you can see, this line touches the main curve perfectly, giving us the tangent line at that point. Using the Tangent Line to find the next guess But why go to all this trouble? Why do we need the equation of this line? You see, we need it because ultimately we are trying to find the point at which the tangent line cuts the x-axis. With the equation, finding this point of intersection is extremely easy, because we essentially want to find the value of x when the value of y is 0. That's what's happening here: this point has an unknown x, but we know y, and that is 0. So all we need to do is substitute 0 for y. This gives us an equation we can solve for x, and at the end of the day we get a new guess, x equals 3.125, which of course is a much closer value than 4. We know the actual root is 3, so this is definitely a far better answer than 4. Of course, if this looks like a lot of steps to you, I don't blame you. We clearly don't want to stop here and go through all this work just to get one guess. Luckily, as it turns out, we don't have to: this process is essentially the same no matter what value you give it. General Formulation of Newton's Method To see this, we can represent our equation in a general form. This is as general as it gets: if we don't think of
our functions in terms of their actual values, we can think of them as just a function f. You can actually use this to approximate the roots of many types of functions, and as it turns out, the overall form is exactly the same. Remember, this is just our tangent line equation, so the three parts we just looked at are still there. Firstly, our gradient is here: this is of course just the derivative, the differentiated version of the original function. We have our x bias right here, as well as our y bias, which happens to be the function itself. So this is just the generalized form of the equation. Now remember, we want to find the x-intercept, so we simply substitute 0 here. We get this, and what we can do is rearrange our equation so that it looks something like this: our next guess is simply our current guess used in a formula like this. This is a very general form, and in fact this is the one-line definition of Newton's method in a nutshell. But this entire thing is supposed to boil down to Applying the Method to Square Roots square roots, so let's bring it back to talk about square roots itself. Now we have things expressed in terms of f(x) and f'(x), and remember, in the context of square roots that is simply these two equations. Previously we had our value fixed at nine, so whatever we did, we ended up looking for the roots of nine. Let's make this a little more flexible by changing that to a, and now, whatever value you plug in for a, we will be finding the roots for that number. So this is the general form of our two equations. Since this is what Newton's method says we need to do, we can simply substitute these two things in, and we will end up with a version of Newton's method customized for finding square roots. That, as it turns out, looks something like this: x_n again refers to our current guess, and x_(n+1) will be our next, improved guess. We've substituted f and f
prime of x over here. So we can substitute our current guess in these three places; that gives us a better guess, which we can plug back in; that gives us an even better guess, and so on. We can do this iteratively, and we'll get better and better results. Obviously we've got to stop somewhere, and that is a decision we have to make when it comes to our programming stage. Conclusion / Summary So hopefully the ideas here aren't too difficult to understand. At the end of the day, all we are doing is trying to get closer and closer to the original root by drawing out a tangent line from our current guess. With this in mind, we can move on Writing The Code to our code. As it turns out, the coding part is pretty simple, because we just take what we've just done, which is already a step-by-step algorithm, and turn it into code. Simple as that. We're going to be doing today's exercise in Python, and you'll see it's not too difficult. Starting the Function Let us build an approximate square root function. It takes in a number, and it's going to give you an approximate square root. Essentially we need to run Newton's method over and over again until we get close enough, until we get a square root value that is within a certain amount of error. Discussion of Terminating Condition There are many ways of approaching this, but what I'm going to do is just set an error parameter here. The idea is that if, between iterations, we get a difference that is less than a certain amount, we'll consider it good enough. There are pros and cons to using this method, and there are of course other methods as well, but we'll stick with this since it's fairly easy to understand. We'll give this a default value; let's keep it small and say we want five decimal places of precision. As long as the error is smaller than this amount, we'll be okay. Of course, this being a parameter, it can be overridden
by the user if required. Making the First guess So essentially we start to guess. Now, our first guess is also something that isn't cast in stone. If you cast your mind back to the fast inverse square root function, it used a pretty hacky method to find what is essentially a first guess, one considered very much good enough. In our case we're not going to do that; instead, we're going to start our guess at just the value that was given. Clearly that's a horrible guess, but you'll find that using Newton's method we'll zoom in on a correct answer pretty quickly. Difference variable for termination We will also calculate a difference. This difference is essentially what we're going to be matching against the error we have up here. For now, let's just make the difference a very big number. Building the main loop / applying Newton's Method Then we can say: while the difference is greater than the error we've set here, we'll keep going. We of course need a loop so we can keep iteratively improving our guess until it meets the level of error that we're okay with. With this, we can immediately implement Newton's method: the new guess is simply the existing guess, subtract guess squared minus the value that you're actually looking for (the original value that you want to take the root of), divided by the derivative, which in this case is 2 times the guess. So this is Newton's method in one line. In fact, this is all of Newton's method; everything else is just there to support this. Building terminating mechanism Anyway, with this in place, what we can do is calculate the error. The diff here is essentially the new guess minus the previous guess. Now, this number could be positive or negative, because when you're doing the approximation you could overshoot to the other side; we're not sure which of these numbers is actually bigger. So essentially we want to take the
absolute value of the difference: the error is always just the magnitude. One way you can do this is to use the absolute-value function from the math library, but if you don't want to do that, you can also use an if statement: if diff is less than 0, diff times-equals negative 1. What we're saying here is that if the difference is negative, we multiply it by negative 1, flipping it to a positive number. Once we get past this point, our error is definitely positive. We only have one more step to do, and that is, in order to carry our guesses forward correctly, we say guess equals new guess. Brief Trace of completed algorithm The way this is going to work is that it's going to keep running, generating new and far improved guesses. Each time we get a guess, we check and see how much difference there actually was, then we update our guess to the new guess. We loop around here and check to see if the difference is still greater than the error. The idea is that if the difference is still too big, in other words bigger than this value (or whatever value was supplied here), we'll continue through the loop to refine our guess. Otherwise we're basically done, and we can jump out. Once we're out, we can simply return our guess, and this is considered as good a guess as we can make. First test Let us now try to run our function. Let's print out the approximate square root of nine, since that was the example we were using. We run this, and the answer we get is three. That's exactly the result we're looking for, which is of course great. Let's try a bigger number and see what happens: 1024. When we run this, we get 32 point a bunch of zeros and then an 8 at the back. This is considered a pretty good approximation, and given the amount of error we've set, this is fine. Of course, the actual value is 32. If we try to look for
a number whose root isn't a whole number, let's go for a random one: 427. Let's see what happens. When we run this, we get an answer, but we have no clue how close this answer actually is. Comparing with "ground truth" So what I'm going to do is come up here, import math, and check this value against the actual answer, using Python's own square root function, which I'm going to assume is ground truth. Let's compare these two values and... they look exactly the same. I'm guessing this is somewhat coincidental, since of course we didn't ask for this many decimal places' worth of precision, but things seem to be fairly okay. Changing Error value While we are on this subject, we can of course take a look at how we can adjust the error. If I make the error larger here, then we'll get a slightly less precise answer; you will see, based on what we have, that the answer is still pretty close. Adding a counter to see the number of iterations We can also quickly assess how much work this actually requires. Since this is an iterative process, it of course takes up time, and the way we can measure that is to maintain a count. Let's start it off at zero, and every iteration we'll increase the count by one. Then at the end of the day, let's just print it out: print "approximation in this number of guesses". So what I'm doing is just showing the count on screen. Oh, a typo; let's fix that real quick. So for that value, that's how many guesses we took. Not a great number, but our CPUs are fairly fast, so that shouldn't be an issue. Of course, if we want things to a greater level of precision, let's throw in more zeroes, then we take more guesses, though in this case it's just one more guess. We can also force this to be a lot more precise... that's still 9 guesses, so not too bad. You
get the idea. This is a very simple way in which we can do a square root approximation using Newton's method. This piece of code will be available in the video description; of course I'll annotate it with a few more comments to make it a little more readable for you, but essentially that's as simple as it gets. Discussion of verbose version of the code I'll also upload a different piece of code that is interactive and gives you a little bit more analytics, to show you in better terms how the approximation refines itself over the iterations. Both of these downloads can be found in the video description, so do pop it open and take a look. And there you have it: you have just written a square root function that can give you an approximation to a certain number of decimal places. Like we were talking about in the fast inverse square root video, we've never really given the procedure for getting square roots any thought. Usually, if we have to square root a number by hand, it's a number that we are already very familiar with, but the truth is we could square root any old number, and usually we just go to a calculator. We never really have a chance to think about the actual implementation of this, and now that we've seen it, hopefully we can have a better appreciation of what our devices are doing to give us the answer. Obviously most devices are not going to go through such a long iterative process; they may have some other way to make things a little bit faster, but if you wanted to figure out the answer yourself, Newton's method is a method you could use, and we've just seen the code to make that happen. That's all there is for this episode. Thank you very much for watching. I hope you've gained some insight today. Until next time, you're watching 0612 TV with NERDfirst. Thank you very much for watching. If you like my work and are feeling generous, you can send me a one-time
donation on PayPal, or sign up for a recurring one on Patreon. Of course, you can simply like, comment, and subscribe; you know the deal. For more videos, links to my channel and related playlists are on screen. Thank you for your support.
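The function walked through in the transcript can be sketched in Python as follows. This is my own reconstruction from the walkthrough, not the author's downloadable code; the name `approx_sqrt` is mine. It assumes a positive input (a guess of 0 would divide by zero).

```python
def approx_sqrt(value, error=0.00001):
    """Approximate sqrt(value) with Newton's method on f(x) = x^2 - value."""
    guess = value            # first guess: just the input itself
    diff = float("inf")      # start big so the loop runs at least once
    while diff > error:
        # Newton's method in one line: x_{n+1} = x_n - (x_n^2 - a) / (2 x_n)
        new_guess = guess - (guess * guess - value) / (2 * guess)
        diff = new_guess - guess
        if diff < 0:         # flip a negative difference instead of using abs()
            diff *= -1
        guess = new_guess
    return guess

# One step of the worked example: a guess of 4 for sqrt(9) improves to
# 4 - (16 - 9)/8 = 3.125, and further iterations converge towards 3.
print(approx_sqrt(9))     # approximately 3.0
print(approx_sqrt(1024))  # approximately 32.0
```

The termination test compares successive guesses rather than the true error, which matches the video's design choice: it is simple, and Newton's quadratic convergence makes the two nearly identical once the guess is close.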
190637
https://www.webqc.org/balanced-equation-H2SO4+NaOH=Na2SO4+H2O
Printed from Balance Chemical Equation - Online Balancer

Balanced equation: H2SO4 + 2 NaOH = Na2SO4 + 2 H2O
Reaction type: double replacement

Reaction stoichiometry (units: molar mass - g/mol, weight - g; enter moles or weight for one compound to compute the rest):

| Compound | Coefficient | Molar Mass | Moles | Weight |
| --- | --- | --- | --- | --- |
| H2SO4 (reagent) | 1 | 98.08 | | |
| NaOH (reagent) | 2 | 40.00 | | |
| Na2SO4 (product) | 1 | 142.04 | | |
| H2O (product) | 2 | 18.02 | | |

Word equation: Sulfuric acid + 2 Sodium hydroxide = Sodium sulfate + 2 Water

Full ionic equation: 2 H{+} + SO4{-2} + 2 Na{+} + 2 OH{-} = 2 Na{+} + SO4{-2} + 2 H2O

Balancing step by step using the inspection method:
First, we set all coefficients to 1: 1 H2SO4 + 1 NaOH = 1 Na2SO4 + 1 H2O. For each element, we check whether the number of atoms is balanced on both sides of the equation.
S is balanced: 1 atom in reagents and 1 atom in products.
Na is not balanced: 1 atom in reagents and 2 atoms in products. To balance Na on both sides, we multiply the coefficient for NaOH by 2: 1 H2SO4 + 2 NaOH = 1 Na2SO4 + 1 H2O.
H is not balanced: 4 atoms in reagents and 2 atoms in products. To balance H on both sides, we multiply the coefficient for H2O by 2: 1 H2SO4 + 2 NaOH = 1 Na2SO4 + 2 H2O.
O is balanced: 6 atoms in reagents and 6 atoms in products. All atoms are now balanced, and the whole equation is fully balanced: H2SO4 + 2 NaOH = Na2SO4 + 2 H2O.

Balancing step by step using the algebraic method:
First, we set all coefficients to variables a, b, c, d: a H2SO4 + b NaOH = c Na2SO4 + d H2O. Now we write down an algebraic equation to balance each atom:
H: 2a + b = 2d
S: a = c
O: 4a + b = 4c + d
Na: b = 2c
Now we assign a = 1 and solve the system of linear equations, arriving at a = 1, b = 2, c = 1, d = 2. These are already integers (to get integer coefficients we multiply all variables by 1), so we substitute them into the original equation and arrive at the fully balanced equation: H2SO4 + 2 NaOH = Na2SO4 + 2 H2O.

Instructions on balancing chemical equations: Enter an equation of a chemical reaction and click 'Balance'. The answer will appear below. Always use upper case for the first character in an element name and lower case for the second character: Fe, Au, Co, Br, C, O, N, F. Compare: Co (cobalt) and CO (carbon monoxide). To enter an electron into a chemical equation, use {-} or e. To enter an ion, specify the charge after the compound in curly brackets: {+3} or {3+} or {3}. Example: Fe{3+} + I{-} = Fe{2+} + I2. Substitute immutable groups in chemical compounds to avoid ambiguity; for instance the equation C6H5C2H5 + O2 = C6H5OH + CO2 + H2O will not be balanced, but PhC2H5 + O2 = PhOH + CO2 + H2O will. Compound states [like (s), (aq) or (g)] are not required. If you do not know what the products are, enter the reagents only and click 'Balance'; in many cases a complete equation will be suggested. Reaction stoichiometry can be computed for a balanced equation: enter either the number of moles or the weight for one of the compounds to compute the rest. The limiting reagent can be computed for a balanced equation by entering the number of moles or weight for all reagents; the limiting reagent row will be highlighted in pink.
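As a sanity check, the balanced result above can be verified by counting atoms on each side programmatically. A minimal sketch (the formula dictionaries are hand-written here, not parsed from the formulas):

```python
from collections import Counter

def atom_count(side):
    """Sum up atoms over (coefficient, formula-as-dict) pairs."""
    total = Counter()
    for coeff, atoms in side:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# H2SO4 + 2 NaOH = Na2SO4 + 2 H2O
reagents = [(1, {"H": 2, "S": 1, "O": 4}), (2, {"Na": 1, "O": 1, "H": 1})]
products = [(1, {"Na": 2, "S": 1, "O": 4}), (2, {"H": 2, "O": 1})]

# Balanced exactly when both sides have identical atom counts
print(atom_count(reagents) == atom_count(products))  # True
```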
Examples of complete chemical equations to balance:
Fe + Cl2 = FeCl3
KMnO4 + HCl = KCl + MnCl2 + H2O + Cl2
K4Fe(CN)6 + H2SO4 + H2O = K2SO4 + FeSO4 + (NH4)2SO4 + CO
C6H5COOH + O2 = CO2 + H2O
K4Fe(CN)6 + KMnO4 + H2SO4 = KHSO4 + Fe2(SO4)3 + MnSO4 + HNO3 + CO2 + H2O
Cr2O7{-2} + H{+} + {-} = Cr{+3} + H2O
S{-2} + I2 = I{-} + S
PhCH3 + KMnO4 + H2SO4 = PhCOOH + K2SO4 + MnSO4 + H2O
CuSO4·5H2O = CuSO4 + H2O
calcium hydroxide + carbon dioxide = calcium carbonate + water
sulfur + ozone = sulfur dioxide
Examples of chemical equation reagents (a complete equation will be suggested):
H2SO4 + K4Fe(CN)6 + KMnO4
Ca(OH)2 + H3PO4
Na2S2O3 + I2
C8H18 + O2
hydrogen + oxygen
propane + oxygen
Understanding chemical equations A chemical equation represents a chemical reaction. It shows the reactants (substances that start a reaction) and products (substances formed by the reaction). For example, in the reaction of hydrogen (H₂) with oxygen (O₂) to form water (H₂O), the chemical equation is: H2 + O2 = H2O However, this equation isn't balanced, because the number of atoms of each element is not the same on both sides of the equation. A balanced equation obeys the Law of Conservation of Mass, which states that matter is neither created nor destroyed in a chemical reaction. Balancing with inspection or trial and error method This is the most straightforward method. It involves looking at the equation and adjusting the coefficients to get the same number of each type of atom on both sides of the equation. Best for: Simple equations with a small number of atoms. Process: Start with the most complex molecule or the one with the most elements, and adjust the coefficients of the reactants and products until the equation is balanced. Example: H2 + O2 = H2O 1. Count the number of H and O atoms on both sides. There are 2 H atoms on the left and 2 H atoms on the right. There are 2 O atoms on the left and 1 O atom on the right. 2.
Balance the oxygen atoms by placing a coefficient of 2 in front of H2O: H2 + O2 = 2H2O 3. Now, there are 4 H atoms on the right side, so we adjust the left side to match: 2H2 + O2 = 2H2O 4. Check the balance. Now, both sides have 4 H atoms and 2 O atoms. The equation is balanced. Balancing with algebraic method This method uses algebraic equations to find the correct coefficients. Each molecule's coefficient is represented by a variable (like x, y, z), and a series of equations is set up based on the number of each type of atom. Best for: Equations that are more complex and not easily balanced by inspection. Process: Assign variables to each coefficient, write equations for each element, and then solve the system of equations to find the values of the variables. Example: C2H6 + O2 = CO2 + H2O 1. Assign variables to coefficients: a C2H6 + b O2 = c CO2 + d H2O 2. Write down equations based on atom conservation: C: 2a = c; H: 6a = 2d; O: 2b = 2c + d 3. Assign one of the coefficients to 1 and solve the system: a = 1, c = 2a = 2, d = 6a / 2 = 3, b = (2c + d) / 2 = (2·2 + 3) / 2 = 3.5 4. Adjust the coefficients to make sure all of them are integers. b = 3.5, so we multiply all coefficients by 2 to arrive at the balanced equation with integer coefficients: 2 C2H6 + 7 O2 = 4 CO2 + 6 H2O Balancing with oxidation number method Useful for redox reactions, this method involves balancing the equation based on the change in oxidation numbers. Best for: Redox reactions where electron transfer occurs. Process: identify the oxidation numbers, determine the changes in oxidation state, balance the atoms that change their oxidation state, and then balance the remaining atoms and charges. Example: Ca + P = Ca3P2 1. Assign oxidation numbers: Calcium (Ca) has an oxidation number of 0 in its elemental form. Phosphorus (P) also has an oxidation number of 0 in its elemental form. In Ca3P2, calcium has an oxidation number of +2, and phosphorus has an oxidation number of -3. 2.
Identify the changes in oxidation numbers: Calcium goes from 0 to +2, losing 2 electrons (oxidation). Phosphorus goes from 0 to -3, gaining 3 electrons (reduction). 3. Balance the changes using electrons: Multiply the number of calcium atoms by 3 and the number of phosphorus atoms by 2. 4. Write the balanced equation: 3 Ca + 2 P = Ca3P2 Balancing with ion-electron half-reaction method This method separates the reaction into two half-reactions – one for oxidation and one for reduction. Each half-reaction is balanced separately and then combined. Best for: complex redox reactions, especially in acidic or basic solutions. Process: split the reaction into two half-reactions, balance the atoms and charges in each half-reaction, and then combine the half-reactions, ensuring that electrons are balanced. Example: Cu + HNO3 = Cu(NO3)2 + NO2 + H2O 1. Write down and balance the half reactions: Cu = Cu{2+} + 2{e} and H{+} + HNO3 + {e} = NO2 + H2O 2. Combine the half reactions to balance electrons. To accomplish that, we multiply the second half reaction by 2 and add it to the first one: Cu + 2H{+} + 2HNO3 + 2{e} = Cu{2+} + 2NO2 + 2H2O + 2{e} 3. Cancel out the electrons on both sides and add NO3{-} ions. H{+} with NO3{-} makes HNO3, and Cu{2+} with NO3{-} makes Cu(NO3)2: Cu + 4HNO3 = Cu(NO3)2 + 2NO2 + 2H2O
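The algebraic method's arithmetic for the C2H6 + O2 example above can be carried out with exact rational numbers; a sketch under the same a = 1 assignment (variable names follow the example):

```python
from fractions import Fraction
from math import lcm

# a C2H6 + b O2 = c CO2 + d H2O, setting a = 1
a = Fraction(1)
c = 2 * a              # carbon:   2a = c
d = 6 * a / 2          # hydrogen: 6a = 2d
b = (2 * c + d) / 2    # oxygen:   2b = 2c + d

# Clear denominators to get integer coefficients (here b = 7/2, so multiply by 2)
m = lcm(*(x.denominator for x in (a, b, c, d)))
coeffs = [int(x * m) for x in (a, b, c, d)]
print(coeffs)  # [2, 7, 4, 6] -> 2 C2H6 + 7 O2 = 4 CO2 + 6 H2O
```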
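The electron bookkeeping in the oxidation-number example (Ca + P = Ca3P2) amounts to a least-common-multiple computation, sketched here with variable names of my own:

```python
from math import lcm

electrons_lost_per_Ca = 2   # Ca: 0 -> +2 (oxidation)
electrons_gained_per_P = 3  # P:  0 -> -3 (reduction)

# Total electrons transferred must match on both sides
transferred = lcm(electrons_lost_per_Ca, electrons_gained_per_P)  # 6
ca_coeff = transferred // electrons_lost_per_Ca   # 3
p_coeff = transferred // electrons_gained_per_P   # 2
print(f"{ca_coeff} Ca + {p_coeff} P = Ca3P2")  # 3 Ca + 2 P = Ca3P2
```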
190638
https://artofproblemsolving.com/wiki/index.php/Factor_Theorem?srsltid=AfmBOorMTOiBby9d6e6ucw5EKgtWYG3aIrzjKiLwj7hCGQw7BNv25AAs
Factor Theorem - AoPS Wiki

Factor Theorem

In algebra, the Factor Theorem is a theorem regarding the relationship between the factors of a polynomial and its roots. One of its most important applications is that if you are given that a polynomial has certain roots, you know certain linear factors of the polynomial. Thus, you can test whether a linear factor is a factor of a polynomial without using polynomial division, by instead plugging in numbers. Conversely, you can determine whether a value $P(a)$ (where $a$ is a constant and $P$ is a polynomial) is $0$ using polynomial division rather than plugging in large values.

Contents: 1 Statement; 2 Proof; 3 Problems (3.1 Introductory, 3.2 Intermediate, 3.3 Olympiad); 4 See Also

Statement

The Factor Theorem says that if $P(x)$ is a polynomial, then $x - a$ is a factor of $P(x)$ if and only if $P(a) = 0$.

Proof

If $x - a$ is a factor of $P(x)$, then $P(x) = (x - a)Q(x)$, where $Q(x)$ is a polynomial. Then $P(a) = (a - a)Q(a) = 0$.
Now suppose that $P(a) = 0$. Apply the Remainder Theorem to get $P(x) = (x - a)Q(x) + R(x)$, where $Q(x)$ is a polynomial with $\deg Q = \deg P - 1$ and $R(x)$ is the remainder polynomial with $\deg R < \deg(x - a) = 1$. This means that $R(x)$ can be at most a constant polynomial. Substitute $x = a$ and get $0 = P(a) = (a - a)Q(a) + R(a) = R(a)$. Since $R(x)$ is a constant polynomial, $R(x) = 0$ for all $x$. Therefore, $P(x) = (x - a)Q(x)$, which shows that $x - a$ is a factor of $P(x)$.

Problems

Here are some problems that can be solved using the Factor Theorem:

Introductory

Intermediate

Suppose is a -degree polynomial. The Fundamental Theorem of Algebra tells us that there are roots, say . Suppose all integers ranging from to satisfies . Also, suppose that for an integer . If is the minimum possible positive integral value of . Find the number of factors of the prime in . (Source: I made it.)

Olympiad

If $P(x)$ denotes a polynomial of degree $n$ such that $P(k) = \frac{k}{k+1}$ for $k = 0, 1, 2, \ldots, n$, determine $P(n+1)$. (Source: 1975 USAMO Problem 3)

See Also

Polynomials
Remainder Theorem
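As a quick illustration of the "plug in numbers instead of dividing" idea, here is a short Python sketch. It is an editorial example, not part of the wiki article; `poly_eval` and `is_factor` are ad hoc helper names chosen for the illustration.

```python
def poly_eval(coeffs, x):
    """Evaluate P(x) from coefficients [a_n, ..., a_1, a_0] by Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def is_factor(coeffs, a):
    """Factor Theorem: (x - a) divides P(x) if and only if P(a) = 0."""
    return poly_eval(coeffs, a) == 0

# P(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
P = [1, -6, 11, -6]
print(is_factor(P, 2))  # True:  x - 2 is a factor, since P(2) = 0
print(is_factor(P, 5))  # False: P(5) = 24, which is nonzero
```

Horner's rule here is the same computation as synthetic division by $x - a$, so the intermediate values of `result` are exactly the coefficients of the quotient $Q(x)$.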
190639
https://mathsupto12th.quora.com/What-are-7-rules-of-exponents
What are 7 rules of exponents? - Maths upto 12th - Quora

Maths upto 12th
In this page I am giving the solutions of all maths questions, up to 12th class.

What are 7 rules of exponents?

4 Answers

Sanjeev Saini, Studied Mathematics (Graduated 2011) · 2y

1. Product Rule: a^m · a^n = a^(m + n)
2. Quotient Rule: a^m / a^n = a^(m - n)
3. Power Rule: (a^m)^n = a^(m · n)
4. Negative Exponent Rule: a^(-m) = 1 / a^m
5. Zero Exponent Rule: a^0 = 1 (where a is not equal to 0)
6. One Exponent Rule: a^1 = a
7. Fractional Exponent Rule: a^(m/n) = nth root of (a^m) (where n is not equal to 0 and m is any integer)

MATHEMATICS SOLUTIONS, passionate student of mathematics sharing Q&A · 2y

The rules of exponents, also known as the laws of exponents, are essential mathematical principles that govern the manipulation and simplification of expressions involving exponents. Here are the seven fundamental rules of exponents:

Product Rule: a^m · a^n = a^(m+n). When multiplying two terms with the same base, add the exponents.

Quotient Rule: a^m / a^n = a^(m-n). When dividing two terms with the same base, subtract the exponents.

Power Rule: (a^m)^n = a^(m·n). When raising an exponent to another exponent, multiply the exponents.
Zero Exponent Rule: a^0 = 1. Any non-zero number raised to the power of zero is equal to 1.

Negative Exponent Rule: a^(-n) = 1 / a^n. A negative exponent indicates the reciprocal of the number raised to the positive exponent.

Product of Powers Rule: a^m · b^m = (a · b)^m. When multiplying two terms with the same exponent, raise the product of the bases to the same exponent.

Quotient of Powers Rule: a^m / b^m = (a / b)^m. When dividing two terms with the same exponent, raise the quotient of the bases to the same exponent.

These rules are fundamental for simplifying expressions involving exponents, solving equations, and manipulating various mathematical expressions. By applying these rules, you can handle complex exponent expressions more easily and efficiently.

Terry Moore, M.Sc. in Mathematics, University of Southampton (Graduated 1968) · 2y

What are 7 rules of exponents? I have no idea how many there are. Let's just list a few and see if we reach [math]7[/math].

[math](ab)^x=a^xb^x[/math], [math]\left(\frac ab\right)^x=\frac{a^x}{b^x}[/math]
[math]a^{x+y}=a^xa^y[/math], [math]a^{x-y}=\frac{a^x}{a^y}[/math]
[math]a^{xy}=\left(a^x\right)^y[/math], [math]a^{-x}=\frac1{a^x}[/math]
[math]a^{m/n}=\sqrt[n]{a^m}=\left(\sqrt[n]a\right)^m[/math]

That sort of makes [math]4[/math], because the first two are essentially the same, and so are the second pair and the third pair.
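The exponent rules listed in the answers above can be checked numerically. This short Python sketch was added for illustration and is not part of the original page; the sample values of a, b, m, and n are arbitrary choices.

```python
import math

a, b = 3.0, 5.0   # sample non-zero bases
m, n = 4, 2       # sample exponents

assert math.isclose(a**m * a**n, a**(m + n))      # product rule
assert math.isclose(a**m / a**n, a**(m - n))      # quotient rule
assert math.isclose((a**m)**n, a**(m * n))        # power rule
assert math.isclose(a**(-m), 1 / a**m)            # negative exponent rule
assert a**0 == 1                                  # zero exponent rule
assert a**1 == a                                  # one exponent rule
assert math.isclose(a**(m / n), (a**m)**(1 / n))  # fractional exponent rule
assert math.isclose(a**m * b**m, (a * b)**m)      # product of powers rule
assert math.isclose(a**m / b**m, (a / b)**m)      # quotient of powers rule
print("all exponent identities hold")
```

`math.isclose` is used rather than `==` because floating-point powers can differ in the last bit even when the identities hold exactly over the reals.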
190640
https://www.wordreference.com/definition/thrive
thrive

Inflections of 'thrive' (v): (⇒ conjugate)
thrives: v, 3rd person singular
thriving: v, present participle
thrived: v, past
thrived: v, past participle

WordReference Random House Learner's Dictionary of American English © 2025
thrive /θraɪv/ v., thrived or throve /θroʊv/, thrived or thriv•en /ˈθrɪvən/, thriv•ing.
1. to prosper; be successful: [no object] The business is thriving.
2. to grow or develop well; flourish: [no object] The plants will thrive in such a climate. [~ + on + object] Do you thrive on such challenges?

WordReference Random House Unabridged Dictionary of American English © 2025
thrive (thrīv), v.i., thrived or throve, thrived or thriv•en (thriv′ən), thriv•ing.
1. to prosper; be fortunate or successful.
2. to grow or develop vigorously; flourish: The children thrived in the country.
Etymology: Middle English thriven (1150–1200), from Old Norse thrīfast "to thrive," reflexive of thrīfa "to grasp."
thriv′er, n.; thriv′ing•ly, adv.
Synonym: 1. advance. See succeed.

Collins Concise English Dictionary © HarperCollins Publishers:
thrive /θraɪv/ vb (thrives, thriving, thrived or throve, thrived or thriven /ˈθrɪvən/) (intransitive)
1. to grow strongly and vigorously
2. to do well; prosper
Etymology: 13th Century: from Old Norse thrīfask "to grasp for oneself," reflexive of thrīfa "to grasp," of obscure origin

'thrive' also found in these entries (note: many are not synonyms or translations): batten, do for, euthenics, flourish, prosper, thrift, throve, abetalipoproteinemia, advance, bloom, blossom, boom, dow, luxuriate, vigor, saltbush, succeed

In Lists: Top 2000 English words, Gardens (US), Family
Synonyms: flourish, mushroom, burgeon, shoot up, prosper
Collocations: the [business, civilization, industry] has thrived; [business] is thriving; thrive in [the wild, the summer, the United States, extreme conditions]
Copyright © 2025 WordReference.com
190641
https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/dobutamine
Chapters and Articles

Autonomic Nervous System Pharmacology

Dobutamine

Dobutamine is a synthetic catecholamine obtained by substitution of a bulky aromatic group on the side chain of dopamine. Dobutamine is a racemic mixture of the (+) and (−) isomers. The (−) isomer acts on α1-adrenergic receptors and increases vascular resistance, and the (+) isomer is a potent β1-adrenergic receptor agonist and a potent α1-adrenergic receptor antagonist that blocks the effects of (−) dobutamine (see Table 14.2). Compared with dopamine, dobutamine has less notable venoconstriction and is less likely to increase HR and more likely to decrease pulmonary vascular resistance. The most prominent effects with increasing infusion rates of dobutamine (2–20 µg/kg per minute intravenously) are a progressive increase in cardiac output, a decrease in left ventricular filling pressure, minor increases in HR until higher doses are given, and decreases or no change in systemic vascular resistance. However, when dobutamine is given to patients using β-blockers, systemic vascular resistance can increase, leading to increases in BP from the unmasked α1-effect. Dobutamine has minimal β2 effects. Thus it often improves cardiac output without major adverse effects on the myocardial oxygen supply/demand ratio because afterload is maintained, thereby improving coronary blood flow.46 It enhances automaticity of the sinus and atrioventricular nodes and facilitates intraventricular conduction. It does not affect dopamine receptors. Dobutamine is prepared in 5% dextrose in water because it is inactivated in alkaline solutions. Tachyphylaxis can occur with infusions longer than 72 hours. Dobutamine is often used for nonexercise cardiac stress testing and for the treatment of acute heart failure, especially in patients being weaned from cardiopulmonary bypass.
(Book 2019, Pharmacology and Physiology for Anesthesia (Second Edition), Thomas J. Ebert)

Chapter: What Vasopressor Agent Should Be Used in the Septic Patient? (2010, Evidence-Based Practice of Critical Care, Gráinne McDermott, Patrick J. Neligan)

Dobutamine

Dobutamine is a potent β1-adrenergic receptor agonist, with predominant effects in the heart, where it increases myocardial contractility and thus stroke volume and cardiac output. Dobutamine is associated with much less increase in heart rate than dopamine. In sepsis, dobutamine, although a vasodilator, increases oxygen delivery and consumption. Dobutamine appears particularly effective in splanchnic resuscitation, increasing pHi (gastric mucosal pH) and improving mucosal perfusion in comparison with dopamine.23 As part of an early goal-directed resuscitation protocol that combined close medical and nursing attention with aggressive fluid and blood administration, dobutamine was associated with a significant absolute reduction in the risk for mortality. This study, however, looked at early (hypovolemic) rather than late (vasoplegic) sepsis.5 By and large, dobutamine, when administered in late-stage sepsis, is used as an adjunct agent to drive up splanchnic blood flow or increase stroke volume. For example, Levy and colleagues24 compared the combination of norepinephrine and dobutamine to epinephrine in septic shock; this was a physiologic study. After 6 hours, the use of epinephrine was associated with an increase in lactate levels (from 3.1 ± 1.5 to 5.9 ± 1.0 mmol/L; P < .01), whereas lactate levels decreased in the norepinephrine-dobutamine group (from 3.1 ± 1.5 to 2.7 ± 1.0 mmol/L). The lactate-to-pyruvate ratio increased in the epinephrine group (from 15.5 ± 5.4 to 21 ± 5.8; P < .01) and did not change in the norepinephrine-dobutamine group (13.8 ± 5 to 14 ± 5.0).
Gastric mucosal pH (pHi) decreased (from 7.29 ± 0.11 to 7.16 ± 0.07; P < .01), and the partial pressure of carbon dioxide (Pco2) gap (tonometer Pco2 − arterial Pco2) increased (from 10 ± 2.7 to 14 ± 2.7 mm Hg; P < .01) in the epinephrine group. In the norepinephrine-dobutamine group, pHi (from 7.30 ± 0.11 to 7.35 ± 0.07) and the Pco2 gap (from 10 ± 3 to 4 ± 2 mm Hg) were normalized within 6 hours (P < .01). Thus, compared with epinephrine, dobutamine and norepinephrine were associated, presumably, with better splanchnic blood flow and a reduction in catecholamine-driven lactate production. Whether this is of clinical significance is unclear. Moreover, the decrease in pHi and the increase in the lactate-to-pyruvate ratio in the epinephrine group returned to normal within 24 hours. The serum lactate level normalized in 7 hours. View chapterExplore book Read full chapter URL: Book2010, Evidence-Based Practice of Critical CareGráinne McDermott, Patrick J. Neligan Chapter Discontinuing Cardiopulmonary Bypass 2018, Kaplan's Essentials of Cardiac Anesthesia (Second Edition)Liem Nguyen MD, ... Joel A. Kaplan MD, CPE, FACC Dobutamine Dobutamine is a synthetic catecholamine that displays a strong affinity for the β-receptor and results in dose-dependent increases in CO and HR, as well as reductions in diastolic filling pressures. Administration of dobutamine in cardiac surgical patients produced a marked increase in CI and HR in several studies. In patients with the LCOS, dobutamine resulted in an increase in HR in excess of 25% and a significant concomitant decrease in SVR. The effects of epinephrine (0.03 µg/kg per min) were compared with those of dobutamine (5 µg/kg per min) in 52 patients recovering from CABG procedures. Both drugs significantly and similarly increased SV index (SVI), but epinephrine increased the HR by only 2 beats/minute, whereas dobutamine increased the HR by 16 beats/minute. 
In addition to increasing contractility, dobutamine may have favorable metabolic effects on ischemic myocardium. Intravenous and intracoronary injections of dobutamine increased coronary blood flow in animal studies. In paced cardiac surgical patients, dopamine increased oxygen demand without increasing oxygen supply, whereas dobutamine increased myocardial oxygen uptake and coronary blood flow. However, because increases in HR are a major determinant of Mv̇O2, these favorable effects of dobutamine could be lost if dobutamine induces tachycardia. During dobutamine stress echocardiography, segmental wall motion abnormalities suggestive of myocardial ischemia can result from tachycardia and increases in Mv̇O2. View chapterExplore book Read full chapter URL: Book2018, Kaplan's Essentials of Cardiac Anesthesia (Second Edition)Liem Nguyen MD, ... Joel A. Kaplan MD, CPE, FACC Chapter Vasopressors and Inotropes 2019, Pharmacology and Physiology for Anesthesia (Second Edition)Josh Zimmerman, ... Michael Cahalan Dobutamine Dobutamine is a direct-acting synthetic catecholamine and is the drug of choice for the noninvasive assessment of coronary disease (dobutamine stress echocardiography). Dobutamine is also used for short-term treatment of congestive heart failure and for low cardiac output after cardiopulmonary bypass. 
In patients with chronic low output cardiac failure, dobutamine was superior to dopamine in its ability to increase cardiac output without untoward side effects.44 Similarly, it is superior to dopamine in managing hemodynamically unstable patients after cardiac surgery, reducing cardiac filling pressures and PVR with less tachycardia.39,43 Compared with milrinone after cardiac surgery, dobutamine was "comparable," producing a greater increase in cardiac output, blood pressure, and heart rate but with a higher incidence of arrhythmias.53 In patients with congestive heart failure, the principal effect of dobutamine is an increase in myocardial contractility and ventricular ejection mediated by its β1 effects. In contrast to epinephrine or dopamine, dobutamine generally reduces SVR by a combination of direct vasodilation and a reflex decrease in sympathetic vascular tone. This might be offset by the increase in cardiac output, leading to no change or a decrease in MAP. Dobutamine generally decreases cardiac filling pressures and PVR. Dobutamine has a variable effect on heart rate, but it can significantly increase heart rate (particularly at the higher concentrations used in stress echocardiography).8,54 After cardiopulmonary bypass the primary mechanism of increased cardiac output by dobutamine is an increase in heart rate (approximately 1.4 beats/min per microgram per kilogram per minute) with an increase in SVR.55 The contrasting results of these studies reflect dobutamine's complex mechanisms of action, particularly with regard to the balance of α1 stimulation and inhibition by its isomers, as well as patient factors. Dobutamine can produce tachycardia, arrhythmias, and hypertension. Dobutamine can exacerbate myocardial ischemia in susceptible patients by increases in heart rate and contractility. (Book 2019, Pharmacology and Physiology for Anesthesia (Second Edition), Josh Zimmerman, ..., Michael Cahalan)

Chapter: Inotropes in Heart Failure (2018, Encyclopedia of Cardiovascular Research and Medicine, M. Ginwalla, C. Bianco)

Dobutamine

Dobutamine is a synthetic catecholamine that exists as a racemic mixture. FDA approved for clinical use in 1978, dobutamine has served as the most commonly employed inotrope for several decades. At typical doses, the predominant hemodynamic effects of increased contractility and mild peripheral vasodilation are due to a strong affinity for the beta-1 and beta-2 receptors, which are bound in a 3:1 ratio (Overgaard and Dzavík, 2008). The negative enantiomer of the racemic mixture leads to vasoconstriction via alpha-1 antagonism; however, at usual doses this appears to be more than counterbalanced by beta-2 stimulation, leading to a mild net reduction in systemic vascular resistance. At doses greater than 15 mcg/kg/min, vasoconstriction becomes more prominent, leading to increased systemic vascular resistance (Ruffolo, 1987). PVR response is variable, but overall a mild decrease can be expected within the lower dose range, while moderate to high doses result in little appreciable PVR change (Kerbaul et al., 2004; Bradford et al., 2000; Romson et al., 1999). Unlike milrinone, dobutamine is known to increase myocardial oxygen consumption (Grose et al., 1986). Tachyphylaxis also occurs with infusions longer than 48 h. Electrophysiologic changes associated with dobutamine include increased sinoatrial node automaticity and decreased atrial and AV node refractoriness, as well as decreased AV node conduction time (Tisdale et al., 1995a). Patients taking chronic beta-blocker therapy may have an attenuated response to dobutamine until the beta-blocker has been metabolized. Limited data in compensated individuals suggest that concomitant dobutamine and metoprolol administration can lead to a favorable but blunted hemodynamic response.
Concomitant dobutamine and carvedilol does not result in favorable hemodynamics and is associated with elevated filling pressures, mean pulmonary artery pressure, systemic vascular resistance, and PVR (Metra et al., 2002). The manufacturer-recommended starting dose is 0.5–1 mcg/kg/min; however, initiation at 2.5 mcg/kg/min is commonly employed. Current ACC/AHA guidelines recommend a maintenance dose of 2.5–20 mcg/kg/min. Onset of action is between 1 and 10 min, with peak effect in 10–20 min. Unlike milrinone, dobutamine has an extremely short elimination half-life of 2 min, and no dose adjustments are necessary in patients with kidney disease. The predominant route of metabolism is tissue methylation via catechol-O-methyltransferase (COMT) and monoamine oxidase (MAO), as well as hepatic conjugation (Yan et al., 2002). Caution should be exercised in patients concurrently receiving MAO inhibitors or tricyclic antidepressants, as paradoxical, prolonged hypertensive episodes may result. Notable drug interactions leading to exaggerated hypertensive reactions also include linezolid and atomoxetine. Arrhythmias are the major life-threatening side effects of dobutamine. Ventricular ectopy is observed in at least 5% of patients during short-term use. Dose-dependent sinus tachycardia is common, and approximately 10% of adult patients in clinical studies have rate increases of 30 beats/minute or more. Increased frequency of atrial fibrillation and worsening control over ventricular response are not uncommon. Headache is reported in 1%–3% of patients. Less common side effects include hypotension, dyspnea, paresthesias, thrombocytopenia, nausea, and leg cramps. Fever can occur in 1%–3% of patients, while infrequent peripheral eosinophilia and rash have also been reported. Rare allergic reactions ranging from mild bronchospasm to anaphylaxis can ensue due to the sulfite component of drug preparation (Lexicomp Online Dobutamine, 2016).
Although thought rare, about 2%–8% of patients receiving prolonged dobutamine infusions while waitlisted for heart transplant have evidence of eosinophilic myocarditis on explantation (Takkenberg et al., 2004; Yoshizawa et al., 2013). View chapterExplore book Read full chapter URL: Reference work2018, Encyclopedia of Cardiovascular Research and MedicineM. Ginwalla, C. Bianco Chapter Pharmacology of the Cardiovascular System 2011, Pediatric Critical Care (Fourth Edition)Marc G. Sturgill, ... Daniel A. Notterman Summary Dobutamine is a positive inotropic agent that should be reserved to treat poor myocardial contractility. Following cardiac surgery, dobutamine may be used when contractility is abnormal. For septic shock and other acute hemodynamic disturbances, dobutamine is an adjunct when the primary problem is complicated by poor myocardial function (see Table 25-6). In this context, concomitant use of a vasopressor such as norepinephrine may be appropriate. View chapterExplore book Read full chapter URL: Book2011, Pediatric Critical Care (Fourth Edition)Marc G. Sturgill, ... Daniel A. Notterman Chapter PHYSIOLOGY OF THE NEWBORN 2010, Ashcraft's Pediatric Surgery (Fifth Edition)Mara Antonoff MD, ... Daniel Saltzman MD, PhD Dobutamine Dobutamine, a synthetic catecholamine, has predominantly β-adrenergic effects with minimal α-adrenergic effects. The hemodynamic effect of dobutamine in infants and children with shock has been studied.64 Dobutamine infusion significantly increased cardiac index, stroke index, and pulmonary capillary wedge pressure, and it decreased systemic vascular resistance. The drug appears more efficacious in treating cardiogenic shock than septic shock. The advantage of dobutamine over isoproterenol is its lesser chronotropic effect and its tendency to maintain systemic pressure. The advantage over dopamine is dobutamine’s lesser peripheral vasoconstrictor effect. The usual range of dosages for dobutamine is 2 to 15 μg/kg/min. 
One study found dobutamine significantly increased systemic blood flow in preterm infants when compared with dopamine. However, it did not demonstrate differences in outcomes.65,66 The combination of dopamine and dobutamine has been increasingly used. However, little information regarding their combined advantages or effectiveness in pediatric patients has been published. View chapterExplore book Read full chapter URL: Book2010, Ashcraft's Pediatric Surgery (Fifth Edition)Mara Antonoff MD, ... Daniel Saltzman MD, PhD Chapter Clinical Presentations of Neonatal Shock 2012, Hemodynamics and Cardiology: Neonatology Questions and Controversies (Second Edition)Martin Kluckow PhD, MBBS, FRACP, Istvan Seri MD, PhD, HonD Dobutamine Dobutamine is an inotropic synthetic sympathomimetic amine, which has complex cardiovascular actions, increasing myocardial contractility via stimulation of the myocardial adrenergic receptors.127 In addition, it exerts a variable peripheral vasodilatory effect via the stimulation of the peripheral cardiovascular beta-adrenergic receptors.128 In contrast to dopamine, dobutamine does not rely on the release of endogenous catecholamines for its positive inotropic action.119 Although it also has some stimulatory effect on peripheral cardiovascular alpha-adrenergic receptors, its affinity to the peripheral cardiovascular beta-adrenergic receptors is higher. Due to these complex actions, the most frequently seen net cardiovascular effects of dobutamine are an increase in myocardial contractility and a variable degree of peripheral vasodilation.128 These effects are present even in the preterm neonate and make dobutamine particularly suited to treatment of hypotension in neonates with associated myocardial dysfunction and low cardiac output.55,129,130 Figure 12-7 illustrates the maturation-dependent cardiovascular actions of dobutamine. 
Cardiovascular response to dobutamine has been demonstrated via left ventricular performance at doses as low as 5 mcg/kg/min and increases in cardiac output and systemic blood flow at doses of 10-20 mcg/kg/minute.55,118,131 Cardiovascular side effects of dobutamine administration include tachycardia, undesirable decreases in blood pressure due to the drug's potential peripheral vasodilatory effects and, at higher doses, dobutamine may impair diastolic performance and thus compromise preload. This latter action is caused by a drug-induced decrease in the compliance of the myocardium as a result of a significant increase in the myocardial tone. Dobutamine has been compared with dopamine in several randomized trials but, as with dopamine, has never been subjected to trial against placebo or no treatment in newborns. Systematic review of five randomized trials found that dopamine is better than dobutamine at increasing blood pressure in hypotensive preterm infants, but to date has not been better at improving clinical outcomes (including PIVH and PVL) in hypotensive preterm infants.132 In contrast, in a randomized clinical trial using a crossover design in infants with low systemic blood flow in the first postnatal day, dobutamine administered at 10 and 20 mcg/kg/min was more effective at increasing blood flow than dopamine given at the same two doses.118 Similarly, infants who received dobutamine as treatment for hypotension in another randomized trial were more likely to increase their cardiac output than infants who received dopamine, while dopamine was more effective at increasing blood pressure.55 These findings can be explained by the differences in the effects on peripheral vascular resistance between the two drugs with dopamine significantly increasing SVR while dobutamine has little or even a decreasing effect on SVR. 
An understanding of the mechanisms of action of both of these drugs and their effects on the various vascular beds, as well as of the underlying pathogenesis of shock in the preterm infant, is of utmost importance to guide the treatment of any cardiovascular compromise. (Book 2012, Hemodynamics and Cardiology: Neonatology Questions and Controversies (Second Edition), Martin Kluckow PhD, MBBS, FRACP, Istvan Seri MD, PhD, HonD)

Chapter: Dobutamine (2007, xPharm: The Comprehensive Pharmacology Reference, Sara Mraz, Boyd Rorabaugh)

Introduction

Dobutamine is a synthetic catecholamine with activity at both alpha and beta adrenoceptors. Dobutamine is primarily used as an inotropic agent for short-term treatment of heart failure. The inotropic effect of dobutamine in the heart is similar to that of isoproterenol. However, dobutamine has a much smaller chronotropic effect compared to isoproterenol. Unlike other beta adrenoceptor agonists, dobutamine has little effect on peripheral vascular resistance. Ruffolo (1987) proposed that this occurs because the vasoconstricting action of dobutamine at vascular alpha-1 adrenoceptors in some blood vessels compensates for the vasodilating effect of dobutamine at vascular beta-2 adrenoceptors in other blood vessels. (Reference work 2007, xPharm: The Comprehensive Pharmacology Reference, Sara Mraz, Boyd Rorabaugh)

Chapter: Inotropic and Vasoactive Agents in the Cardiac Intensive Care Unit (2010, Cardiac Intensive Care (Second Edition), Andreia Biolo, ..., Michael M. Givertz)

Dobutamine

Dobutamine is a direct-acting synthetic sympathomimetic amine that stimulates β1-, β2-, and α-adrenergic receptors (see Table 38-2).
Clinically, it is available as a racemic mixture in which the (+) enantiomer is both a β1- and β2-adrenergic receptor agonist and an α-adrenergic receptor competitive antagonist, and the (−) enantiomer is a potent β1-adrenergic receptor agonist and an α-adrenergic receptor partial agonist.13,14 The net effect of this pharmacologic profile is that dobutamine causes a relatively selective stimulation of β1-adrenergic receptors, and accordingly, dobutamine's primary cardiovascular effect is to increase cardiac output by increasing myocardial contractility. This positive inotropic effect is associated with relatively little increase in heart rate. The drug causes modest decreases in left ventricular filling pressure and systemic vascular resistance due to a combination of direct vascular effects and the withdrawal of sympathetic tone15 (see Table 38-3). Dobutamine also directly improves left ventricular relaxation (positive lusitropic effect) via stimulation of myocardial β-adrenergic receptors.16 Dobutamine has no effect on dopaminergic receptors and therefore no direct renal vasodilator effect. However, renal blood flow often increases with dobutamine in proportion to the increase in cardiac output. Dobutamine is a valuable agent for the initial management of patients with acute or chronic systolic heart failure characterized by a low cardiac output.17 It is often initiated at an infusion rate of 2 μg/kg/min (without a loading dose) and titrated upward by 1 to 2 μg/kg/min every 15 to 30 minutes until the hemodynamic goal is reached or a dose-limiting event, such as unacceptable tachycardia or arrhythmias, occurs. Maximum effects are usually achieved at a dose of 10 to 15 μg/kg/min, although higher infusion rates may occasionally be used. In patients with more severe decompensation, and presumably greater β-adrenergic receptor downregulation, dobutamine can be started at 5 μg/kg/min. 
If the maximum tolerated infusion rate of dobutamine does not result in a sufficient increase in cardiac index, a second drug (e.g., milrinone) may be added [8,18]. In patients with elevated systemic vascular resistance and/or left heart filling pressures, the co-administration of a vasodilator such as nitroprusside or nitroglycerin may be required. In patients who remain hypotensive on dobutamine, consideration should be given to the addition of a pressor dose of dopamine and/or the use of mechanical circulatory support. Other clinical situations in which dobutamine is effective include cardiogenic shock complicating acute myocardial infarction, low cardiac output following cardiopulmonary bypass, and as a "bridge" to cardiac transplantation [19]. There is some evidence that short-term or intermittent infusions of dobutamine can result in sustained improvement in hemodynamics and functional status for days or weeks after the infusion is stopped [20-22]. However, the limited clinical data available suggest that the intermittent use of dobutamine either has no effect on outcomes [23] or may increase mortality [24]. As a result, the administration of dobutamine should be limited to the inpatient setting. Dobutamine may increase heart rate, thereby limiting the dose that can be infused; however, in some patients with very depressed cardiac output the improvement in hemodynamic function may cause a withdrawal of sympathetic tone such that heart rate falls. Hypotension is uncommon but can occur in patients who are hypovolemic. Arrhythmias, including supraventricular and ventricular tachycardia, may limit the dose. Likewise, myocardial ischemia secondary to increased myocardial oxygen consumption may occur. Some patients with chronic severe heart failure may be tolerant to dobutamine, or tolerance to dobutamine may develop after several days of a continuous infusion [25]. In this situation, the addition or substitution of a phosphodiesterase inhibitor may be helpful.
Hypersensitivity myocarditis has also been reported with chronic infusions of dobutamine and should be suspected if a patient develops worsening hemodynamics or peripheral eosinophilia.
190642
https://www.pythonfordatascience.org/anova-python/
Python for Data Science

Table of contents: Introduction; Assumptions & Hypotheses; One-way ANOVA with Python; ... using scipy.stats; ... using StatsModels; Assumption Check; Post-hoc Testing; References

One-way ANOVA

If you are looking for how to run the code, jump to the next section; if you would like some theory or a refresher, start with this section, or see a publicly available peer-reviewed article such as this one.

ANOVA stands for "Analysis of Variance" and is an omnibus test, meaning it tests for a difference overall between all groups. The one-way ANOVA, also referred to as a one-factor ANOVA, is a parametric test used to test for a statistically significant difference in an outcome between 3 or more groups. Since it is an omnibus test, it tests for a difference overall, i.e. whether at least one of the groups is statistically significantly different from the others. However, a significant ANOVA cannot tell you which group is different; in order to tell which groups differ, one has to conduct planned or post-hoc comparisons. As with all parametric tests, there are certain conditions that need to be met in order for the test results to be considered reliable.

The reason it's called a one-way or one-factor ANOVA, even though there are 3 or more groups being tested, is that those groups fall under one categorical variable, such as race or education level; the name refers to the number of variables in the analysis, not the number of groups. If two variables are being compared, it would technically be called a two-way, or two-factor, ANOVA if both variables are categorical, or an ANCOVA if the 2nd variable is continuous. The "C" doesn't stand for continuous; it stands for covariate. When working within the ANOVA framework, independent variables are sometimes referred to as factors, and the number of groups within each variable are called levels, i.e. one variable with 3 categories could be referred to as a factor with 3 levels.
Parametric test assumptions

- Population distributions are normal
- Samples have equal variances
- Independence

Hypotheses

H0: μ1 = μ2 = ... = μk (all group means are equal)
Ha: at least one group mean differs from the others

The test statistic is the F-statistic, which compares the mean square between samples (MS_between) to the mean square within samples (MS_within). The F-statistic can be calculated using the following formula:

    F = MS_between / MS_within = (SS_between / (k − 1)) / (SS_within / (N − k))

where k is the number of groups and N is the total number of observations, and where:

    Sum of squares between samples: SS_between = Σ_i n_i (x̄_i − x̄)²
    Sum of squares within samples:  SS_within = Σ_i Σ_j (x_ij − x̄_i)²

SS_within can also be calculated as TSS − SS_between, where TSS = Σ_i Σ_j (x_ij − x̄)² is the total sum of squares.

One rejects the null hypothesis, H0, if the computed F-statistic is greater than the critical F-statistic. The critical F-statistic is determined by the degrees of freedom (k − 1 and N − k) and the alpha, α, value:

    Reject H0 if F > F(α; k − 1, N − k)

Before the decision is made to accept or reject the null hypothesis, the assumptions need to be checked. See this page on how to check the parametric assumptions in detail - how to check the assumptions for this example will be demonstrated near the end.

Let's make sense of all these mathematical terms. In order to do that, let's start with a generic ANOVA table filled in with symbols, and the data set used in this example.

ANOVA Table

| Source | Sum of Squares | Degrees of Freedom | Mean Square | F-statistic |
| Between samples | SS_between | k − 1 | MS_between = SS_between / (k − 1) | F = MS_between / MS_within |
| Within samples | SS_within | N − k | MS_within = SS_within / (N − k) | |
| Total | TSS | N − 1 | | |

Note: TSS means total sum of squares.

Data Table

| Drug Dose | Libido scores | Sample Size | Sample Mean | Sample Variance |
| Placebo | 3, 2, 1, 1, 4 | 5 | 2.2 | 1.7 |
| Low | 5, 2, 4, 2, 3 | 5 | 3.2 | 1.7 |
| High | 7, 4, 5, 3, 6 | 5 | 5.0 | 2.5 |
| Total | | 15 | 3.5 | 3.1 |

Now using the formulas from above, the ANOVA table can be filled in.
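Before reading the filled-in table, the sums of squares and the F-statistic defined above can be cross-checked directly in plain Python using the libido data (a quick sketch; no statistics library required):

```python
# Compute SS_between, SS_within, and F by hand for the libido data above.
groups = {
    'placebo': [3, 2, 1, 1, 4],
    'low':     [5, 2, 4, 2, 3],
    'high':    [7, 4, 5, 3, 6],
}

all_obs = [x for g in groups.values() for x in g]
grand_mean = sum(all_obs) / len(all_obs)      # overall mean, about 3.47
k, N = len(groups), len(all_obs)              # 3 groups, 15 observations

# Sum of squares between samples: n_i * (group mean - grand mean)^2
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())

# Sum of squares within samples: (x - group mean)^2 summed over each group
ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values())

msb = ssb / (k - 1)   # mean square between
msw = ssw / (N - k)   # mean square within
F = msb / msw

print(round(ssb, 2), round(ssw, 2), round(F, 2))  # 20.13 23.6 5.12
```

These match the StatsModels output shown later on this page (sum_sq 20.133 and 23.6, F = 5.1186).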
ANOVA Table

| Source | Sum of Squares | Degrees of Freedom | Mean Square | F-statistic |
| Between samples | 20.13 | 2 | 10.07 | 5.12 |
| Within samples | 23.60 | 12 | 1.97 | |
| Total | 43.73 | 14 | | |

In order to tell if the calculated F-statistic is statistically significant, one would look up the critical F-statistic based on the degrees of freedom and alpha level - using statistical software this doesn't need to be done, since the p-value will be provided. Fear not if math is not your strong suit: all of this is calculated for you when using the methods of a statistical software package or programming language. It's still good to know what is going on behind the scenes. References for this section are provided at the end of the page.

One-way ANOVA with Python

Don't forget to check the assumptions before interpreting the results! First to load the libraries and data needed. Below, Pandas, Researchpy, and the data set will be loaded. The specific libraries for each method demonstrated below will be imported where that method is used.

```
import pandas as pd
import researchpy as rp
```

Now to load the data set and take a high-level look at the variables.

```
df = pd.read_csv("
df.drop('person', axis= 1, inplace= True)

# Recoding value from numeric to string
df['dose'].replace({1: 'placebo', 2: 'low', 3: 'high'}, inplace= True)

df.info()
```

RangeIndex: 15 entries, 0 to 14
Data columns (total 2 columns):
dose      15 non-null object
libido    15 non-null int64
dtypes: int64(1), object(1)
memory usage: 320.0+ bytes

```
rp.summary_cont(df['libido'])
```

| | Variable | N | Mean | SD | SE | 95% Conf. | Interval |
| 0 | libido | 15.0 | 3.466667 | 1.76743 | 0.456349 | 2.487896 | 4.445437 |

```
rp.summary_cont(df['libido'].groupby(df['dose']))
```

| dose | N | Mean | SD | SE | 95% Conf. | Interval |
| high | 5 | 5.0 | 1.581139 | 0.707107 | 3.450484 | 6.549516 |
| low | 5 | 3.2 | 1.303840 | 0.583095 | 1.922236 | 4.477764 |
| placebo | 5 | 2.2 | 1.303840 | 0.583095 | 0.922236 | 3.477764 |

One-way ANOVA using scipy.stats

Conducting a one-way ANOVA using scipy.stats is quick and only returns the resulting F-statistic and p-value of the test.

```
import scipy.stats as stats

stats.f_oneway(df['libido'][df['dose'] == 'high'],
               df['libido'][df['dose'] == 'low'],
               df['libido'][df['dose'] == 'placebo'])
```

F_onewayResult(statistic=5.11864406779661, pvalue=0.024694289538222603)

Before the results are interpreted, the assumptions of the test should be checked. For example purposes, the results will be interpreted before checking the assumptions.

Interpretation

A new medication was developed to increase the libido of those who take the medication. The purpose of this study was to test for a difference between the dosage levels. The overall average libido was 3.5, 95% CI (2.5, 4.4), with group averages of 2.2, 95% CI (0.9, 3.5), for the placebo group; 3.2, 95% CI (1.9, 4.5), for the low dose group; and 5.0, 95% CI (3.5, 6.5), for the high dose group. There is a statistically significant difference between the groups and their effect on libido, F = 5.12, p-value = 0.0247.

One-way ANOVA using StatsModels

This method conducts a one-way ANOVA in two steps:

1. Fit the model using an estimation method. The default estimation method in most statistical software packages is ordinary least squares (OLS). We are not going to dive into estimation methods, as that is out of scope for this section's topic; if you are not familiar with them and don't care to dive in, just know that OLS is one of many types of estimation methods that aim to provide estimates of the parameter (mean, proportion, etc.) being tested.
2. Pass the fitted model into the ANOVA method to produce the ANOVA table.

Here is the official StatsModels documentation on an ANOVA.
The general structure for entering the equation is:

```
ols("outcome_variable ~ independent_variable", data= data_frame).fit()
```

In the case of an ANOVA, the independent variable will be categorical. The pseudo code above would work if you were conducting a simple linear regression, but that's not what we are here for! We have to modify the pseudo code so the variable is treated as categorical, which makes it look like:

```
ols("outcome_variable ~ C(independent_variable)", data= data_frame).fit()
```

Now to use real code. In the code below there is an argument "typ" in the anova_lm method; this determines how the sum of squares is calculated. The calculation differences are a bit out of scope here, but it's encouraged to learn more about them. An easy-to-read primer can be found here. Additionally, to see how to conduct an ANOVA with type 3 sums of squares see this page - it requires one additional step.

```
import statsmodels.api as sm
from statsmodels.formula.api import ols

model = ols('libido ~ C(dose)', data=df).fit()
aov_table = sm.stats.anova_lm(model, typ=2)
aov_table
```

| | sum_sq | df | F | PR(>F) |
| C(dose) | 20.133333 | 2.0 | 5.118644 | 0.024694 |
| Residual | 23.600000 | 12.0 | NaN | NaN |

Note: C(dose) = between samples and Residual = within samples.

This table provides all the information one needs in order to interpret whether the results are significant; however, it does not provide any effect size measures to tell if the statistical significance is meaningful. The function below calculates eta-squared (η²) and omega-squared (ω²). A quick note: η² is the exact same thing as R², except that when coming from the ANOVA framework people call it η²; ω² is considered a better measure of effect size, since it is unbiased in its calculation by accounting for the degrees of freedom in the model.
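As a quick arithmetic check of the two effect sizes just described, the values can be computed directly from the sum-of-squares output above (sum_sq 20.1333 and 23.6, with 2 and 12 degrees of freedom):

```python
# Direct calculation of eta-squared and omega-squared from the ANOVA table.
ss_between, df_between = 20.133333, 2
ss_within, df_within = 23.6, 12

ms_within = ss_within / df_within     # residual mean square
ss_total = ss_between + ss_within     # total sum of squares

eta_sq = ss_between / ss_total
omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)

print(round(eta_sq, 3), round(omega_sq, 3))  # 0.46 0.354
```

These match the eta_sq and omega_sq columns produced by the anova_table function defined next.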
""" The function below was created specifically for the one-way ANOVA table results returned for Type II sum of squares """ def anova_table(aov): aov['mean_sq'] = aov[:]['sum_sq']/aov[:]['df'] aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq']) aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']aov['mean_sq'][-1]))/(sum(aov['sum_sq'])+aov['mean_sq'][-1]) cols = ['sum_sq', 'df', 'mean_sq', 'F', 'PR(>F)', 'eta_sq', 'omega_sq'] aov = aov[cols] return aov anova_table(aov_table) | | sum_sq | df | mean_sq | F | PR(>F) | eta_sq | omega_sq | --- --- --- --- | | C(dose) | 20.133333 | 2.0 | 10.066667 | 5.118644 | 0.024694 | 0.460366 | 0.354486 | | Residual | 23.600000 | 12.0 | 1.966667 | NaN | NaN | NaN | NaN | Interpretation A new medication was developed to increase libido. The purpose of this study was to test for a difference between the dosage levels. The overall average libido was 3.5 95% CI(2.5, 4.4) with group averages of 2.2 95% CI(0.9, 3.5) for the placebo group; 3.2 95% CI(1.9, 4.5) for the low dose group; and 5.0 95% CI(3.5, 6.5) for the high dose group. There is a statistically significant difference between the groups and their effects the libido, F= 5.12, p-value= 0.0247, with an overall large effect, = 0.35. In order to tell which groups differed significantly, post-hoc tests need to be conducted. Before one goes through that work, the assumptions should be checked first in case any modifications need to be made to the model. Assumption check The assumptions in this section need to be met in order for the test results to be considered valid. A more in-depth look at parametric assumptions is provided here, which includes some potential remedies. Independence This assumption is tested when the study is designed. What this means is that all groups are mutually exclusive, i.e. an individual can only belong in one group. Also, this means that the data is not repeated measures (not collected through time). In this example, this condition is met. 
Normality

The assumption of normality is tested on the residuals of the model when coming from an ANOVA or regression framework. One method for testing the assumption of normality is the Shapiro-Wilk test. This can be completed using the shapiro() method from scipy.stats; ensure that scipy.stats is imported for the following method to work. Unfortunately the output is not labelled, but it's (W test statistic, p-value).

```
import scipy.stats as stats

stats.shapiro(model.resid)
```

(0.9166916012763977, 0.17146942019462585)

The test is non-significant, W = 0.9167, p = 0.1715, which indicates that the residuals are normally distributed.

Another way to test the assumption is through a visual check - this is helpful when the sample is large. The reason is that as the sample size n increases, the statistical test's ability to reject the null hypothesis increases, i.e. it gains power to detect smaller differences. One method of visually checking the distribution is to use a probability plot, with or without the correlation value, r, to assess the observed values' correlation with the theoretical distribution in question - in the current case the Gaussian (a.k.a. normal) distribution. This can be completed by using the probplot() method from scipy.stats. If using the r measure, one can refer to the NIST/SEMATECH e-Handbook of Statistical Methods to see if the value is significant.

```
import matplotlib.pyplot as plt

fig = plt.figure(figsize= (10, 10))
ax = fig.add_subplot(111)

normality_plot, stat = stats.probplot(model.resid, plot= plt, rvalue= True)
ax.set_title("Probability plot of model residual's", fontsize= 20)

plt.show()
```

This is a case where the statistical testing method indicated the residuals were normally distributed, but the probability plot correlation coefficient (PPCC) indicated non-normality.
Given that the current example's sample size is small, N = 15, that the Shapiro-Wilk test indicated normality, and that the calculated PPCC, r = 0.9349, is ever so slightly smaller than the table PPCC, r = 0.9376, it is reasonable to state this assumption is met. However, looking at the plotted probability plot and the residual structure, it would also be reasonable to transform the data for the analysis, or to use a more robust test such as Welch's ANOVA, or a non-parametric test such as the Kruskal-Wallis one-way ANOVA.

Homogeneity of variance

The final assumption is that all groups have equal variances. One method for testing this assumption is Levene's test of homogeneity of variances. This can be completed using the levene() method from scipy.stats.

```
stats.levene(df['libido'][df['dose'] == 'high'],
             df['libido'][df['dose'] == 'low'],
             df['libido'][df['dose'] == 'placebo'])
```

LeveneResult(statistic=0.11764705882352934, pvalue=0.8900225182757423)

Levene's test of homogeneity of variances is not significant, which indicates that the groups have no statistically significant difference in their variability. Again, it may be worthwhile to check this assumption visually as well.

```
fig = plt.figure(figsize= (10, 10))
ax = fig.add_subplot(111)
ax.set_title("Box Plot of Libido by Dosage", fontsize= 20)

data = [df['libido'][df['dose'] == 'placebo'],
        df['libido'][df['dose'] == 'low'],
        df['libido'][df['dose'] == 'high']]

ax.boxplot(data, labels= ['Placebo', 'Low', 'High'], showmeans= True)

plt.xlabel("Drug Dosage")
plt.ylabel("Libido Score")
plt.show()
```

The graphical check of homogeneity of variances supports the statistical testing findings, which is that the groups have equal variances. By default, box plots show the median (the orange line in the graph above); the green triangle is the mean for each group, shown via an additional argument that was passed into the method. There are different ways to handle heteroskedasticity (unequal variance), and a decision needs to be made.
Some options include, but are not limited to: transforming the dependent variable (outcome), using trimmed means, using robust standard errors, or using a test such as Welch's t-test. For a more in-depth look at the assumptions and some potential remedies, please check out this page.

Post-hoc Testing

Conducting post-hoc tests or planned comparisons allows one to see which group(s) significantly differ from each other; remember that the ANOVA is an omnibus test! There are a few different approaches that can be taken while conducting these tests; the ones currently implemented in StatsModels are:

Tukey Honestly Significant Difference (HSD)
Tests all pairwise group comparisons while controlling for the multiple comparisons, which protects the familywise error rate and protects against making a Type I error. It is not technically a "post-hoc" test, since it can be used independently of the ANOVA and can be planned beforehand. More in-depth information about this statistical method can be found here.

Bonferroni
Tests groups for a difference while controlling for the multiple comparisons, which protects the familywise error rate and protects against making a Type I error. It should be noted that some statistical software reports the Bonferroni-adjusted confidence interval; however, this is not the case in Python at this time (unless one were to program a function to do so). This method is common because it is fast to calculate: divide the initial alpha value, α, by the number of comparisons being made, m. In the current example there are 3 comparisons being made (placebo vs. low, placebo vs. high, and low vs. high) with α = 0.05, making the equation become 0.05 / 3 = 0.0167. Thus, a comparison's p-value would need to be < 0.0167 in order to be considered statistically significant. More in-depth information about this statistical method can be found here.

Šidák (a.k.a.
Dunn-Šidák)
Tests groups for a difference while controlling for the multiple comparisons, which protects the familywise error rate and protects against making a Type I error. It should be noted that some statistical software reports the Šidák-adjusted confidence interval; however, this is not the case in StatsModels at this time (unless one were to program a function to do so). This method is common because it is pretty fast to calculate; the formula is 1 − (1 − α)^(1/m). In the current example there are 3 comparisons being made (placebo vs. low, placebo vs. high, and low vs. high) with α = 0.05, making the equation become 1 − (1 − 0.05)^(1/3) = 0.0170. Thus, a comparison's p-value would need to be < 0.0170 to be considered statistically significant. More in-depth information about this statistical method can be found here.

Tukey Honestly Significant Difference (HSD)

We have to use a library that has not been imported yet; please see the official documentation about this method for more information if interested.

```
import statsmodels.stats.multicomp as mc

comp = mc.MultiComparison(df['libido'], df['dose'])
post_hoc_res = comp.tukeyhsd()
post_hoc_res.summary()
```

Multiple Comparison of Means - Tukey HSD, FWER=0.05

| group1 | group2 | meandiff | p-adj | lower | upper | reject |
| high | low | -1.8 | 0.1472 | -4.1651 | 0.5651 | False |
| high | placebo | -2.8 | 0.0209 | -5.1651 | -0.4349 | True |
| low | placebo | -1.0 | 0.5171 | -3.3651 | 1.3651 | False |

Now to make sense of the table. At the top, the testing information is provided: FWER is the familywise error rate, i.e. what α is being set to and controlled at.
group1 and group2 are the groups being compared.
meandiff is the difference between the group means.
p-adj is the corrected p-value, which takes into account the multiple comparisons being conducted.
lower is the lower band of the confidence interval; in the current example the confidence interval is at the 95% level, since α = 0.05.
upper is the upper band of the confidence interval; in the current example the confidence interval is at the 95% level, since α = 0.05.
reject is the decision rule based on the corrected p-value.

It is possible to plot the differences using this method as well!

```
post_hoc_res.plot_simultaneous(ylabel= "Drug Dose", xlabel= "Score Difference")
```

Using Tukey HSD to test for differences between groups indicates that there is a statistically significant difference in libido score between those who took the placebo and those who took the high dosage of the medication; no other groups differed significantly. This indicates that the high dosage of the medication is effective at increasing libido, but the low dosage is not.

Bonferroni Correction

We have to use a library that has not been imported yet (if you didn't do the Tukey HSD example above); please see the official documentation about this method for more information if interested.

The documentation for allpairtest is not in the best shape at the time of writing this. The method returns 3 objects: one is a completed table object, the second is the data of the table, and the third is the data of the table with the table headings - it is not clear why the developers of StatsModels did this. All that is needed is the first object.

Before jumping into the code, let's take a look at pseudo code to make sense of this method.

```
allpairtest(statistical_test_method, method= "correction_method")
```

The documentation shows one needs to supply this method with a statistical test method, which can either be a user-defined function or a function from another Python library - in this case independent-sample t-tests will be conducted. One also has to state the correction method to be applied to the p-values to adjust for the multiple comparisons taking place. Now to see the method in action.
```
import statsmodels.stats.multicomp as mc

comp = mc.MultiComparison(df['libido'], df['dose'])
tbl, a1, a2 = comp.allpairtest(stats.ttest_ind, method= "bonf")
tbl
```

Test Multiple Comparison ttest_ind FWER=0.05 method=bonf alphacSidak=0.02, alphacBonf=0.017

| group1 | group2 | stat | pval | pval_corr | reject |
| high | low | 1.964 | 0.0851 | 0.2554 | False |
| high | placebo | 3.0551 | 0.0157 | 0.0471 | True |
| low | placebo | 1.2127 | 0.2598 | 0.7795 | False |

Now to make sense of the table. At the top, the testing information is provided: FWER is the familywise error rate, i.e. what α is being set to and controlled at; method is the correction method being applied to the p-values; then the adjusted alphas for both the Šidák and Bonferroni correction methods are shown.
group1 and group2 are the groups being compared.
stat is the test statistic value; in this case it is the t statistic.
pval is the uncorrected p-value returned from the supplied "statistical_test_method".
pval_corr is the corrected p-value, corrected using whichever "correction_method" was supplied.
reject is the decision rule based on the corrected p-value.

Conducting comparisons using the Bonferroni correction indicates that the only groups that differed significantly are those who took the high dose and those who took the placebo.

Šidák Correction (a.k.a. Dunn-Šidák Correction)

We have to use a library that has not been imported yet (if you didn't do the Tukey HSD or Bonferroni examples above); please see the official documentation about this method for more information if interested.

The documentation for allpairtest is not in the best shape at the time of writing this. The method returns 3 objects: one is a completed table object, the second is the data of the table, and the third is the data of the table with the table headings - it is not clear why the developers of StatsModels did this. All that is needed is the first object.
Before jumping into the code, let's take a look at pseudo code to make sense of this method.

```
allpairtest(statistical_test_method, method= "correction_method")
```

The documentation shows one needs to supply this method with a statistical test method, which can either be a user-defined function or a function from another Python library - in this case independent-sample t-tests will be conducted. One also has to state the correction method to be applied to the p-values to adjust for the multiple comparisons taking place. Now to see the method in action.

```
import statsmodels.stats.multicomp as mc

comp = mc.MultiComparison(df['libido'], df['dose'])
tbl, a1, a2 = comp.allpairtest(stats.ttest_ind, method= "sidak")
tbl
```

Test Multiple Comparison ttest_ind FWER=0.05 method=sidak alphacSidak=0.02, alphacBonf=0.017

| group1 | group2 | stat | pval | pval_corr | reject |
| high | low | 1.964 | 0.0851 | 0.2343 | False |
| high | placebo | 3.0551 | 0.0157 | 0.0464 | True |
| low | placebo | 1.2127 | 0.2598 | 0.5945 | False |

Now to make sense of the table. At the top, the testing information is provided: FWER is the familywise error rate, i.e. what α is being set to and controlled at; method is the correction method being applied to the p-values; then the adjusted alphas for both the Šidák and Bonferroni correction methods are shown.
group1 and group2 are the groups being compared.
stat is the test statistic value; in this case it is the t statistic.
pval is the uncorrected p-value returned from the supplied "statistical_test_method".
pval_corr is the corrected p-value, corrected using whichever "correction_method" was supplied.
reject is the decision rule based on the corrected p-value.

Conducting comparisons using the Šidák correction indicates that the only groups that differed significantly are those who took the high dose and those who took the placebo.

References

Kutner, M. H., Nachtsheim, C. J., Neter, J., and Li, W. (2004).
Applied Linear Statistical Models (5th ed.). New York, NY: McGraw-Hill Irwin.

Rosner, B. (2015). Fundamentals of Biostatistics (8th ed.). Boston, MA: Cengage Learning.

Ott, R. L., and Longnecker, M. (2010). An Introduction to Statistical Methods and Data Analysis. Belmont, CA: Brooks/Cole.
190643
https://math.stackexchange.com/questions/4846639/identically-distributed-random-variables-and-events-of-probability-0
Identically distributed random variables and events of probability $0$ - Mathematics Stack Exchange
Asked 1 year, 8 months ago; modified 1 year, 8 months ago; viewed 251 times. Score: 5.

Let $(\Omega,\mathcal{F},P)$ be a probability space and $X_1,\dots,X_{n+1}\colon\Omega\to\mathbb{R}$ be random variables. Suppose that the random variables are identically distributed, i.e.

$$P\circ X_1^{-1}(B)=\dots=P\circ X_{n+1}^{-1}(B),\quad\forall B\in\mathcal{B}(\mathbb{R}).$$

Also suppose that there exist real-valued functions $f\colon\mathbb{R}\to\mathbb{R}$ and $h\colon\mathbb{R}\to\mathbb{R}$ such that

$$(P\circ X_1^{-1})(\{x_1\in\mathbb{R}: f(x_1)\neq h(x_1)\})=P[\{\omega\in\Omega: f(X_1(\omega))\neq h(X_1(\omega))\}]=0.\tag{1}$$

Now consider the remaining random variables as a random vector $X(\omega)=(X_2(\omega),\dots,X_{n+1}(\omega))\colon\Omega\to\mathbb{R}^n$. I want to prove that

$$(P\circ X^{-1})(\{(x_2,\dots,x_{n+1})\in\mathbb{R}^n: f(x_2)\neq h(x_2),\dots,f(x_{n+1})\neq h(x_{n+1})\})=P[\{\omega\in\Omega: f(X_2(\omega))\neq h(X_2(\omega)),\dots,f(X_{n+1}(\omega))\neq h(X_{n+1}(\omega))\}]=0.\tag{2}$$

I don't know whether $(1)$ and $(2)$ are true, but anyway, here is my proof.

My try: Note that

$$P[\{\omega\in\Omega: f(X_2(\omega))\neq h(X_2(\omega)),\dots,f(X_{n+1}(\omega))\neq h(X_{n+1}(\omega))\}]=P[B_2\cap\dots\cap B_{n+1}],$$

where $B_j=\{\omega\in\Omega: f(X_j(\omega))\neq h(X_j(\omega))\}$ and $j=2,\dots,n+1$. We know $P[B_j]=0$, since the random variables are identically distributed, and events of probability $0$ are independent of all other events. So this implies that

$$P[B_2\cap\dots\cap B_{n+1}]=P[B_2]\cdots P[B_{n+1}]=0.$$

Is this reasoning correct? Is it possible to use this line of reasoning with the push-forward measure $P\circ X^{-1}$?
The push-forward measure assigns probability to a vector of real numbers, so I don't know how to describe the correct subset of $\mathbb R^n$ and apply the measure to it.

Tags: probability, probability-theory, measure-theory, random-variables, measurable-functions

Asked Jan 17, 2024 by S.H.W (4,420 rep); edited Jan 20, 2024.

Comments:

- Captuna (Jan 17, 2024): What you did is right. Actually you've used the push-forward measure in the last equality, since $(\mathbb P\circ X^{-1})(B_2\times\cdots\times B_{n+1})=\mathbb P(B_2\cap\cdots\cap B_{n+1})$.
- Psychomath (Jan 20, 2024): In the last equation you don't need independence; just use that the intersection is a subset of any $B_j$, so one $\mathbb P(B_j)=0$ is enough. The rest is correct; it's called the change-of-variables formula (wiki: pushforward measure). For example, write $(1)$ as an integral of an indicator function.
- Michael (Jan 20, 2024): This is a false statement in general, but it becomes true if you assume $f$ and $h$ are measurable functions. In that case the function $g:\mathbb R\to\mathbb R$ given by $g(x)=f(x)-h(x)$ for all $x\in\mathbb R$ is also measurable, and so (since $X_1,\dots,X_n$ are identically distributed) $g(X_1),\dots,g(X_n)$ are identically distributed. So $\mathbb P[g(X_i)=0]=\mathbb P[g(X_1)=0]=1$ for all $i$.
The explanation of why it is false in general takes a bit more work, and it is not clear if you care about the nonmeasurable $f,h$ case. — Michael (Jan 20, 2024)
- Michael (Jan 20, 2024): LNT is correct: if $B,C$ are any events that satisfy $\mathbb P[B]=0$, then $B\cap C\subseteq B\implies\mathbb P[B\cap C]\le\mathbb P[B]=0$.
- Michael (Jan 20, 2024): Yes, your equations $(1)$ and $(2)$ are correct.

1 Answer (score 3; awarded a bounty worth 50 reputation by S.H.W):

This is a false statement in general, but is true if you assume $f$ and $h$ are Borel measurable functions. The LNT comments above are correct if you assume Borel measurability (which is a common assumption). For simplicity, define $g:\mathbb R\to\mathbb R$ by $g(x)=f(x)-h(x)$ for all $x\in\mathbb R$.

Setup: You have identically distributed random variables $X_1,\dots,X_n$ with $n\ge 2$. You have a function $g:\mathbb R\to\mathbb R$ and you are told $\mathbb P[g(X_1)\neq 0]=0$. You want to evaluate $\mathbb P[\cap_{i=2}^n\{g(X_i)\neq 0\}]$.

Case 1 - Suppose $g$ is a Borel measurable function: Then $g(X_1),g(X_2),\dots,g(X_n)$ are identically distributed and so
$$\mathbb P[g(X_i)\neq 0]=\mathbb P[g(X_1)\neq 0]=0\quad\forall i\in\{1,\dots,n\}.$$
As noted in the comments above by LNT (who was implicitly assuming this measurable case), we have
$$\cap_{i=2}^n\{g(X_i)\neq 0\}\subseteq\{g(X_2)\neq 0\}\implies\mathbb P[\cap_{i=2}^n\{g(X_i)\neq 0\}]\le\underbrace{\mathbb P[g(X_2)\neq 0]}_{0}$$
and since probabilities cannot be negative, we obtain the desired result $\mathbb P[\cap_{i=2}^n\{g(X_i)\neq 0\}]=0$.

Case 2 (counter-example when $g$ is not Borel measurable): Fix $n=2$.
Strange fact: It is possible to have two random variables $X_1:\Omega\to[0,1]$ and $X_2:\Omega\to[0,1]$ on the same probability space $(\Omega,\mathcal F,\mathbb P)$ that are both uniformly distributed over $[0,1]$, but with disjoint images: $X_1(\Omega)\cap X_2(\Omega)=\emptyset$. See "Strange uniform random variables" by D. Rizzolo. Assuming such strange uniform random variables $X_1,X_2$, define the function $g:\mathbb R\to\mathbb R$ by
$$g(x)=\begin{cases}0 & \text{if } x\in X_1(\Omega)\\ 1 & \text{else}\end{cases}$$
Then
$$\{g(X_1)\neq 0\}=\emptyset\implies\mathbb P[g(X_1)\neq 0]=0,$$
$$\{g(X_2)\neq 0\}=\Omega\implies\mathbb P[g(X_2)\neq 0]=1.$$
In view of Case 1, it is clear that this counter-example cannot occur unless $g$ is nonmeasurable. This means that $X_1(\Omega)$ is not Borel measurable. Indeed, the only way to get these strange uniform random variables is if their images are not Borel measurable sets.

Note: Proving the $g(X_i)$ are identically distributed. Claim: If $X,Y$ are identically distributed random variables and $g:\mathbb R\to\mathbb R$ is Borel measurable, meaning that $g^{-1}(B)\in\mathcal B(\mathbb R)$ for all $B\in\mathcal B(\mathbb R)$, then $g(X)$ and $g(Y)$ are identically distributed. Proof: Fix $B\in\mathcal B(\mathbb R)$. Since $g$ is Borel measurable, we know that $g^{-1}(B)\in\mathcal B(\mathbb R)$. So $\{g(X)\in B\}=\{X\in g^{-1}(B)\}$ is a valid event, as is $\{g(Y)\in B\}$. Then we have
$$\mathbb P[g(X)\in B]=\mathbb P[X\in g^{-1}(B)]\overset{(a)}{=}\mathbb P[Y\in g^{-1}(B)]=\mathbb P[g(Y)\in B]$$
where $(a)$ holds because $X,Y$ are identically distributed. $\square$

Answered Jan 20, 2024 by Michael (26.6k rep); edited Jan 20, 2024.

Comments:

- Michael (Jan 20, 2024): Yes, to prove the intersection has zero probability, it is enough to prove $\mathbb P[g(X_2)\neq 0]=0$. The place where measurability is used can be seen from the last note in my answer above.
You cannot necessarily talk about $g(X_i)$ as a random variable if $g$ is not measurable; equivalently, $\{X_i\in g^{-1}(B)\}$ is not necessarily a valid event if $g^{-1}(B)$ is not a Borel measurable set. — Michael (Jan 20, 2024)
- Michael (Jan 20, 2024): Yes, $\mathbb P:\mathcal F\to[0,1]$, and so we can only take probabilities of sets in $\mathcal F$. Your equations $(1)$ and $(2)$ try to do two steps at once and it is not helping; you cannot take probabilities of sets that are not events. If I remove your probability operation then $(1)$ and $(2)$ are true by definition of "inverse image" of a function. They are similar to saying, for $X:\Omega\to\mathbb R$ and $B\subseteq\mathbb R$, that $X^{-1}(B)=\{\omega\in\Omega: X(\omega)\in B\}=:\{X\in B\}$.
- S.H.W (Jan 20, 2024): I summarized my understanding. Do you think it is correct now? If $f$ and $h$ are Borel measurable functions then $A=\{x_1\in\mathbb R: f(x_1)\neq h(x_1)\}$ is a measurable set, i.e., we have $A\in\mathcal B(\mathbb R)$. Since $X_1:\Omega\to\mathbb R$ is a measurable function, the set $X_1^{-1}(A)=\{\omega\in\Omega: X_1(\omega)\in A\}$ is in $\mathcal F$...
- S.H.W (Jan 20, 2024): ...Note that $X_1(\omega)\in A$ is equivalent to $f(X_1(\omega))\neq h(X_1(\omega))$. So we have $X_1^{-1}(A)=\{\omega\in\Omega: f(X_1(\omega))\neq h(X_1(\omega))\}\in\mathcal F$, and the probability of this set is $\mathbb P[X_1^{-1}(A)]=\mathbb P[X_1\in A]$, which is by definition the same as $(\mathbb P\circ X_1^{-1})(A)$.
- Michael (Jan 21, 2024): Yes, looks good.
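The Case 1 argument above can be checked concretely on a finite probability space. The following is a minimal sketch in Python (the space, the random variables, and the choice of $g$ are hypothetical toy choices, not from the thread): two identically distributed random variables, a function $g$ vanishing on their common range, and the monotonicity squeeze $\mathbb P[B_2\cap\dots]\le\mathbb P[B_2]=0$.

```python
from fractions import Fraction

# Toy finite probability space: Omega = {0,1,2,3} with the uniform measure.
omega = [0, 1, 2, 3]
P = {w: Fraction(1, 4) for w in omega}

# Two identically distributed random variables that disagree pointwise:
# both are uniform on {0, 1}.
X1 = {0: 0, 1: 0, 2: 1, 3: 1}
X2 = {0: 1, 1: 0, 2: 0, 3: 1}

def pushforward(X, B):
    """(P o X^{-1})(B) = P[{w : X(w) in B}]."""
    return sum(P[w] for w in omega if X[w] in B)

# A (trivially measurable) g that vanishes on the common range {0, 1}.
g = lambda x: x * (x - 1)

# P[g(X1) != 0] = 0, and since X2 has the same distribution,
# P[g(X2) != 0] = 0 as well -- this is Case 1 of the answer.
bad = {x for x in (0, 1) if g(x) != 0}   # {x in range : g(x) != 0}
assert pushforward(X1, bad) == 0
assert pushforward(X2, bad) == 0

# The event {g(X1) != 0, g(X2) != 0} is contained in {g(X2) != 0},
# so its probability is squeezed to 0 by monotonicity.
inter = [w for w in omega if g(X1[w]) != 0 and g(X2[w]) != 0]
assert sum(P[w] for w in inter) == 0
```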
190644
https://www.youtube.com/watch?v=xweYA-IJTqs
Sodium potassium pump animation
Dr.G Bhanu Prakash Animated Medical Videos — 1,410,000 subscribers; 5,890 likes; 591,545 views. Posted: 25 Apr 2015

Description:

SODIUM-POTASSIUM PUMP ANIMATED LECTURE. The #Sodium-PotassiumPump. The process of moving sodium and potassium ions across the cell membrane is an active transport process involving the hydrolysis of ATP to provide the necessary energy. It involves an enzyme referred to as Na+/K+-ATPase.

Na+-K+-ATPase (sodium potassium ATPase): a transmembrane protein pump that actively transports three Na+ ions out of the cell for every two K+ ions that it moves into the cell. It uses ATP for energy; the ATP binding site is on the cytosolic side. Na+-K+-ATPase maintains the concentration gradients of Na+ and K+ and is thus responsible for the resting membrane potential of all cells. The pump is phosphorylated when it transports sodium and dephosphorylated when it transports potassium.
98 comments

Transcript: The sodium potassium pump is an active transport mechanism that is driven by the breakdown of ATP and works through a series of conformational changes in a transmembrane protein. Three sodium ions bind to the cytoplasmic side of the protein, causing the protein to change its conformation. In its new conformation the molecule becomes phosphorylated at the expense of a molecule of ATP. The phosphorylation induces a second conformational change that translocates the three sodium ions across the membrane. In this new conformation the protein has a low affinity for sodium ions, and the three bound sodium ions dissociate from the protein and diffuse into the extracellular fluid. The new conformation has a high affinity for potassium ions, two of which bind to the extracellular side of the protein. The bound phosphate now dissociates, and the protein reverts to its original conformation, exposing the two potassium ions to the cytoplasm on the inside of the cell. This conformation has a low affinity for potassium ions, so the two bound potassium ions dissociate from the protein and diffuse into the interior of the cell.
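The cycle narrated above can be summarized as a simple tally. This is a counting sketch only (not a kinetic or biophysical model; the function name and per-cycle granularity are illustrative): each full cycle hydrolyzes one ATP, exports 3 Na+, and imports 2 K+, for a net export of one positive charge.

```python
# A minimal bookkeeping sketch of the Na+/K+-ATPase cycle described above.
def pump_cycles(n_cycles):
    na_out, k_in, atp_used, net_charge_out = 0, 0, 0, 0
    for _ in range(n_cycles):
        # 1. Three Na+ bind on the cytoplasmic side; ATP phosphorylates the pump.
        atp_used += 1
        # 2. Conformational change releases the 3 Na+ into the extracellular fluid.
        na_out += 3
        # 3. Two K+ bind outside; dephosphorylation flips the pump back,
        #    releasing the K+ into the cytoplasm.
        k_in += 2
        # Net effect per cycle: one positive charge leaves the cell,
        # which is why the pump contributes to the resting membrane potential.
        net_charge_out += 3 - 2
    return na_out, k_in, atp_used, net_charge_out

print(pump_cycles(10))  # (30, 20, 10, 10)
```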
190645
https://arxiv.org/pdf/0711.0913
arXiv:0711.0913v2 [math.AG] 20 Nov 2007

Generalizations of two theorems of Ritt on decompositions of polynomial maps

V. V. Bavula

Dedicated to F. van Oystaeyen on the occasion of his 60th birthday

Abstract. In 1922, J. F. Ritt proved two remarkable theorems on decompositions of polynomial maps of C[x] into irreducible polynomials (with respect to the composition ◦ of maps). Briefly, the first theorem states that in any two decompositions of a given polynomial function into irreducible polynomials the number of the irreducible polynomials and their degrees are the same (up to order). The second theorem gives four types of transformations of how to obtain all the decompositions from a given one. In 1941, H. T. Engstrom and, in 1942, H. Levi generalized respectively the first and the second theorem to polynomial maps over an arbitrary field K of characteristic zero. The aim of the paper is to generalize the two theorems of J. F. Ritt to a more general situation: for so-called reduction monoids ((K[x], ◦) and (K[x^2]x, ◦) are examples of reduction monoids). In particular, analogues of the two theorems of J. F. Ritt hold for the monoid (K[x^2]x, ◦) of odd polynomials. It is shown that, in general, the two theorems of J. F. Ritt fail for the cusp (K + K[x]x^2, ◦) but their analogues are still true for decompositions of maximal length of regular elements of the cusp.

Key words: the two theorems of Ritt, Ritt transformations, composition of polynomial maps, cusp transformations, irreducible map, the length and defect of a polynomial.

Mathematics subject classification 2000: 12F20, 14H37, 14R10.

Contents: 1. Introduction. 2. Generalizations of the two theorems of J. F. Ritt. 3. Analogues of the two theorems of J. F. Ritt for the cusp.

1 Introduction

In this paper, K is a field of characteristic zero and K[x] is a polynomial algebra over the field K in a single variable x.
The polynomial algebra K[x] is a monoid, (K[x], ◦), where ◦ is the composition of polynomial functions, (a ◦ b)(x) := a(b(x)), and x is the identity element of the monoid K[x]. An element u of the monoid K[x] is a unit iff deg(u) = 1. The group of units of the monoid (K[x], ◦) is denoted by K[x]^*. A polynomial a ∈ K[x] is said to be irreducible (or prime, or indecomposable) if deg(a) > 1 and the polynomial a is not a composition of two non-units, i.e. a is an irreducible element of the monoid (K[x], ◦). This concept of irreducibility should not be confused with the concept of irreducibility in the multiplicative monoid (K[x], ·), which is not used in the paper. A polynomial which is not irreducible is said to be reducible or composite. When K = C, composite polynomials were studied by J. F. Ritt. He proved two theorems that completely describe the decompositions composite polynomials may possess. His first theorem states: any two decompositions of a given polynomial of C[x] into irreducible polynomials contain the same number of polynomials; the degrees of the polynomials in one decomposition are the same as those in the other, except, perhaps, for the order in which they occur. Two decompositions of a polynomial a into irreducible polynomials a = p1 ◦ · · · ◦ pr = q1 ◦ · · · ◦ qr are called equivalent if there exist r − 1 polynomials of the first degree u1, . . . , ur−1 such that q1 = p1 ◦ u1, q2 = u1^{-1} ◦ p2 ◦ u2, . . . , qr−1 = ur−2^{-1} ◦ pr−1 ◦ ur−1, qr = ur−1^{-1} ◦ pr.
Suppose that in a decomposition of the polynomial a into irreducible polynomials

a = p1 ◦ · · · ◦ pr (1)

there is an adjacent pair of irreducible polynomials pi = λ1 ◦ π1 ◦ λ2, pi+1 = λ2^{-1} ◦ π2 ◦ λ3, where λ1, λ2 and λ3 are polynomials of degree 1 and where π1 and π2, of unequal degrees m and n, respectively, are of any of the following three types:

(a) π1 = Tm, π2 = Tn,
(b) π1 = x^m, π2 = x^r g(x^m),
(c) π1 = x^r g^n, π2 = x^n,

where g = g(x) is a polynomial and Tn is the trigonometric (Chebyshev) polynomial, Tn(cos t) := cos(nt). Then, for the polynomial a, we have a decomposition distinct from (1),

a = p1 ◦ · · · ◦ pi−1 ◦ p*i ◦ p*i+1 ◦ pi+2 ◦ · · · ◦ pr (2)

where, respectively to the three cases above, the polynomials p*i and p*i+1 are as follows:

(a) p*i = λ1 ◦ Tn, p*i+1 = Tm ◦ λ3,
(b) p*i = λ1 ◦ [x^r g^m], p*i+1 = x^m ◦ λ3,
(c) p*i = λ1 ◦ x^n, p*i+1 = [x^r g(x^n)] ◦ λ3.

Clearly, deg(p*i) = deg(pi+1) = n and deg(p*i+1) = deg(pi) = m. The second theorem of J. F. Ritt states: if a ∈ C[x] has two distinct decompositions into irreducible polynomials, we can pass from either to a decomposition equivalent to the other by repeated steps of the three types just indicated. He writes in his paper, p. 53: “The analogous problem for fractional rational functions is much more difficult. There is a much greater variety of possibilities, as one sees, without going far, on considering the formulas for the transformation of the periods of the elliptic functions. There are even cases in which the number of prime functions in one decomposition is different from that in another.” We will see later in the paper that the situation is similar for the cusp. J. F. Ritt’s approach is based on the monodromy group associated with the equation f(x) − y = 0. Later H. T. Engstrom and H. Levi proved respectively the first and the second theorem of J. F. Ritt for the polynomial algebra K[x] where K is a field of characteristic zero. Their methods are algebraic. It is known that the theorems of J. F.
Ritt are false in prime characteristic, but the first theorem is true for so-called tame polynomials. For some generalizations, applications and connections with the two theorems of J. F. Ritt the reader is referred to [1, 3, 4, 6, 10, 12, 15, 14, 16, 17]. The goal of this paper is to generalize the two theorems of J. F. Ritt to a more general situation (for so-called reduction monoids - see Section 2 for a definition; (K[x], ◦) and (K[x^2]x, ◦) are reduction monoids). The advantage of our method is that generalizations of the two theorems are proved in one go. For a natural number r, let Sr be the symmetric group. For reduction monoids (the definition is given in Section 2), the first and the second statement of the following theorem are generalizations of the first and the second theorem of J. F. Ritt, respectively. The first statement is precisely the same as the first theorem of J. F. Ritt, but the second statement contains only ‘half’ of the second theorem of J. F. Ritt, as the second part of the second theorem of J. F. Ritt classifies all the situations pi pi+1 = p′i p′i+1 for the monoid (C[x], ◦).

Theorem 1.1 Let M be a reduction monoid, M* be its group of units, a ∈ M with |a| > 1, and a = p1 · · · pr = q1 · · · qs be two decompositions of the element a into irreducible factors. Then

1. r = s and |p1| = |qσ(1)|, . . . , |pr| = |qσ(r)| for a permutation σ ∈ Sr; and

2. if the decompositions are distinct then one can be obtained from the other by finitely many transformations on adjacent irreducible factors of the following two types:

(a) p1 · · · pi pi+1 · · · pr = p1 · · · (pi u)(u^{-1} pi+1) · · · pr where u ∈ M*,
(b) p1 · · · pi pi+1 · · · pr = p1 · · · p′i p′i+1 · · · pr where pi pi+1 = p′i p′i+1, the numbers |pi| and |pi+1| are co-prime, |pi| = |p′i+1| and |pi+1| = |p′i|.

Consider the submonoid (O := K[x^2]x, ◦) of odd polynomials of the monoid (K[x], ◦).

Theorem 1.2 Let K be a field of characteristic zero.
Then the monoid O is a reduction monoid where |·| = deg. The group O* of units of the monoid O is equal to the group {λx | λ ∈ K*} where K* := K \ {0}. The first two statements of the next corollary follow at once from Theorems 1.1 and 1.2; statement 3 follows from the second theorem of J. F. Ritt, but not in a straightforward way, as many additional results are used in its proof: Theorem 2.6, Lemma 2.3, Lemma 2.8 (see Section 2 for detail).

Corollary 1.3 Let K be a field of characteristic zero, a ∈ O with deg(a) > 1, and a = p1 ◦ · · · ◦ pr = q1 ◦ · · · ◦ qs be two decompositions of the element a into irreducible factors of the monoid O. Then

1. r = s and deg(p1) = deg(qσ(1)), . . . , deg(pr) = deg(qσ(r)) for a permutation σ ∈ Sr; and

2. if the decompositions are distinct then one can be obtained from the other by finitely many transformations on adjacent irreducible factors of the following two types:

(a) p1 ◦ · · · ◦ pi ◦ pi+1 ◦ · · · ◦ pr = p1 ◦ · · · ◦ (pi ◦ u) ◦ (u^{-1} ◦ pi+1) ◦ · · · ◦ pr where u ∈ O*,
(b) p1 ◦ · · · ◦ pi ◦ pi+1 ◦ · · · ◦ pr = p1 ◦ · · · ◦ p*i ◦ p*i+1 ◦ · · · ◦ pr where pi ◦ pi+1 = p*i ◦ p*i+1, the degrees deg(pi) and deg(pi+1) are co-prime, deg(pi) = deg(p*i+1) and deg(pi+1) = deg(p*i).

3. There are only the following options for the pairs P := (pi, pi+1) and P* := (p*i, p*i+1):

(a) P = (Tn, Tm) and P* = (Tm, Tn) where n and m are odd distinct primes,
(b) P = (x^t [α(x^2)]^s, x^s) and P* = (x^s, x^t α(x^{2s})),
(c) P = (x^s, x^t α(x^{2s})) and P* = (x^t [α(x^2)]^s, x^s),

where s is an odd prime number, t is an odd number, and α ∈ K[x]\K with α(0) ≠ 0.

Up to my knowledge, the monoid O is the only example distinct from K[x] for which (analogues of) the two theorems of J. F. Ritt hold. It would be interesting to find more examples (the definition of reduction monoid is very arithmetical). It is a curious fact that the monoid O, in fact, comes from a non-commutative situation.
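The algebraic identities behind the Ritt swaps can be spot-checked with a computer algebra system. The sketch below uses SymPy; the specific exponents and the polynomial g are small illustrative choices and need not satisfy the primality conditions of the theorems, which concern irreducibility rather than the identities themselves.

```python
from sympy import symbols, expand, chebyshevt

x = symbols('x')

# Ritt case (a): Chebyshev polynomials commute under composition,
# T_m o T_n = T_n o T_m (= T_{mn}).
m, n = 2, 3
lhs = expand(chebyshevt(m, chebyshevt(n, x)))
rhs = expand(chebyshevt(n, chebyshevt(m, x)))
assert lhs == rhs == expand(chebyshevt(m * n, x))

# Ritt cases (b)/(c): x^m o [x^r g(x^m)] = [x^r g(x)^m] o x^m.
# Illustrative small choices: m = 2, r = 1, g = x + 1 (g(0) != 0).
g = x + 1
left = expand((x * g.subs(x, x**2))**2)    # x^m composed after x^r g(x^m)
right = expand((x * g**2).subs(x, x**2))   # x^r g(x)^m composed after x^m
assert left == right
```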
The monoid O is the monoid of all central algebra endomorphisms of a certain localization of the quantum plane, which is a non-commutative algebra (see Section 2 for detail). It would be interesting to find more reduction monoids coming from a non-commutative situation (and as a result to obtain analogues of the two theorems of J. F. Ritt for them). Notice that in the definition of reduction monoid M is not necessarily a commutative algebra; it is just an abelian group. Moreover, in the case of the odd polynomials, O is not even an algebra. The cusp submonoid (K + K[x]x^2, ◦) of (K[x], ◦) looks similar to the monoid O, but for it the situation is completely different. In particular, the cusp submonoid is not a reduction monoid. Till the end of this section let K be an algebraically closed field of characteristic zero and let A be the subalgebra of the polynomial algebra K[x] generated by the monomials x^2 and x^3. The algebra A = K + K[x]x^2 is isomorphic to the algebra of regular functions on the cusp s^2 = t^3. It is obvious that (A, ◦) is a sub-semigroup of (K[x], ◦). For a polynomial a ∈ K[x] of degree deg(a) > 1, let Dec(a) be the set of all decompositions of the polynomial a into irreducible polynomials of K[x] (with respect to ◦). The length l(a) of the polynomial a ∈ K[x] is the number of irreducible polynomials in any decomposition of Dec(a). Similarly, for a polynomial a ∈ A\K, let DecA(a) be the set of all decompositions of the polynomial a into irreducible polynomials of A. The natural number lA(a) := max{r | p1 ◦ · · · ◦ pr ∈ DecA(a)} is called the A-length of the element a. It is obvious that lA(a) ≤ l(a). In general, this inequality is strict (Corollary 3.4). An element a ∈ A is called regular (resp. irregular) if lA(a) = l(a) (resp. lA(a) < l(a)). There are plenty of elements of both types. Moreover, if a is irregular then a ◦ (x + λ) is regular for some λ ∈ K.
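Functional decompositions with respect to ◦ can be computed in practice: SymPy's `decompose` returns composition factors of a polynomial, outermost first. A small sketch (the sample polynomial is an arbitrary illustrative choice; SymPy does not promise every returned factor is irreducible, so the count only bounds the length l(a) from below):

```python
from sympy import symbols, decompose, expand

x = symbols('x')

# Build a composite polynomial a = (x^2 + 1) o (x^3 - x).
a = expand((x**2 + 1).subs(x, x**3 - x))

# decompose returns [f1, f2, ...] with a = f1 o f2 o ... (outermost first).
pieces = decompose(a)

# Recompose the pieces and check that we recover a.
h = pieces[0]
for p in pieces[1:]:
    h = h.subs(x, p)
assert expand(h) == a

# a has degree 6 = 2 * 3, so a nontrivial decomposition exists.
assert len(pieces) >= 2
```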
A decomposition p1 ◦ · · · ◦ plA(a) ∈ DecA(a) is called a decomposition of maximal length or a maximal decomposition for the element a. Let Max(a) be the set of all maximal decompositions for a. Clearly, Max(a) ⊆ DecA(a), but, in general, Max(a) ≠ DecA(a), see (14). Lemma 3.7 describes the set Max(a). In general, the number of irreducible polynomials in a decomposition into irreducible polynomials of an element of A is non-unique (Lemma 3.5); moreover, it can vary greatly. So, for the cusp the two theorems of J. F. Ritt do not hold. Therefore, the cusp is not a reduction monoid. Nevertheless, for decompositions of maximal length of each regular element a of A analogues of the two theorems do hold - Theorem 1.4 and Theorem 1.5 if K is algebraically closed (if K is not algebraically closed then, in general, Theorem 1.5 does not hold).

Theorem 1.4 Let K be a field of characteristic zero, a be a regular element of A such that a ∉ K, and a = p1 ◦ · · · ◦ pr = q1 ◦ · · · ◦ qr be two decompositions of maximal length of the element a into irreducible polynomials of A. Then deg(p1) = deg(qσ(1)), . . . , deg(pr) = deg(qσ(r)) for a permutation σ ∈ Sr.

Theorem 1.4 follows from the first theorem of J. F. Ritt (or from Theorem 1.5). In general, for irregular elements Theorem 1.4 is not true (Proposition 3.6), i.e. the invariance of degrees (up to permutation) does not hold. The next theorem is an analogue of the second theorem of J. F. Ritt for regular elements. A new feature is that the transformations (Adm), (Ca), (Cb) and (Cc) are defined on three adjacent elements rather than two, as in the second theorem of J. F. Ritt.

Theorem 1.5 Let K be an algebraically closed field of characteristic zero, a be a regular element of A such that a ∉ K, and X, Y ∈ Max(a). Then the decomposition Y can be obtained from the decomposition X by finitely many transformations of the following four types: (Adm), (Ca), (Cb) and (Cc), see below.
For a non-scalar polynomial f of K[x], a polynomial λ + μx of degree 1 is called an f-admissible polynomial if λ is a root of the derivative f′ := df/dx of f. Let a ∈ A\K with r := lA(a) = l(a), and Z := p1 ◦ · · · ◦ pi ◦ pi+1 ◦ · · · ◦ pr ∈ Max(a). Consider the following four types of transformations of the decomposition Z that produce a new decomposition Z* ∈ Max(a), where Z* := p1 ◦ · · · ◦ pi−1 ◦ p*i ◦ p*i+1 ◦ p*i+2 ◦ · · · ◦ pr if i + 1 < r, and Z* := p1 ◦ · · · ◦ p*r−1 ◦ p*r if i + 1 = r.

(Adm) In both cases, p*i := pi ◦ u and p*i+1 := u^{-1} ◦ pi+1 where u ∈ K[x]* is pi-admissible, and p*i+2 = pi+2 if i + 1 < r (u^{-1} is the inverse of the element u in the monoid (K[x], ◦), i.e. u^{-1} is the inverse map of u).

In the remaining three cases below, gcd(deg(pi), deg(pi+1)) = 1, all λi ∈ K[x]*, p is a prime number, the polynomials x^s g^p(x) and x^s g(x^p) satisfy the condition that g(0) ≠ 0, and λi^{-1} is the inverse of the element λi in the monoid (K[x], ◦).

(Ca) If i + 1 < r, pi = λ1 ◦ Tk ◦ λ2 and pi+1 = λ2^{-1} ◦ Tl ◦ λ3 where k and l are distinct odd prime numbers, λ2 is Tk-admissible and λ3 is Tl-admissible, then p*i := λ1 ◦ Tl ◦ λ4, p*i+1 := λ4^{-1} ◦ Tk ◦ λ3 ◦ λ5 and p*i+2 := λ5^{-1} ◦ pi+2, where λ4 is Tl-admissible and λ5 is Tk ◦ λ3-admissible.

(Cb) If i + 1 < r, pi = λ1 ◦ x^p and pi+1 = [x^s g(x^p)] ◦ λ2 where λ2 is x^s g(x^p)-admissible, then p*i := λ1 ◦ [x^s g^p] ◦ λ3, p*i+1 := λ3^{-1} ◦ x^p ◦ λ2 ◦ λ4 and p*i+2 := λ4^{-1} ◦ pi+2, where λ3 is x^s g^p-admissible and λ2 ◦ λ4 is x^p-admissible. If i + 1 = r, pr−1 = λ1 ◦ x^p and pr = [x^s g(x^p)] ◦ λ2 where s ≥ 2 and λ2 ∈ K*x, then p*r−1 := λ1 ◦ [x^s g^p] and p*r := x^p ◦ λ2.

(Cc) If i + 1 < r, pi = λ1 ◦ [x^s g^p] ◦ λ2 and pi+1 = λ2^{-1} ◦ x^p ◦ λ3 where λ2 is x^s g^p-admissible and λ3 is x^p-admissible, then p*i := λ1 ◦ x^p, p*i+1 := [x^s g(x^p)] ◦ λ3 ◦ λ4 and p*i+2 := λ4^{-1} ◦ pi+2, where λ3 ◦ λ4 is x^s g(x^p)-admissible.
If i + 1 = r, pr−1 = λ1 ◦ x^s g^p, s ≥ 2, and pr = x^p ◦ λ2 where λ2 is x^p-admissible, then p*r−1 := λ1 ◦ x^p and p*r := [x^s g(x^p)] ◦ λ2.

Decompositions of polynomials with coefficients in a commutative ring were studied by the author in .

2 Generalizations of the two theorems of J. F. Ritt

In this section, the two theorems of J. F. Ritt are generalized to a more general situation. They are proved for reduction monoids (Theorem 1.1). The polynomial algebra K[x] is a reduction monoid with respect to the composition of functions. These generalizations are inspired by the paper of H. T. Engstrom, and we follow some of his ideas. Proofs of Theorem 1.1, Theorem 1.2 and Corollary 1.3.(3) are given. Natural numbers i and j are called co-prime (or relatively prime) if gcd(i, j) = 1.

Definition. A multiplicative monoid M is called a reduction monoid if the following axioms hold for all elements a, b, c ∈ M (where M* is the group of units of the monoid M):

(A1) M is a Z-module (i.e. M is an abelian group under +) such that (a + b)c = ac + bc.
(A2) There exists a map |·| : M → N := {0, 1, . . .} such that |ab| = |a||b| and |a + b| ≤ max{|a|, |b|}.
(A3) a ∈ M* iff |a| = 1.
(A4) If ac = bc then a = b, provided |c| > 1.
(A5) For any elements a, b ∈ M with |a| > 1 and |b| > 1 such that, in addition, there exists an element x ∈ Ma ∩ Mb with |x| ≠ 0, there exists an element c ∈ M such that Ma ∩ Mb = Mc and |c| = lcm(|a|, |b|).
(A6) If αa = βb with |α| = i, |a| = jk, |β| = j, |b| = ik, ijk ≥ 1, and the natural numbers i and j are co-prime, then a = a1c and b = b1c for some elements a1, b1 and c of M such that |c| = k.

Example. (K[x], ◦) is a reduction monoid where |·| := deg. The axioms (A1)-(A4) are obvious. The axioms (A5) and (A6) follow respectively from Theorems 2.2 and 3.1 of the paper . If p is an irreducible element of the monoid M then so are the elements up and pu for all units u ∈ M*.
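Axiom (A2) for the example (K[x], ◦) with |·| = deg can be sanity-checked numerically. The following is a sketch with SymPy over random integer polynomials (the degree ranges and coefficient bounds are arbitrary test choices); in characteristic zero deg(a ◦ b) = deg(a) deg(b), and deg(a + b) ≤ max(deg(a), deg(b)) with strict inequality possible when leading terms cancel.

```python
import random
from sympy import symbols, degree, expand

x = symbols('x')

def rand_poly(deg):
    # Random polynomial of exact degree `deg` (positive leading coefficient).
    coeffs = [random.randint(1, 5)] + [random.randint(-5, 5) for _ in range(deg)]
    return sum(c * x**i for i, c in enumerate(reversed(coeffs)))

random.seed(0)
for _ in range(20):
    a = rand_poly(random.randint(1, 4))
    b = rand_poly(random.randint(1, 4))
    # (A2), multiplicativity: deg(a o b) = deg(a) * deg(b).
    assert degree(expand(a.subs(x, b))) == degree(a) * degree(b)
    # (A2), subadditivity: deg(a + b) <= max(deg(a), deg(b)).
    assert degree(expand(a + b)) <= max(degree(a), degree(b))
```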
• Each element a of M with |a| > 1 is a product of irreducible elements.

To prove this statement we use induction on |a|. By (A2) and (A3), each element a with |a| = 2 is irreducible. Suppose that |a| > 2 and the result holds for all elements a' of M with 1 < |a'| < |a|. Then either the element a is irreducible or, otherwise, it is a product, say bc, of two non-units b and c. Since |a| = |b||c|, |b| > 1 and |c| > 1 (see (A2) and (A3)), we have 1 < |b| < |a| and 1 < |c| < |a|. By induction, the elements b and c are products of irreducible elements, and then so is the element a.

Corollary 2.1 Let M be a reduction monoid, and let p and q be irreducible elements of M such that M^* p ≠ M^* q and there exists an element a ∈ Mp ∩ Mq with |a| > 1. Then the natural numbers |p| and |q| are co-prime.

Proof. Suppose that the natural numbers |p| and |q| are not co-prime, i.e. k := gcd(|p|, |q|) > 1; we seek a contradiction. Then |p| = ki, |q| = kj for some co-prime natural numbers i and j. By (A5), Mp ∩ Mq = Mc for some element c of M with |c| = lcm(|p|, |q|) = ijk. Then c = αp = βq for some elements α and β of M with |α| = j and |β| = i. By (A6), there exist elements p_1, q_1, d ∈ M such that p = p_1 d, q = q_1 d, |d| = k > 1, |p_1| = i, |q_1| = j. If i = j = 1 then |α| = |β| = 1, and so α, β ∈ M^*, by (A3). The equality αp = βq then implies that M^* p = M^* q. This contradicts the assumption of the corollary. Therefore, either i > 1 or j > 1 or both. This means that either the element p is reducible (since p = p_1 d, |p_1| = i > 1, |d| > 1) or the element q is reducible (since q = q_1 d, |q_1| = j > 1, |d| > 1) or both elements p and q are reducible. These contradictions prove the corollary.

Proof of Theorem 1.1. 1. The first statement is an easy corollary of the second (since in the case (a): |p_i u| = |p_i| and |u^{-1} p_{i+1}| = |p_{i+1}|, by (A2) and (A3)). 2.
For each element b of the monoid M with |b| > 1, let Dec(b) be the set of all decompositions of the element b into irreducible components. Two such decompositions, say X and Y, are equivalent, X ∼ Y, if one can be produced from the other by finitely many transformations of the types (a) and (b). Clearly, this is an equivalence relation on the set Dec(b). Let X, Y ∈ Dec(b) and X', Y' ∈ Dec(b'). If X ∼ Y then XX' ∼ YX' in Dec(bb') and X'X ∼ X'Y in Dec(b'b). If X ∼ Y and X' ∼ Y' then XX' ∼ YY' in Dec(bb'). To finish the proof of statement 2 we have to show that p_1 ··· p_r ∼ q_1 ··· q_s. To prove this fact we use induction on |a|. Note that if the element a is irreducible then Dec(a) = {a}, and there is nothing to prove. The base of the induction, |a| = 2, is obvious since then the element a is irreducible, by (A2) and (A3). Suppose that |a| ≥ 3 and the result is true for all elements a' of M with 1 < |a'| < |a|. We may assume that the element a is reducible, i.e. r ≥ 2 and s ≥ 2. The proof consists of considering several possibilities.

Suppose that M^* p_r = M^* q_s, i.e. p_r = u q_s for some element u ∈ M^*. By (A4), we can delete the element q_s in the equality p_1 ··· p_{r-1} u q_s = q_1 ··· q_{s-1} q_s. As a result, there are two decompositions of the element a' := p_1 ··· p_{r-1} u = q_1 ··· q_{s-1} into irreducible components with 1 < |a'| = |a|/|q_s| < |a| (note that p_{r-1} u is an irreducible element). By induction, these two decompositions are equivalent in Dec(a'). In particular, r = s. Now, p_1 ··· p_r ∼ p_1 ··· (p_{r-1} u)(u^{-1} p_r) ∼ p_1 ··· (p_{r-1} u) · q_s ∼ q_1 ··· q_{r-1} · q_s, as required.

Suppose that M^* p_r ≠ M^* q_s. Then, by Corollary 2.1, the natural numbers |p_r| and |q_s| are co-prime since a = p_1 ··· p_r = q_1 ··· q_s ∈ M p_r ∩ M q_s and the elements p_r and q_s are irreducible. By (A5), M p_r ∩ M q_s = Mc for some element c of the monoid M with |c| = lcm(|p_r|, |q_s|) = |p_r||q_s| since the numbers |p_r| and |q_s| are co-prime.
Since a ∈ Mc and c ∈ M p_r ∩ M q_s, there exist elements d, α, β ∈ M such that

a = dc, c = α p_r = β q_s. (3)

We can write the equality dc = a in two different ways: dα p_r = p_1 ··· p_{r-1} p_r and dβ q_s = q_1 ··· q_{s-1} q_s. By (A4), we can delete the element p_r in the first equality and the element q_s in the second:

dα = p_1 ··· p_{r-1} and dβ = q_1 ··· q_{s-1}. (4)

Note that 1 < |p_1| ≤ |dα| = |a|/|p_r| < |a| and 1 < |q_1| ≤ |dβ| = |a|/|q_s| < |a| since r, s ≥ 2. Then induction yields the equivalence relations dα ∼ p_1 ··· p_{r-1} and dβ ∼ q_1 ··· q_{s-1}. There are two options: either |d| > 1 or |d| = 1. If |d| > 1 then 1 < |p_r| ≤ |c| = |a|/|d| < |a| (see (3)), and so, by induction, α p_r ∼ β q_s. Now, p_1 ··· p_{r-1} p_r ∼ dα p_r ∼ dβ q_s ∼ q_1 ··· q_{s-1} q_s.

Finally, suppose that |d| = 1. By (A3), the element d is a unit of the monoid M since |d| = 1. Then Mc = Mda = Ma (since a = dc and d is a unit). Without loss of generality we may assume that c = a and d = 1. Then the equations (4) mean that

α = p_1 ··· p_{r-1} and β = q_1 ··· q_{s-1}. (5)

Recall that we have the equality |c| = |p_r||q_s|. In combination with (3), i.e. a = c = α p_r = β q_s, it yields the equalities |α| = |q_s| and |β| = |p_r|. In particular, the numbers |α| and |β| are co-prime. Recall that r ≥ 2 and s ≥ 2. Now, the case r = s = 2 is trivially true, p_1 p_2 ∼ q_1 q_2, since a = p_1 p_2 = q_1 q_2 and the numbers |p_1| = |q_2| and |p_2| = |q_1| are co-prime. This is a transformation of the type (b). It remains to consider the case (r, s) ≠ (2, 2). In view of symmetry, we may assume that r ≥ 3 and s ≥ 2. We prove that this case is not possible, i.e. we seek a contradiction. In order to get a contradiction, the axiom (A6) will be applied to the equality

p_1 · (p_2 ··· p_r) = β · q_s. (6)

First, note that the numbers i := |p_1| = |p_1 ··· p_{r-1}|/|p_2 ··· p_{r-1}| = |α|/|p_2 ··· p_{r-1}| = |q_s|/|p_2 ··· p_{r-1}| and j := |β| = |p_r| are co-prime since the numbers |q_s| and |p_r| are co-prime; i > 1 and j > 1.
Clearly, k := |p_2 ··· p_{r-1}| > 1 since r ≥ 3; |p_2 ··· p_r| = kj and |q_s| = ki. Applying the axiom (A6) to the equality (6), we obtain the equalities p_2 ··· p_r = AC and q_s = BC for some elements A, B and C of the monoid M with |C| = k > 1. Then |B| = |q_s|/|C| = ki/k = i > 1, and so the elements B and C are not units. Therefore, the element q_s = BC is reducible, a contradiction. The proof of Theorem 1.1 is complete.

Proof of Theorem 1.2. In the proof of Theorem 1.2, we use the Theorem of Lüroth and the fact that O is a submonoid of the reduction monoid (K[x], ∘). The axioms (A1)-(A4) are obvious for the monoid O. Let us prove that the axiom (A5) holds for O. Let a and b be elements of the monoid O such that deg(a) > 1, deg(b) > 1, and there exists an element x' ∈ O ∘ a ∩ O ∘ b with deg(x') ≥ 1. Note that x' ∈ O. Then x' ∈ K[x] ∘ a ∩ K[x] ∘ b, and so K[x] ∘ a ∩ K[x] ∘ b = K[x] ∘ c for some element c of K[x], by the axiom (A5) for the reduction monoid K[x]. Moreover, deg(c) = lcm(deg(a), deg(b)). It suffices to show that c + ν ∈ O for some element ν ∈ K. To this end, we introduce the K-algebra automorphism ω of the polynomial algebra K[x] given by the rule x ↦ −x. Then

K[x] = K[x^2] ⊕ K[x^2]x = K[x^2] ⊕ O, (7)

where K[x^2] is the fixed ring of the automorphism ω, and O is the eigenspace of ω that corresponds to the eigenvalue −1, i.e. O = ker(ω + 1). Note that the equality K[x] ∘ a ∩ K[x] ∘ b = K[x] ∘ c simply means that K[a] ∩ K[b] = K[c], and so the element c is uniquely defined up to an affine transformation. By (7), the element c can be written uniquely as a sum c_0 + c_1 x for some elements c_0, c_1 ∈ K[x^2]. Note that c_1 ≠ 0 since, otherwise, c = c_0 ∈ K[x^2], and then x' ∈ O ∘ a ∩ O ∘ b ⊆ K[x] ∘ a ∩ K[x] ∘ b = K[c] ⊆ K[x^2]. Now, x' ∈ O ∩ K[x^2] = 0, a contradiction (recall that deg(x') ≥ 1, by the assumption). This contradiction proves the claim that c_1 ≠ 0. Note that ω(K[c]) = ω(K[a] ∩ K[b]) = ω(K[a]) ∩ ω(K[b]) = K[−a] ∩ K[−b] = K[a] ∩ K[b] = K[c].
This means that ω(c) = λc + μ for some scalars λ ≠ 0 and μ of K. In combination with the equality ω(c) = c_0 − c_1 x and the fact that c_1 ≠ 0, it gives λ = −1, i.e. ω(c) = −c + μ. Then, changing c to c − μ/2, we may assume that μ = 0, i.e. ω(c) = −c. This means that c ∈ O, as required. This proves that the axiom (A5) holds for the monoid O. To finish the proof of Theorem 1.2, it remains to establish the axiom (A6) for the monoid O. Suppose that elements a, b, α and β of the monoid O satisfy the following conditions: α ∘ a = β ∘ b with deg(α) = i, deg(a) = jk, deg(β) = j, deg(b) = ik, ijk ≥ 1, and the natural numbers i and j are co-prime. We have to show that a = a_1 ∘ d and b = b_1 ∘ d for some elements a_1, b_1 and d of the monoid O such that deg(d) = k. In the proof of the axiom (A5) for the monoid O, we found the element c ∈ O such that K[c] = K[a] ∩ K[b] and deg(c) = lcm(deg(a), deg(b)) = ijk. Then it is easy to show that

K(c) = K(a) ∩ K(b). (8)

Indeed, by the Theorem of Lüroth, K(a) ∩ K(b) = K(c^*) for some element c^* ∈ K(x) which can be chosen from the polynomial algebra K[x], by Lemma 3.1, . Then K[c^*] = K[x] ∩ K(c^*) = (K[x] ∩ K(a)) ∩ (K[x] ∩ K(b)) = K[a] ∩ K[b] = K[c], and so the equality (8) follows. For a field extension ∆ ⊆ Γ, let [Γ : ∆] := dim_∆(Γ). Consider the fields K(c) ⊆ K(a) ⊆ K(x). Then ijk = deg(c) = [K(x) : K(c)] = [K(x) : K(a)] · [K(a) : K(c)] = deg(a) · [K(a) : K(c)] = jk · [K(a) : K(c)], hence [K(a) : K(c)] = i. By symmetry, [K(b) : K(c)] = j. By the Theorem of Lüroth, the composite field K(a)K(b) = K(a, b) ⊆ K(x) is equal to K(d) for some rational function d ∈ K(x) which can be chosen to be a polynomial of K[x] since a, b ∈ K[x]. Let us show that

[K(d) : K(c)] = ij. (9)

Clearly, [K(d) : K(c)] = [K(a, b) : K(c)] = [K(a)(b) : K(a)][K(a) : K(c)] ≤ [K(c)(b) : K(c)][K(a) : K(c)] = [K(b) : K(c)][K(a) : K(c)] = ji.
To prove the reverse inequality, note that [K(d) : K(c)] = [K(d) : K(a)][K(a) : K(c)] = [K(d) : K(a)] · i and [K(d) : K(c)] = [K(d) : K(b)][K(b) : K(c)] = [K(d) : K(b)] · j, and so [K(d) : K(c)] ≥ lcm(i, j) = ij since the numbers i and j are co-prime. This proves the equality (9). Now, deg(d) = [K(x) : K(c)]/[K(d) : K(c)] = ijk/ij = k. Note that K(ω(d)) = ω(K(d)) = ω(K(a, b)) = K(ω(a), ω(b)) = K(−a, −b) = K(a, b) = K(d). This means that ω(d) = λd + μ for some scalars λ ≠ 0 and μ of K since d ∈ K[x] and ω(K[x]) = K[x]. By (7), the polynomial d can be written uniquely as a sum d_0 + d_1 x for some polynomials d_0, d_1 ∈ K[x^2]. We must have d_1 ≠ 0 since, otherwise, d = d_0 ∈ K[x^2]. Since a = a_1 ∘ d for some polynomial a_1 ∈ K[x], we would then have a ∈ a_1 ∘ K[x^2] ⊆ K[x^2], and so a ∈ O ∩ K[x^2] = 0, a contradiction (since a ≠ 0). Therefore, d_1 ≠ 0. Then the equalities d_0 − d_1 x = ω(d) = λd + μ = λd_0 + μ + λd_1 x yield λ = −1, and so ω(d) = −d + μ. Then, changing d to d − μ/2, we may assume that μ = 0, that is ω(d) = −d, i.e. d ∈ O. We claim that the polynomial a_1 ∈ K[x] in the equality a = a_1 ∘ d above belongs to O. To prove this we write the polynomial a_1 as a unique sum u + vx for some polynomials u, v ∈ K[x^2]. Note that u ∘ d, v ∘ d ∈ K[x^2] and (v ∘ d) · d ∈ O. The inclusion a = a_1 ∘ d = u ∘ d + (v ∘ d) · d ∈ O yields u ∘ d = 0, i.e. u = 0. This proves that a_1 = vx ∈ O. By symmetry, we have b = b_1 ∘ d for some element b_1 ∈ O. This means that the axiom (A6) holds for the monoid O. The proof of Theorem 1.2 is complete.

Definition. A Ritt transformation of the decomposition (1) is either one of the decompositions (a), (b) or (c) with λ_2 = 1 and gcd(deg(p_i), deg(p_{i+1})) = 1 (in all three cases) and with the numbers m and n being odd prime numbers in the case (a) (see (2)), or a decomposition of the type

(d) p_1 ∘ ··· ∘ (p_i ∘ u) ∘ (u^{-1} ∘ p_{i+1}) ∘ ··· ∘ p_r

for some polynomial u ∈ K[x]^*. In his paper, J. F.
Ritt wrote (page 52, the last line): "Case (a) with m = 2 can be reduced to Case (b) by linear transformation." In more detail, for each natural number k ≥ 1,

T_2 = −1 + 2x^2 = (−1 + 2x) ∘ x^2 = α ∘ x^2, where α := −1 + 2x,
T_{2k+1} = Σ_{i=0}^{k} \binom{2k+1}{2i} x^{2k+1−2i} (x^2 − 1)^i = x t_{2k+1}(x^2), where t_{2k+1}(x) := Σ_{i=0}^{k} \binom{2k+1}{2i} x^{k−i} (x − 1)^i.

Let n = 2k + 1. Then

T_2 ∘ T_n = α ∘ x^2 ∘ [x t_n(x^2)] = α ∘ [x t_n^2] ∘ x^2 = α ∘ [x t_n^2] ∘ α^{-1} ∘ α ∘ x^2 = α ∘ [x t_n^2] ∘ α^{-1} ∘ T_2,

and the remark of J. F. Ritt becomes obvious. Note that T_n ∘ T_2 = T_2 ∘ T_n = α ∘ [x t_n^2] ∘ α^{-1} ∘ T_2, and so (by (A4)) T_n = α ∘ [x t_n^2] ∘ α^{-1}. Now, it is obvious that also the case (a) with n = 2 can be reduced to the case (c) by linear transformation. This is the reason why, in the definition of Ritt transformation, m and n are odd primes (in the case (a)). All the polynomials T_l = x t_l(x^2), where l runs through the odd prime numbers, do not belong to the algebra A (since T_l'(0) = ±l ≠ 0). But T_2 ∈ A. The next corollary follows from Theorem 1.1 and the second theorem of Ritt(-Levi); it is implicit in the papers and .

Corollary 2.2 If a ∈ K[x] has two decompositions into irreducible polynomials then one can be obtained from the other by Ritt transformations.

Proof of Corollary 1.3.(3). The idea of the proof of Corollary 1.3.(3) is to use the second theorem of Ritt-Levi in combination with Lemma 2.3, Theorem 2.6 and Lemma 2.8. We first prove all these preliminary results, which are of interest in their own right.

Lemma 2.3 Let K be a field of characteristic zero, and let a and b be non-scalar polynomials of K[x] such that a ∘ b ∈ O. If one of the polynomials a or b belongs to the set O then so does the other.

Proof. Case (i): a ∈ O. The polynomial a is a non-scalar odd polynomial, and so a = Σ_{n=0}^{N} λ_n x^{2n+1}, λ_n ∈ K, λ_N ≠ 0. Due to the decomposition K[x] = K[x^2] ⊕ K[x^2]x, each polynomial p of K[x] is a unique sum p = p_ev + p_od of an even polynomial p_ev ∈ K[x^2] and an odd polynomial p_od ∈ K[x^2]x.
Then b = b_0 + b_1 where b_0 := b_ev and b_1 := b_od, and we write c := a ∘ b ∈ O. We have to show that b_0 = 0. Suppose that b_0 ≠ 0; we seek a contradiction. Clearly, b_1 ≠ 0 since otherwise we would have the inclusion c ∈ K[x^2]x ∩ K[x^2] = 0, a contradiction. Let us consider the even part of the polynomial c:

c_ev = (a ∘ b)_ev = (Σ_{n=0}^{N} λ_n (b_0 + b_1)^{2n+1})_ev = Σ_{n=0}^{N} λ_n Σ_{m=0}^{n} \binom{2n+1}{2m+1} b_0^{2m+1} b_1^{2(n−m)}.

The degrees of the nonzero polynomials b_0 and b_1 are even and odd numbers respectively. Therefore, either deg(b_0) > deg(b_1) or deg(b_0) < deg(b_1). The leading term of the polynomial c_ev comes from

λ_N b_0^{2N+1} if deg(b_0) > deg(b_1), and from λ_N \binom{2N+1}{1} b_0 b_1^{2N} if deg(b_0) < deg(b_1).

The first case is obvious; the second case follows from the inequalities: for all natural numbers m and n such that 0 ≤ m ≤ n,

deg(b_0^{2m−1} b_1^{2(n−m+1)}) − deg(b_0^{2m+1} b_1^{2(n−m)}) = 2(deg(b_1) − deg(b_0)) > 0.

Since in both cases the leading term of the polynomial c_ev is non-zero, we have c_ev ≠ 0. This contradicts the assumption that c ∈ K[x^2]x, i.e. c_ev = 0. The contradiction finishes the proof of the case (i).

Case (ii): b ∈ O. Then ω(b) = −b. Similarly, ω(c) = −c since c ∈ K[x^2]x. The polynomial a is a unique sum a_ev + a_od of even and odd polynomials. Comparing both ends of the following series of equalities

−(a_ev ∘ b + a_od ∘ b) = −c = ω(c) = ω(a ∘ b) = a ∘ ω(b) = a ∘ (−b) = a_ev ∘ b − a_od ∘ b,

we conclude that a_ev ∘ b = 0, hence a_ev = 0 since b is a non-scalar polynomial, and so a = a_od ∈ O, as required. The proof of Lemma 2.3 is complete.

Let f = f_0 + f_1 ∈ K[x] where f_0 := f_ev and f_1 := f_od. Let f^{(k)} := d^k f/dx^k and f^{(k)}(g) := (d^k f/dx^k) ∘ g. Then f^{(2n)} = f_0^{(2n)} + f_1^{(2n)} and f^{(2n+1)} = f_1^{(2n+1)} + f_0^{(2n+1)}, where f_0^{(2n)}, f_1^{(2n+1)} ∈ K[x^2] and f_1^{(2n)}, f_0^{(2n+1)} ∈ O.

Lemma 2.4 Let f = f_ev + f_od ∈ K[x] and μ ∈ K^*. Then (x + μ) ∘ f ∈ O iff f_ev = −μ.

Proof. (x + μ) ∘ f = μ + f = μ + f_ev + f_od ∈ O iff f_ev = −μ.
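The splitting K[x] = K[x^2] ⊕ K[x^2]x used throughout, f = f_ev + f_od with f_ev(x) = (f(x) + f(−x))/2 and f_od(x) = (f(x) − f(−x))/2, and the criterion of Lemma 2.4 can be verified mechanically. A small sympy sketch (the polynomial f and the scalar mu below are illustrative choices, not from the paper):

```python
from sympy import symbols, expand

x = symbols('x')

def even_odd_parts(f):
    # f = f_ev + f_od with f_ev(-x) = f_ev(x) and f_od(-x) = -f_od(x)
    f_ev = expand((f + f.subs(x, -x)) / 2)
    f_od = expand((f - f.subs(x, -x)) / 2)
    return f_ev, f_od

# Illustrative polynomial (not from the paper)
f = x**3 - 7*x + 2
f_ev, f_od = even_odd_parts(f)
assert expand(f_ev + f_od) == expand(f)
assert f_ev == 2 and f_od == x**3 - 7*x

# Lemma 2.4: (x + mu) o f belongs to O iff f_ev = -mu.
mu = -2                     # -mu equals the (constant) even part of f
g = expand(f + mu)          # (x + mu) o f  =  f + mu
assert expand(g + g.subs(x, -x)) == 0  # g is odd, i.e. g lies in O
```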
Lemma 2.5 Let a ∈ O and f = f_0 + f_1 ∈ K[x] where f_0 := f_ev and f_1 := f_od. Then

(a ∘ f)_ev = Σ_{k≥0} a^{(2k+1)}(f_1) · f_0^{2k+1}/(2k+1)! and (a ∘ f)_od = Σ_{k≥0} a^{(2k)}(f_1) · f_0^{2k}/(2k)!.

Proof. The result is an easy consequence of the Taylor formula, a ∘ f = a(f_1 + f_0) = Σ_{i≥0} a^{(i)}(f_1) · f_0^i/i!, and the following two facts: a^{(2k+1)}(f_1) ∈ K[x^2] and a^{(2k)}(f_1) ∈ O.

Theorem 2.6 Suppose that a ∈ O with deg(a) > 1, μ ∈ K^*, and f ∈ K[x]\K. Then (x + μ) ∘ a ∘ f ∉ O.

Proof. Suppose that (x + μ) ∘ a ∘ f ∈ O; we seek a contradiction. Then

−μ = (a ∘ f)_ev (by Lemma 2.4) = Σ_{k≥0} a^{(2k+1)}(f_1) · f_0^{2k+1}/(2k+1)! (by Lemma 2.5) = f_0 · Σ_{k≥0} a^{(2k+1)}(f_1) · f_0^{2k}/(2k+1)!.

Comparing the degrees of both ends of the series of equalities above, we conclude that f_0 ∈ K^* since μ ≠ 0. Let ∂ := d/dx. Then −μ = ∆∂(f_1) where the linear map ∆ := Σ_{k≥0} f_0^{2k+1}/(2k+1)! ∂^{2k} : K[x] → K[x] is equal to f_0(1 − n) where n := −Σ_{k≥1} f_0^{2k}/(2k+1)! ∂^{2k} is a locally nilpotent map, that is K[x] = ∪_{i≥1} ker(n^i). The map ∆ is invertible and ∆^{-1} = f_0^{-1}(1 + n + n^2 + ···). Then ∂(f_1) = −∆^{-1}(μ) = −μ f_0^{-1}, and so deg(f_1) ≤ 1, that is f_1 = γx for some γ ∈ K^* since f ∉ K. We claim that f_0 ≠ 0 since otherwise we would have the inclusion (x + μ) ∘ a ∘ f_1 = a ∘ f_1 + μ ∈ O, which would have implied that μ = 0 (since a ∘ f_1 ∈ O), a contradiction. Changing, if necessary, the element a to a ∘ f_1 = a ∘ [γx] ∈ O, we may assume that γ = 1. Then O ∋ (x + μ) ∘ a ∘ (f_0 + x) iff −μ = (a ∘ (f_0 + x))_ev = Σ_{k≥0} a^{(2k+1)} f_0^{2k+1}/(2k+1)! (see above). This implies that deg(a) ≤ 1, a contradiction (since deg(a) > 1, by the assumption). This contradiction finishes the proof of the theorem.

The next corollary follows at once from Theorem 2.6.

Corollary 2.7 Suppose that a ∈ O with deg(a) ≥ 1, μ ∈ K^*, and f ∈ K[x]\K. If (x + μ) ∘ a ∘ f ∈ O then deg(a) = 1.

Example. (x + μ) ∘ [λx] ∘ (x − λ^{-1}μ) ∈ O for all λ, μ ∈ K^*.

Lemma 2.8 Let f ∈ K[x] with deg(f) ≥ 1 and u ∈ K[x]^*.
Then f ∘ x^2 ∘ u ∉ O.

Proof. Let u = λx + μ for some λ ∈ K^* and μ ∈ K, and let f = Σ_{i=0}^{n} λ_i x^i where n := deg(f), so that λ_n ≠ 0. Then f ∘ x^2 ∘ (λx + μ) = Σ_{i=0}^{n} λ_i (λx + μ)^{2i} = λ_n λ^{2n} x^{2n} + (terms of smaller degree), and so f ∘ x^2 ∘ u ∉ O.

The proof of Corollary 1.3.(3) continued. Let us continue with the proof of Corollary 1.3.(3). Recall that O^* = {λx | λ ∈ K^*}. We have to show that if there is an equality p ∘ q = p^* ∘ q^* where p, q, p^* and q^* are irreducible elements of the monoid O then, modulo the basic transformations of the pairs P := (p, q) and P^* := (p^*, q^*):

(p, q) ↦ (u ∘ p ∘ v, v^{-1} ∘ q ∘ w), (p^*, q^*) ↦ (u ∘ p^* ∘ ṽ, ṽ^{-1} ∘ q^* ∘ w), where u, v, ṽ, w ∈ O^*,

we have either the equality P = P^* or, otherwise, P and P^* are as in Corollary 1.3.(3). If (p^*, q^*) = (p ∘ v, v^{-1} ∘ q) for some element v ∈ K[x]^* then, by Lemma 2.3, v ∈ O^*, and there is nothing to prove, the result being obvious. So, suppose that (p^*, q^*) ≠ (p ∘ v, v^{-1} ∘ q) for all elements v ∈ K[x]^*. Then, by the second theorem of Ritt-Levi, the pair P^* can be obtained from the pair P by finitely many Ritt transformations P = P_1 ∼_R P_2 ∼_R ··· ∼_R P_s = P^*, and necessarily some of the Ritt transformations are of the types (a), (b) or (c). It might happen that the elements p and q are reducible in the monoid K[x] (but the essence of the proof is to show that they are, in fact, irreducible in K[x]). Each Ritt transformation P_i := (p_i, q_i) ∼_R P_{i+1} := (p_{i+1}, q_{i+1}) may transform either the irreducible factors (in (K[x], ∘)) of p_i, or those of q_i, or simultaneously the last irreducible factor, say l_i, of p_i and the first irreducible factor, say f_i, of q_i. The first two types of Ritt transformations do not change the elements p_i and q_i. So, there exists an index i such that the Ritt transformation P_i ∼_R P_{i+1} is of the third type and, necessarily, of one of the types (a), (b) or (c) as in the definition of Ritt transformations since, for given u ∈ K[x]^* and a ∈ O^*, the inclusion u ∘ a ∈ O^* implies u ∈ O^* (Lemma 2.3).
Let i be the least such index. For each j, let Q_j := (l_j, f_j). Then p_j = α_j ∘ l_j and q_j = f_j ∘ β_j for some polynomials α_j, β_j ∈ K[x]. There are the following three options for the pairs Q_i = (l_i, f_i) and Q_{i+1} = (l_{i+1}, f_{i+1}) (where u, v, w, w̃ ∈ K[x]^*):

(a) Q_i = (u ∘ T_n ∘ w, w^{-1} ∘ T_m ∘ v) and Q_{i+1} = (u ∘ T_m ∘ w̃, w̃^{-1} ∘ T_n ∘ v) where n and m are odd primes,
(b) Q_i = (u ∘ [x^t β^s] ∘ w, w^{-1} ∘ x^s ∘ v) and Q_{i+1} = (u ∘ x^s ∘ w̃, w̃^{-1} ∘ [x^t β(x^s)] ∘ v),
(c) Q_i = (u ∘ x^s ∘ w, w^{-1} ∘ [x^t β(x^s)] ∘ v) and Q_{i+1} = (u ∘ [x^t β^s] ∘ w̃, w̃^{-1} ∘ x^s ∘ v),

where s is a prime number, t ≥ 0, and β ∈ K[x] with β(0) ≠ 0. In the cases (b) and (c), s is an odd prime number since, otherwise, by Lemma 2.8, the polynomial p_{i+1} ∉ O (in the case (b)) or p_i ∉ O (in the case (c)), which are contradictions.

Let us consider the case (a). Note that T_m, T_n ∈ O. Applying Theorem 2.6 to the inclusion w^{-1} ∘ T_m ∘ (v ∘ β_i) = q_i ∈ O, we see that w^{-1} ∈ O^*. Then we have the inclusion T_m ∘ (v ∘ β_i) ∈ O which yields the inclusion v ∘ β_i ∈ O, by Lemma 2.3 (since T_m ∈ O). Since q_i is an irreducible element of the monoid O, we must have v ∘ β_i ∈ O^*. Since w ∈ O^* and (α_i ∘ u ∘ T_n) ∘ w = p_i ∈ O, we have the inclusion α_i ∘ u ∘ T_n ∈ O, hence α_i ∘ u ∈ O (by Lemma 2.3, since T_n ∈ O). Moreover, α_i ∘ u ∈ O^* since p_i is an irreducible element of the monoid O. As a result, we have the case (a) of Corollary 1.3.(3).

Let us consider the case (b). Since x^s ∈ O and w^{-1} ∘ x^s ∘ (v ∘ β_i) = q_i ∈ O, we have w^{-1} ∈ O^* (by Theorem 2.6). Then x^s ∘ (v ∘ β_i) ∈ O, hence v ∘ β_i ∈ O, by Lemma 2.3. The element q_i is an irreducible element of the monoid O, and so v ∘ β_i ∈ O^*. Replacing the element v by v ∘ β_i, we may assume that β_i = 1 and v ∈ O^*. Now, it follows from the inclusion O ∋ q_{i+1} = w̃^{-1} ∘ [x^t β(x^s)] ∘ v ∘ β_i = w̃^{-1} ∘ [x^t β(x^s)] ∘ v that w̃^{-1} ∘ [x^t β(x^s)] ∈ O. If t ≠ 0 then w̃^{-1} ∈ O^*, and so x^t β(x^s) ∈ O, hence t is odd (since β(0) ≠ 0) and β = α(x^2) for some polynomial α(x) ∈ K[x].
Since [x^t β^s] ∘ w ∈ O and (α_i ∘ u) ∘ [x^t β^s] ∘ w = p_i ∈ O, we have α_i ∘ u ∈ O, by Lemma 2.3. Therefore, α_i ∘ u ∈ O^* since p_i is an irreducible element of the monoid O and x^t β^s ∉ O^*. This means that we have the case (b) of Corollary 1.3.(3) (if t ≠ 0). To finish with the case (b) it suffices to show that the remaining subcase t = 0 is impossible. Suppose that t = 0; we seek a contradiction. Then the inclusion w̃^{-1} ∘ β(x^s) ∈ O yields β = w̃ ∘ x^T α_1(x^2) for some odd natural number T and a polynomial α_1(x) ∈ K[x] with α_1(0) ≠ 0. Note that w ∈ O^* and O ∋ p_i = α_i ∘ u ∘ β^s ∘ w = α_i ∘ u ∘ x^s ∘ w̃ ∘ [x^T α_1(x^2)] ∘ w. Since [x^T α_1(x^2)] ∘ w ∈ O and the element p_i ∈ O is irreducible, we must have α_i ∘ u ∘ x^s ∘ w̃ ∈ O^*, by Lemma 2.3, hence s = 1, a contradiction (s is a prime number). The remaining case (c) follows from the case (b) by interchanging the roles of the pairs (and repeating the proof of the case (b)). Therefore, the pairs P_i and P_{i+1} are as in Corollary 1.3.(3). By the minimality of i, we have p = p_1 = ··· = p_i and q = q_1 = ··· = q_i, and so P = P_i. Now, the result is obvious. The proof of Corollary 1.3.(3) is complete.

Remark. Let us explain the remark made in the Introduction that the monoid O has a non-commutative origin. Let λ be a nonzero scalar. The algebra Λ = ⟨x, y | xy = λyx⟩ is called the quantum plane. The algebra Λ is the skew polynomial algebra K[y][x; σ] where σ is the K-algebra automorphism of the polynomial algebra K[y] given by the rule σ(y) = λy. The localization Λ' := S^{-1}Λ of the algebra Λ at the Ore set S := K[y]\{0} is the skew polynomial algebra Λ' = K(y)[x; σ]. Let λ = −1. The centre Z' of the algebra Λ' is the polynomial algebra K(y^2)[x^2] with coefficients from the field K(y^2). Clearly, Λ' = K(y)[x^2] ⊕ K(y)[x^2]x where the algebra K(y)[x^2] is the fixed ring of the inner automorphism ω_y : u ↦ yuy^{-1} of Λ', and K(y)[x^2]x = ker(ω_y + 1).
Then it follows that the monoid E of all K-algebra endomorphisms of Λ' that fix the element y is equal to the set {τ_α : x ↦ αx | α ∈ K(y)[x^2]}. The endomorphism τ_α is called a central endomorphism if α ∈ Z'. The submonoid Z := {τ_α | α ∈ Z'} of all central endomorphisms of Λ' is isomorphic to the monoid O of odd polynomials in x where the base field is K(y^2) rather than K.

The set Irr(K[x]) of all irreducible elements of the monoid (K[x], ∘) is the union of its three subsets,

Irr(K[x]) = P ∪ Q ∪ R, (10)

where an irreducible polynomial p is an element of the set P iff p ∈ K[x]^* ∘ x^l ∘ K[x]^* for some prime number l; an irreducible polynomial p belongs to Q iff either p ∈ K[x]^* ∘ [x^s g(x^l)] ∘ K[x]^* or p ∈ K[x]^* ∘ [x^s g^l] ∘ K[x]^* for some prime number l, s ≥ 1 and g(x) ∈ K[x]\K with g(0) ≠ 0; and R := Irr(K[x]) \ (P ∪ Q).

Proposition 2.9 1. The union (10) is a disjoint union. 2. The set P ∪ Q contains precisely all the irreducible polynomials of K[x] that are involved in the Ritt transformations.

Proof. 1. By Lemma 2.10, the sets P and Q are disjoint. Now, statement 1 is obvious. 2. For a prime number l, a polynomial f of the form g(x^l) = g(x) ∘ x^l (resp. g^l = x^l ∘ g) is irreducible iff f ∈ P (then, necessarily, g is a unit). By Lemma 2.11 and the explicit formula for T_l (see above), for each odd prime number l, K[x]^* ∘ T_l ∘ K[x]^* ⊆ Q. But T_2 ∈ P. Now, statement 2 follows from the definitions of Ritt transformations and of the sets P and Q.

Lemma 2.10 Let f(x) be a non-scalar polynomial of K[x] such that f(0) ≠ 0, and let s and p be natural numbers such that s ≥ 1 and p ≥ 2. Then the polynomials x^s f(x^p) and x^s f^p do not belong to the set N := ∪_{n≥2} K[x]^* ∘ x^n ∘ K[x]^*.

Proof. Suppose that x^s f(x^p) ∈ N, that is x^s f(x^p) = u ∘ x^n ∘ v for some elements u and v of the set K[x]^* and some n ≥ 2. We seek a contradiction. The derivative (u ∘ x^n ∘ v)' of the polynomial u ∘ x^n ∘ v has a single root, of multiplicity n − 1 ≥ 1.
The same is true for the derivative of the polynomial x^s f(x^p), which is equal to

(x^s f(x^p))' = x^{s−1}(s f(x^p) + p x^p f'(x^p)) = x^{s−1} L(x^p) ≠ 0, where L(x) := s f(x) + p x f'(x).

If s ≥ 2 then zero must be a root of the non-scalar polynomial L(x^p); but L(0) = s f(0) ≠ 0, a contradiction. If s = 1 then the polynomial L(x^p) must have a single root, say λ, which is nonzero since L(0) ≠ 0. Let ε be a p-th root of unity with ε ≠ 1. Then ελ is another root of L(x^p), distinct from λ, a contradiction. Therefore, x^s f(x^p) ∉ N.

Suppose that x^s f^p(x) ∈ N, that is x^s f^p(x) = u ∘ x^n ∘ v for some elements u and v of the set K[x]^* and some n ≥ 2. We seek a contradiction. By the same argument as in the previous case, the derivative (x^s f^p)' of the polynomial x^s f^p must have a single root, of multiplicity n − 1 ≥ 1. Clearly, 0 ≠ (x^s f^p)' = x^{s−1} · f^{p−1} · (s f + p x f'). Note that the polynomial f^{p−1} has a nonzero root since f(0) ≠ 0. Hence, s = 1, and the polynomials f^{p−1} and f + p x f' have the same single root, say λ, though possibly with different multiplicities. The root λ is nonzero since f(0) ≠ 0. Then f = μ(x − λ)^m for some 0 ≠ μ ∈ K and m ≥ 1, and so f + p x f' = μ(x − λ)^{m−1}(x − λ + pmx). Hence, λ = λ(1 + pm)^{-1}, and so 1 + pm = 1, a contradiction (since pm > 0). Therefore, x^s f^p(x) ∉ N.

Lemma 2.11 Let p be an odd natural number such that p ≥ 3. Then the trigonometric polynomial T_p does not belong to the set N := ∪_{n≥2} K[x]^* ∘ x^n ∘ K[x]^*.

Proof. The derivative T_p' of the polynomial T_p has at least two distinct roots (Lemma 2.12) since p ≥ 3, whereas the derivative of any element of N has a single root; hence the result.

The next result will be used in the proof of Theorem 1.5.

Lemma 2.12 Let p be a natural number such that p ≥ 2. Then:

1. The derivative T_p' of the trigonometric polynomial T_p is a polynomial of degree p − 1 which has p − 1 distinct roots: cos(πi/p), i = 1, 2, ..., p − 1.
2. If k and l are distinct prime numbers then the polynomials T_k' and T_l' have no common roots.

Proof. 1.
By the very definition, the numbers cos(πi/p), i = 1, 2, ..., p − 1, are distinct. Note that sin(πi/p) ≠ 0 and sin(p · πi/p) = 0 for all i = 1, 2, ..., p − 1. Since T_p'(cos(x)) sin(x) = p sin(px), we have T_p'(cos(πi/p)) = 0 for all i = 1, 2, ..., p − 1. Now, statement 1 is obvious since deg(T_p') = deg(T_p) − 1 = p − 1. 2. Statement 2 follows from statement 1.

Let a be a polynomial of K[x] with deg(a) > 1 and let X = p_1 ∘ ··· ∘ p_r ∈ Dec(a) be a decomposition of the polynomial a into irreducible polynomials of K[x]. Let n_P(X), n_Q(X) and n_R(X) be the numbers of irreducible factors p_i of the types P, Q and R respectively. For each prime number l, let n_{P,l}(X) be the number of irreducible factors p_i such that p_i ∈ K[x]^* ∘ x^l ∘ K[x]^*.

Theorem 2.13 The numbers n_P(X), n_Q(X), n_R(X) and n_{P,l}(X) do not depend on the decomposition X.

Proof. Recall that (10) is a disjoint union, and the set P ∪ Q contains precisely all the irreducible polynomials that are involved in the Ritt transformations (Proposition 2.9). Then it follows from the definition of Ritt transformations that the numbers n_P(X), n_Q(X) and n_{P,l}(X) do not depend on the decomposition X. Then the number n_R(X) = l(a) − n_P(X) − n_Q(X) does not depend on the decomposition X either.

Definition. The common value of all the numbers n_P(X), X ∈ Dec(a), is denoted by n_P(a). Similarly, the numbers n_Q(a), n_R(a) and n_{P,l}(a) are defined.

3 Analogues of the two theorems of J. F. Ritt for the cusp

In this section, Theorems 1.4 and 1.5 are proved. It is shown that, in general, the first theorem of J. F. Ritt does not hold for the cusp, i.e., in general, the number of irreducible polynomials in a decomposition of an element of A into irreducible polynomials is not unique (Lemma 3.5). For each element a of A, the set Max(a) is found (Lemma 3.7). In this section, K is an algebraically closed field of characteristic 0 if it is not stated otherwise.
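The membership test for the algebra A of the cusp used throughout this section, namely a ∈ A iff a'(0) = 0 (equation (11) below), together with the chain-rule inclusion K[x] ∘ A ⊆ A, can be checked mechanically. A small sympy sketch (the polynomials a, b, f are illustrative choices, not from the paper):

```python
from sympy import symbols, diff, expand, compose

x = symbols('x')

def in_A(a):
    # A = K[x^2, x^3] = {a in K[x] : a'(0) = 0}
    return diff(a, x).subs(x, 0) == 0

# x^2 and x^3 generate A; x itself is not in A
assert in_A(x**2) and in_A(x**3) and not in_A(x)

# Illustrative elements (not from the paper)
a = x**4 + 3*x**2          # a'(0) = 0, so a lies in A
b = x + 1                  # b'(0) = 1, so b does not lie in A
assert in_A(a) and not in_A(b)

# K[x] o A is contained in A: composing any polynomial with an element
# of A stays in A, by the chain rule (a o b)' = a'(b) * b'.
f = x**5 - 2*x + 7
assert in_A(expand(compose(f, a)))

# The other direction can fail: a o b with b not in A lies in A iff
# b(0) is a root of a' (Lemma 3.1).  Here a'(b(0)) = a'(1) = 10 != 0:
assert not in_A(expand(compose(a, b)))
```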
The algebra K[s, t]/(s^2 − t^3) of regular functions on the cusp s^2 = t^3 is isomorphic to the subalgebra A := K[x^2, x^3] of the polynomial algebra K[x] (via s ↦ x^3, t ↦ x^2). For a polynomial a ∈ K[x], let a' := da/dx and a'(0) := (da/dx)(0). Then

A = {a ∈ K[x] | a'(0) = 0}. (11)

The polynomial algebra K[x] is a monoid with respect to the composition ∘ of functions. It follows from the chain rule, (a ∘ b)' = a'(b)b', that

K[x] ∘ A ⊆ A and A ∘ (x) ⊆ A, (12)

where (x) is the ideal of the polynomial algebra K[x] generated by the element x. In particular, (A, ∘) is a semigroup but not a monoid. Indeed, suppose that e were an identity of A; then deg(a) = deg(e ∘ a) = deg(e) deg(a) for all elements a ∈ A, and so deg(e) = 1. But the semigroup A contains no element of degree 1, a contradiction. Note that A ∩ K[x]^* = ∅, so no element of A is a unit of the monoid (K[x], ∘). The next lemma gives a necessary and sufficient condition for a composition of two polynomials to be an element of A.

Lemma 3.1 Let K be a field of characteristic zero and a, b ∈ K[x]. Then a ∘ b ∈ A iff either b ∈ A, or b ∉ A and the value b(0) of the polynomial b(x) at x = 0 is a root of the derivative da/dx of a.

Proof. a ∘ b ∈ A iff 0 = (a ∘ b)'(0) = a'(b(0)) b'(0) iff either b'(0) = 0 or, otherwise, a'(b(0)) = 0 iff either b ∈ A or, otherwise, b(0) is a root of a'.

Let Irr(A) and Irr(K[x]) be the sets of irreducible elements of the semigroups A and K[x] respectively. The set Irr(A) is the disjoint union of its two subsets C and D where

C := Irr(A) ∩ Irr(K[x]) = {p ∈ Irr(K[x]) | p'(0) = 0} and D := Irr(A)\C.

So, the set C contains precisely all the irreducible elements of K[x] that belong to the semigroup A, and the set D contains precisely all the irreducible elements of A which are reducible in K[x]. Below, Proposition 3.2 states a necessary and sufficient condition for an irreducible element of A to belong to the set C or D. First, let us give some definitions.
For a polynomial a ∈ K[x], let R(a) and Dec(a) be, respectively, the set of its roots and the set of all possible decompositions of a into irreducible factors in K[x]. For an element a ∈ A, let Dec_A(a) be the set of all possible decompositions into irreducible factors in A. If p_1 ∘ ··· ∘ p_r ∈ Dec(a) then a' = (p_1 ∘ ··· ∘ p_r)' = p_1'(p_2 ∘ ··· ∘ p_r) · p_2'(p_3 ∘ ··· ∘ p_r) ··· p_{r−1}'(p_r) · p_r', and so

R(a') = R(p_1'(p_2 ∘ ··· ∘ p_r)) ∪ ··· ∪ R(p_{r−1}'(p_r)) ∪ R(p_r'). (13)

Let E(a) := ∪_{p_1∘···∘p_r ∈ Dec(a)} R(p_r'). By the very definition, the set E(a) is a subset of R(a'). In particular, the set E(a) is a finite set. In general, E(a) ≠ R(a'). For each element p ∈ Irr(K[x]), q ∈ Irr(A) and λ ∈ R(q'), we have the inclusions (where K^* := K\{0})

K[x]^* ∘ p ∘ K[x]^* ⊆ Irr(K[x]) and K[x]^* ∘ q ∘ (λ + K^* x) ⊆ Irr(A).

In particular, K[x]^* ∘ q ∘ K^* x ⊆ Irr(A) and K[x]^* ∘ q ∘ (λ + x) ⊆ Irr(A).

Proposition 3.2 Let p ∈ A\K. Then:

1. p ∈ C iff p ∈ Irr(K[x]) and p'(0) = 0.
2. p ∈ D iff p ∉ C and, for each decomposition p_1 ∘ ··· ∘ p_r ∈ Dec(p), (p_2 ∘ ··· ∘ p_r)'(0) ≠ 0.

Proof. 1. This is obvious.

2. (⇒) Suppose that p ∈ D. Then, obviously, p ∉ C. Suppose that (p_2 ∘ ··· ∘ p_r)'(0) = 0 for some decomposition p_1 ∘ ··· ∘ p_r ∈ Dec(p); we seek a contradiction. Let λ be a root of the polynomial p_1'. The elements q_1 := p_1 ∘ (x + λ) and q_2 := (x + λ)^{-1} ∘ p_2 ∘ ··· ∘ p_r belong to the semigroup A, and p = q_1 ∘ q_2. This contradicts the irreducibility of the element p. Therefore, (p_2 ∘ ··· ∘ p_r)'(0) ≠ 0.

(⇐) Suppose that p ∉ C and, for each decomposition p_1 ∘ ··· ∘ p_r ∈ Dec(p), (p_2 ∘ ··· ∘ p_r)'(0) ≠ 0. Suppose that the element p is reducible, i.e. p = a ∘ b for some elements a, b ∈ A\K; we seek a contradiction. Fix decompositions p_1 ∘ ··· ∘ p_s ∈ Dec(a) and p_{s+1} ∘ ··· ∘ p_r ∈ Dec(b). Then p = p_1 ∘ ··· ∘ p_r and (p_{s+1} ∘ ··· ∘ p_r)'(0) = 0 since b ∈ A, and so (p_2 ∘ ··· ∘ p_r)'(0) = 0 (by the chain rule), a contradiction.
So, the element p is irreducible in A, hence p ∈ D since p ∉ C.

The following two corollaries give a method of construction of elements of the set D. In particular, they show that the set D is non-empty.

Corollary 3.3 Suppose that an element q of A is a composition p1 ◦ ··· ◦ pr of irreducible factors pi ∈ Irr(K[x]) such that r ≥ 2, (p2 ◦ ··· ◦ pr)′(0) ≠ 0 and

Dec(q) = {(p1 ◦ u1) ◦ (u1⁻¹ ◦ p2 ◦ u2) ◦ ··· ◦ (u_{r−1}⁻¹ ◦ pr) | u1, . . . , u_{r−1} ∈ K[x]*}.

Then q ∈ D.

Proof. Since r ≥ 2, q ∉ C. By the assumption, for each decomposition q1 ◦ ··· ◦ qr ∈ Dec(q), we can find elements u1, . . . , u_{r−1} ∈ K[x]* such that q1 = p1 ◦ u1, q2 = u1⁻¹ ◦ p2 ◦ u2, . . . , qr = u_{r−1}⁻¹ ◦ pr. Now,

R((q2 ◦ ··· ◦ qr)′) = R((u1⁻¹ ◦ p2 ◦ ··· ◦ pr)′) = R((p2 ◦ ··· ◦ pr)′) ∌ 0.

By Proposition 3.2.(2), q ∈ D.

Note that any sufficiently generic irreducible polynomials p1, . . . , pr ∈ Irr(K[x]) (r ≥ 2) with p1 ◦ ··· ◦ pr ∈ A satisfy the assumptions of Corollary 3.3. For example, take generic polynomials p1, . . . , pr ∈ K[x] such that (p1 ◦ ··· ◦ pr)′(0) = 0 and (p2 ◦ ··· ◦ pr)′(0) ≠ 0; then all pi ∈ Irr(K[x]) and p1 ◦ ··· ◦ pr ∈ D.

Corollary 3.4 Let r ≥ 2 be a natural number. For each natural number i = 1, . . . , r, let pi = Σ_{j=0}^{n_i} a_{ij} x^j ∈ K[x] be a polynomial of prime degree n_i ≥ 5. Suppose that a_{11} := −Σ_{j=2}^{n_1} j a_{1j} ((p2 ◦ ··· ◦ pr)(0))^{j−1} and that all the elements a_{ij} of the field K with (i, j) ≠ (1, 1) are algebraically independent over the field of rational numbers Q. Then p1 ◦ ··· ◦ pr ∈ D. In particular, D ≠ ∅.

Proof. The definition of the element a_{11} means that p1′((p2 ◦ ··· ◦ pr)(0)) = 0. This implies that (p1 ◦ ··· ◦ pr)′(0) = 0, and so p1 ◦ ··· ◦ pr ∈ A. Next, we show that the assumptions of Corollary 3.3 hold. The polynomials pi are irreducible since their degrees are prime numbers. The elements a_{ij}, i = 2, . . . , r, j = 1, . . .
, n_i, are algebraically independent over Q, hence (p2 ◦ ··· ◦ pr)′(0) ≠ 0. Suppose that

Dec(p1 ◦ ··· ◦ pr) ≠ {(p1 ◦ u1) ◦ (u1⁻¹ ◦ p2 ◦ u2) ◦ ··· ◦ (u_{r−1}⁻¹ ◦ pr) | u1, . . . , u_{r−1} ∈ K[x]*};

we seek a contradiction. Then, by the second theorem of Ritt–Levi, there exists a pair (pi, p_{i+1}) and elements α, β, γ ∈ K[x]* such that the pair (α ◦ pi ◦ β, β⁻¹ ◦ p_{i+1} ◦ γ) is of one of the three types:

(a) (T_{n_i}, T_{n_{i+1}}),
(b) (x^{n_i}, x^r g(x^{n_i})), r + n_i deg(g) = n_{i+1},
(c) (x^r g^{n_{i+1}}, x^{n_{i+1}}), r + n_{i+1} deg(g) = n_i.

For each polynomial f ∈ K[x], let C(f) be the subfield of K generated by its coefficients over Q. In the case (a) (resp. (b)), pi = α⁻¹ ◦ T_{n_i} ◦ β⁻¹ (resp. pi = α⁻¹ ◦ x^{n_i} ◦ β⁻¹). On the one hand, the transcendence degree tr.deg C(pi) = n_i ≥ 5; on the other hand, tr.deg C(α⁻¹ ◦ T_{n_i} ◦ β⁻¹) ≤ 4 (resp. tr.deg C(α⁻¹ ◦ x^{n_i} ◦ β⁻¹) ≤ 4), a contradiction. Similarly, in the case (c), p_{i+1} = β ◦ x^{n_{i+1}} ◦ γ⁻¹, and so 5 ≤ tr.deg C(p_{i+1}) = tr.deg C(β ◦ x^{n_{i+1}} ◦ γ⁻¹) ≤ 4, a contradiction. These contradictions mean that the assumptions of Corollary 3.3 hold for the element p1 ◦ ··· ◦ pr, and so p1 ◦ ··· ◦ pr ∈ D. In particular, D is a non-empty set.

The next lemma shows that, in general, the first theorem of J. F. Ritt does not hold for the cusp.

Lemma 3.5 In general, the number of irreducible polynomials in a decomposition into irreducible polynomials of an element of A is non-unique. Moreover, it can vary greatly.

Proof. Let p ∈ D and q ∈ Irr(A). Consider their composition a := p ◦ q. Fix a decomposition p1 ◦ ··· ◦ pr ∈ Dec(p), and then, for each i = 1, . . . , r, fix a root, say λi, of the polynomial pi′. Consider the elements of C:

a1 := p1 ◦ (x + λ1), a2 := (x + λ1)⁻¹ ◦ p2 ◦ (x + λ2), . . . , ar := (x + λ_{r−1})⁻¹ ◦ pr ◦ (x + λr).

Then a_{r+1} := (x + λr)⁻¹ ◦ q ∈ Irr(A) and a = p ◦ q = a1 ◦ ··· ◦ ar ◦ a_{r+1} are two irreducible decompositions for the element a with distinct numbers of irreducible factors.

Lemma 3.5 means that both theorems of J.
F. Ritt fail badly for the cusp. However, we can describe a procedure for obtaining all irreducible decompositions of any given element of A. Let a ∈ A\K. Take any decomposition p1 ◦ ··· ◦ pr ∈ Dec(a). Suppose that it is possible to insert brackets

(. . .) ◦ (. . .) ◦ ··· ◦ (. . .)

in such a way that inside the brackets are irreducible elements of A (in principle, this can be checked using Proposition 3.2). This gives an irreducible decomposition for the element a in A. Moreover, all irreducible decompositions of the element a in A can be obtained in this way.

Proof of Theorem 1.5. We keep the notation of Theorem 1.5. So, a ∈ A\K with l_A(a) = l(a), and X, Y ∈ Max(a). We have to show that the decomposition Y can be obtained from the decomposition X using some of the transformations (Adm), (Ca), (Cb) or (Cc). We call these transformations the cusp transformations. Note that Max(a) ⊆ Dec(a), and so X, Y ∈ Dec(a). Let X′, Y′ ∈ Max(a). We write X′ ∼_A Y′ if the decomposition Y′ can be obtained from the decomposition X′ by using the cusp transformations. The relation ∼_A on the set Max(a) is an equivalence relation since the cusp transformations are reversible. This means that the inverse of a transformation of the type (Adm) or (Ca) is a transformation of the type (Adm) or (Ca) respectively; and the inverse of a transformation of the type (Cb) or (Cc) is a transformation of the type (Cb) or (Cc) respectively. We write X′ ∼_C Y′ if the decomposition Y′ is obtained from the decomposition X′ by a single cusp transformation. Theorem 1.5 means that the set Max(a) is an equivalence class under the equivalence relation ∼_A, i.e. the equivalence relation ∼_A on Max(a) coincides with the equivalence relation ∼, by the second theorem of Ritt–Levi (the equivalence relation ∼ is defined in the proof of Theorem 1.1). We write X′ ∼_R Y′ if Y′ is obtained from X′ by a single Ritt transformation. Let r := l_A(a) = l(a).
Since X, Y ∈ Max(a), we have X = p1 ◦ ··· ◦ pr and Y = q1 ◦ ··· ◦ qr for some irreducible polynomials pi, qi ∈ C.

Case (α): K[x]*pr = K[x]*qr, i.e. qr = α ◦ pr for some polynomial α ∈ K[x]*. Let b := p1 ◦ ··· ◦ p_{r−1}. Then

b ◦ pr = a = q1 ◦ ··· ◦ qr = q1 ◦ ··· ◦ (q_{r−1} ◦ α) ◦ pr.

By (A4), we can delete pr at both ends of the chain of equalities above, and the result is

b = p1 ◦ ··· ◦ p_{r−1} = q1 ◦ ··· ◦ (q_{r−1} ◦ α).

By Corollary 2.2, the decomposition V := q1 ◦ ··· ◦ (q_{r−1} ◦ α) ∈ Dec(b) can be obtained from the decomposition U := p1 ◦ ··· ◦ p_{r−1} ∈ Dec(b) by applying, say t, Ritt transformations

U = U0 ∼_R U1 ∼_R U2 ∼_R ··· ∼_R Ut = V.

Then the decomposition Y = V ◦ pr can be obtained from the decomposition X = U ◦ pr by applying cusp transformations of the type (Adm) in the following way. First, we have the elements of the set Dec(a):

X = W0 := U0 ◦ pr, . . . , Wi := Ui ◦ pr, . . . , Wt := Ut ◦ pr, W_{t+1} := Y.

An important fact is that the last element of all these decompositions, namely pr, is an element of A. Let Ui := P1 ◦ ··· ◦ P_{r−1} where P1, . . . , P_{r−1} ∈ Irr(K[x]). For each polynomial Pj, fix a Pj-admissible element, say u_{ij}, of K[x]*, and consider the decomposition

Wi* = P1* ◦ ··· ◦ Pr* ∈ Max(a)

where P1* := P1 ◦ u_{i1}, P2* := u_{i1}⁻¹ ◦ P2 ◦ u_{i2}, . . . , P_{r−1}* := u_{i,r−2}⁻¹ ◦ P_{r−1} ◦ u_{i,r−1}, Pr* := u_{i,r−1}⁻¹ ◦ pr. It is obvious that the decomposition Wi* is obtained from the decomposition Wi by applying r − 1 transformations of the type (Adm). Let Adm(u_{i1}, . . . , u_{i,r−1}) denote their composition (in arbitrary order, since the transformations commute). We assume that for i = 0, t + 1 all the u's are equal to x. This means that the transformation Adm(x, . . . , x) is the identity transformation, and, obviously, W0* = W0 = X and W*_{t+1} = W_{t+1} = Y. So, there is the chain of elements of the set Max(a):

X = W0*, W1*, . . . , Wt*, W*_{t+1} = Y.

For each natural number i = 1, . . .
, t + 1, the decomposition Wi* is obtained from the decomposition W*_{i−1} by applying cusp transformations of the type (Adm): Adm(u_{i−1,1}⁻¹ ◦ u_{i1}, . . . , u_{i−1,r−1}⁻¹ ◦ u_{i,r−1}). Therefore, X ∼_A Y.

Case (β): K[x]*pr ≠ K[x]*qr. By Corollary 2.2, this means that pr = λ_{r−1}⁻¹ ◦ π ◦ λr for some units λ_{r−1}, λr ∈ K[x]* such that λr is π-admissible and the polynomial π is of one of the following types:

(a) π = T_l, where l is an odd prime number,
(b) π = x^s g(x^p), where s ≥ 1, g(x) ∈ K[x]\K, g(0) ≠ 0, and p is a prime number,
(c) π = x^p, where p is a prime number.

Remark. We exclude the situation s = 0 in the case (b) since otherwise we would have the case (c), due to the irreducibility of the element π and the equality g(x^p) = g(x) ◦ x^p. We consider the three cases separately and label them respectively as (βa), (βb) and (βc).

Case (βa): π = T_l where l is an odd prime number. By the second theorem of Ritt–Levi, the element qr in the decomposition Y = q1 ◦ ··· ◦ qr must be of the type μ ◦ T_m ◦ λr for some prime number m such that m ≠ l (see Case (β)) where λr is necessarily a T_m-admissible polynomial and μ ∈ K[x]*. If ν is the only root of the polynomial λr then

ν ∈ R(T_l′) ∩ R(T_m′) = ∅ (Lemma 2.12.(2)),

a contradiction. Therefore, this case is impossible.

Case (βb): π = x^s g(x^p) (as in the case (b) above). Then for the element qr there are two options: either qr ∈ K[x]* ◦ T_k ◦ λr for some prime number k or, otherwise, qr ∈ K[x]* ◦ x^q ◦ λr for some prime number q. For k ≠ 2, the first option is not possible since by interchanging X and Y we would have the impossible Case (βa) (recall that the cusp transformations are reversible). For k = 2, T_2 = (−1 + 2x) ◦ x², and so we have, in fact, only the second option, i.e. qr = μ ◦ x^q ◦ λr for some unit μ ∈ K[x]*. This means that the invariant number n_{P,q} ≥ 1. Let i be the greatest index such that pi ∈ K[x]* ◦ x^q ◦ K[x]*.
In this case, we call the element pi the largest x^q in the decomposition X, denoted L(X). The decompositions H(X) := p1 ◦ ··· ◦ p_{i−1} and T(X) := p_{i+1} ◦ ··· ◦ pr are called the head and the tail of the decomposition X respectively. The invariance of the number n_{P,q} means that we can control the largest x^q under Ritt transformations. The largest x^q remains unchanged under a Ritt transformation of either the head or the tail of X, and it moves to the right or left by one position if the largest x^q is involved in a Ritt transformation of the type (b) or (c) from the Introduction respectively. Let pi = λ_{i−1}⁻¹ ◦ x^q ◦ λi for some units λ_{i−1}, λi ∈ K[x]*. Then the tail T(X) of X has a clear structure. We claim that there exist units λ_{i+1}, . . . , λ_{r−2} ∈ K[x]* such that

pj = λ_{j−1}⁻¹ ◦ πj ◦ λj, j = i + 1, . . . , r − 1,

where πj is either x^n for a prime number n or, otherwise, x^t f(x^q) for some t ≥ 1 and f(x) ∈ K[x] such that deg(f) ≥ 1 and f(0) ≠ 0. The decomposition Y is obtained from the decomposition X by several Ritt transformations

X = X0 ∼_R X1 ∼_R ··· ∼_R Xk ∼_R ··· ∼_R Xm = Y.

Using the explicit form of Ritt transformations, the claim follows easily by backward induction on k, starting with the obvious case k = m − 1. Using the claim, we can produce r − i cusp transformations

X = Zi ∼_C Z_{i+1} ∼_C ··· ∼_C Zr

such that on each step the largest x^q moves one position to the right, and the last irreducible element in the decomposition Zr is qr = μ ◦ x^q ◦ λr. On the first step, Zi ∼_C Z_{i+1}, the cusp transformation changes the triple

(pi, p_{i+1}, p_{i+2}) = (λ_{i−1}⁻¹ ◦ x^q ◦ λi, λi⁻¹ ◦ π_{i+1} ◦ λ_{i+1}, p_{i+2})

into the triple

(pi*, p*_{i+1}, p*_{i+2}) = (λ_{i−1}⁻¹ ◦ x^n, x^q, λ_{i+1} ◦ p_{i+2}) if π_{i+1} = x^n, or
(pi*, p*_{i+1}, p*_{i+2}) = (λ_{i−1}⁻¹ ◦ [x^t f^q] ◦ ν, ν⁻¹ ◦ x^q, λ_{i+1} ◦ p_{i+2}) if π_{i+1} = x^t f(x^q),

provided i + 1 < r, where ν ∈ K[x]* is x^t f^q-admissible.
If i + 1 = r, the cusp transformation Z_{r−1} ∼_C Zr changes the pair

(p_{r−1}, pr) = (λ_{r−2}⁻¹ ◦ x^q ◦ λ_{r−1}, λ_{r−1}⁻¹ ◦ [x^s h(x^q)] ◦ λr)

into the pair

(p*_{r−1}, pr*) = (λ_{r−2}⁻¹ ◦ [x^s h^q], x^q ◦ λr)

where h(x^q) = g(x^p). The remaining cusp transformations are defined by the same formulae as above by changing the index i accordingly. Now, the decompositions Zr and Y satisfy the assumption of the case (α), and so Zr ∼_A Y. Since X ∼_A Zr and Zr ∼_A Y, we get X ∼_A Y.

Case (βc): π = x^p (as in the case (c) above). The element qr has the form μ ◦ π̃ ◦ λr where for the element π̃ we have the same three options (a), (b) or (c) as for the element π. Interchanging X and Y, we reduce the cases (a) and (b) for the element π̃ to the cases (a) and (b) for π, which have been considered already. For the last case, π̃ = x^q, we repeat word for word the arguments of the case (βb), starting from the claim there. The proof of Theorem 1.5 is complete.

Proof of Theorem 1.4. Theorem 1.4 follows easily from the first theorem of J. F. Ritt (or from Theorem 1.5 and the definition of the cusp transformations, i.e. the transformations (Adm), (Ca), (Cb) and (Cc)).

Proposition 3.6 In general, Theorem 1.4 does not hold for irregular elements.

Proof. Let m and n be distinct prime numbers, and let g(x) and h(x) be non-scalar polynomials of K[x] such that h(0) ≠ 0 and k := s + n deg(g) and l := 1 + m deg(h) are prime numbers for some natural number s ≥ 2. Then the degrees of the polynomials x^n, x^s g(x^n) and xh(x^m) are prime numbers. Hence, the polynomials x^n, x^s g(x^n) and x^s g^n are elements of the set Irr(A), and xh(x^m) ∈ Irr(K[x])\A. It is obvious that

p := [x^s g(x^n)] ◦ [xh(x^m)], q := x^n ◦ [xh(x^m)] ∈ D,

and the element a := x^n ◦ [x^s g(x^n)] ◦ [xh(x^m)] ∈ A is irregular since h(0) ≠ 0. Then

a = x^n ◦ p = x^s g^n ◦ q ∈ Dec_A(a),

(deg(x^n), deg(p)) = (n, kl) and (deg(x^s g^n), deg(q)) = (k, nl). Since k > n, we have (n, kl) ≠ (k, nl) and (n, kl) ≠ (nl, k).
This means that Theorem 1.4 does not hold for the irregular element a.

In general, for an element a of A there exists a decomposition p1 ◦ ··· ◦ pt ∈ Dec_A(a) with t < l_A(a), i.e.

Max(a) ≠ Dec_A(a).   (14)

Example. Let k be an odd prime number and g a non-scalar polynomial of K[x] such that l := s + 2 deg(g) is a prime number for some natural number s ≥ 2. Let λ be a root of the derivative T_k′ of the polynomial T_k. Consider the element a := [x^s g²] ◦ T_k ◦ T_2 ∈ A. The elements p1 := x^s g², p2 := T_k ◦ (x + λ) and p3 := (x − λ) ◦ T_2 of the algebra A are irreducible since their degrees are prime numbers. Let q1 := T_2. Note that q2 := [x^s g(x²)] ◦ T_k ∈ D since T_k ∈ (x)\(x²) and s ≥ 2. Then

a = p1 ◦ p2 ◦ p3 = q1 ◦ q2 ∈ Dec_A(a).

For an element a of A, the number def(a) := l(a) − l_A(a) is called the defect of the element a. The element a is irregular iff def(a) > 0. For each root λ of the derivative a′ of a polynomial a of K[x], the number

ind_a(λ) := max{i | ∃ p1 ◦ ··· ◦ pr ∈ Dec(a) such that pi′((p_{i+1} ◦ ··· ◦ pr ◦ x)(λ)) = 0}

is called the index of λ. If a ∈ A then

l_A(a) = ind_a(0).   (15)

To prove this fact, note first that it is obvious that l_A(a) ≤ ind_a(0). For i := ind_a(0), let us fix a decomposition p1 ◦ ··· ◦ pr ∈ Dec(a) with pi′((p_{i+1} ◦ ··· ◦ pr ◦ x)(0)) = 0. For each j = 1, . . . , i − 1, let uj be a pj-admissible element of K[x]*. The elements

q1 := p1 ◦ u1, q2 := u1⁻¹ ◦ p2 ◦ u2, . . . , q_{i−1} := u_{i−2}⁻¹ ◦ p_{i−1} ◦ u_{i−1}, qi := u_{i−1}⁻¹ ◦ pi ◦ ··· ◦ pr

belong to the algebra A, and a = q1 ◦ ··· ◦ qi. Hence, l_A(a) ≥ ind_a(0). This establishes the equality (15). For each element a of A with i := ind_a(0), let

Dec(a, 0) := {p1 ◦ ··· ◦ pr ∈ Dec(a) | pi′((p_{i+1} ◦ ··· ◦ pr ◦ x)(0)) = 0}.

The next lemma gives all the decompositions of maximal length for each element of A.

Lemma 3.7 Let a be an element of A and i := ind_a(0).
Then

Max(a) = {(p1 ◦ u1) ◦ (u1⁻¹ ◦ p2 ◦ u2) ◦ ··· ◦ (u_{i−2}⁻¹ ◦ p_{i−1} ◦ u_{i−1}) ◦ (u_{i−1}⁻¹ ◦ pi ◦ ··· ◦ pr) | p1 ◦ ··· ◦ pr ∈ Dec(a, 0), uj ∈ K[x]* is pj-admissible}.

Proof. It is obvious that the RHS ⊆ Max(a). On the other hand, if q1 ◦ ··· ◦ qi ∈ Max(a) then q1 ◦ ··· ◦ qi ∈ the RHS: it suffices to put pj = qj and uj = x.

By Lemma 3.7, if the element a of A is irregular and q1 ◦ ··· ◦ qi ∈ Max(a) then necessarily q1, . . . , q_{i−1} ∈ C and qi ∈ D.

Acknowledgements

The paper was finished during the author's visit to the IHES. Support and hospitality of the IHES are gratefully acknowledged. The author would like to thank M. Zieve for comments and interesting discussions.

References

R. M. Avanzi and U. M. Zannier, The equation f(X) = f(Y) in rational functions X = X(t), Y = Y(t), Compositio Math., 139 (2003), no. 3, 263–295.
V. V. Bavula, Factorization of monomorphisms of a polynomial algebra in one variable, Glasgow Math. Journal, (to appear), arXiv:math.RA/0701211.
Yu. F. Bilu and R. F. Tichy, The Diophantine equation f(x) = g(y), Acta Arith., 95 (2000), no. 3, 261–288.
F. Binder, Characterization of polynomial prime bidecompositions: a simplified proof, Contributions to General Algebra, 9 (Linz, 1994), 61–72, Hölder-Pichler-Tempsky, Vienna, 1995.
F. Dorey and G. Whaples, Prime and composite polynomials, J. Algebra, 28 (1974), 88–101.
G. Eigenthaler and H. Woracek, Permutable polynomials and related topics, Contributions to General Algebra, 9 (Linz, 1994), 163–182, Hölder-Pichler-Tempsky, Vienna, 1995.
H. T. Engstrom, Polynomial substitutions, Amer. J. Mathematics, 63 (1941), no. 2, 249–255.
M. Fried, On a theorem of Ritt and related Diophantine problems, J. Reine Angew. Math., 264 (1973), 40–55.
M. Fried and R. MacRae, On the invariance of chains of fields, Illinois J. Math., 13 (1969), 165–171.
J. Gutierrez and D. Sevilla, On Ritt's decomposition theorem in the case of finite fields, Finite Fields Appl., 12 (2006), no. 3, 403–412.
H. Levi, Composite polynomials with coefficients in an arbitrary field of characteristic zero, Amer. J. Mathematics, 64 (1942), no. 1, 389–400.
F. Pakovich, Prime and composite Laurent polynomials, arXiv:0710.3860.
J. F. Ritt, Prime and composite polynomials, Trans. Amer. Math. Soc., 23 (1922), no. 1, 51–66.
P. Tortrat, Sur la composition des polynômes, Colloq. Math., 55 (1988), no. 2, 329–353.
A. Schinzel, Selected Topics on Polynomials, University of Michigan Press, Ann Arbor, 1982.
U. Zannier, Ritt's second theorem in arbitrary characteristic, J. Reine Angew. Math., 445 (1993), 175–203.
M. E. Zieve, Decompositions of Laurent polynomials, arXiv:0710.1902.

Department of Pure Mathematics
University of Sheffield
Hicks Building
Sheffield S3 7RH
UK
email: [email protected]

IHES
Le Bois-Marie
35, Route de Chartres
F-91440 Bures-sur-Yvette
France
email: [email protected]
https://www.expii.com/t/boiling-point-elevation-overview-examples-8044
Boiling Point Elevation — Overview & Examples (Expii)

Boiling point elevation is a colligative property of solutions. The boiling point of a pure solvent increases when a solute is added to it.

Explanations (2)

Eric Sears — Text 1

We've seen that adding a solute lowers a solvent's freezing temperature. We called it freezing point depression. What about the boiling point? It turns out that the boiling point increases. That's boiling point elevation. Both freezing point depression and boiling point elevation are examples of colligative properties. Colligative properties are physical properties that change because of solute addition. But the solute's identity does not matter! We only care about the amount: colligative properties depend only on the concentration of dissolved solute particles.

Only the amount? Really!? For colligative properties, we make an assumption. We assume the solute is not volatile. Typically we are talking about adding ionic salts to water. The salt's boiling point (about 1300 °C) is well above the water's. So, only the water vaporizes. Our assumption is valid.

What Causes Boiling Point Elevation?

What causes boiling point elevation? Entropy! Remember, entropy is a measure of energy dispersal. Solute addition increases the variety of van der Waals forces. We have more electrostatic interactions. So, our energy is more spread out. Higher entropy means our system is more stable.

We can also consider the vapor pressure. Remember, boiling happens when vapor pressure equals atmospheric pressure. By adding a solute, we lowered the vapor pressure of the solvent. On the surface, some solute particles replace the solvent. But the solutes only vaporize at very high temperatures. So, there are fewer surface molecules that can vaporize.

What about the surface tension? Let's say our solvent is water. It's the most common! Let's say our solute is an ionic salt. It strengthens the surface tension! That's because ion-dipole interactions have a larger bond energy than hydrogen bonds.
A greater surface tension means lower vapor pressure.

Boiling Point Elevation Formula

The boiling point elevation formula is similar to freezing point depression. The equation is

ΔT_B = K_B × m

ΔT_B is the change in the boiling point. K_B is the boiling point elevation constant; its units are °C/m. The constant is unique for each solvent. The most common solvent is water; its K_B = 0.51 °C/m. m is the solution's molality. Remember, molality's units are mol solute / kg solvent.

Boiling Point Elevation Practice Problem

Let's return to our antifreeze problem. Our cars also use boiling point elevation. Remember, the antifreeze circulates through the coolant system. The engine transfers heat to the antifreeze. So, your car's engine stays at safe operating temperatures. If your coolant started to boil, your engine could overheat.

Let's look at an example problem. Again, we'll say we mixed one gallon of antifreeze solution (about 3.8 liters). The ratio of ethylene glycol to water is 1:1. The density of ethylene glycol is 0.7857 g/mL. Its molar mass is 62.1 g/mol. Calculate the boiling point elevation of water.

Step 1: Find the kilograms of water and moles of ethylene glycol. Use dimensional analysis.

3.8 L solution × (1 L ethylene glycol / 2 L solution) × (1000.0 mL / 1 L) × (0.7857 g / 1 mL) × (1 mol / 62.1 g) = 24.0 mol ethylene glycol

3.8 L solution × (1.0 L water / 2.0 L solution) × (1.0 kg / 1 L) = 1.9 kg water

Step 2: Calculate the molality.

m = mol solute / kg solvent = 24.0 mol ethylene glycol / 1.9 kg water = 12.6 molal

Step 3: Calculate the boiling point elevation.

ΔT_B = K_B × m = 0.51 °C/m × 12.6 m = 6.4 °C

We raised the boiling point by 6.4 °C. So the new boiling point would be 106.4 °C or 223.5 °F. Most engines run between 90 °C and 104 °C (195 °F and 220 °F). If you added less ethylene glycol, your car could overheat!
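The worked example above can be reproduced in a few lines of Python. The numbers are the ones from the text; the helper name is our own (a sketch, not Expii's code):

```python
# Antifreeze example from the text: 3.8 L of a 1:1 ethylene glycol/water mix.
MOLAR_MASS_EG = 62.1   # g/mol, ethylene glycol
DENSITY_EG = 0.7857    # g/mL

def boiling_point_elevation(v_solution_L, kb=0.51):
    """Return ΔT_B in °C for a 1:1 (by volume) glycol/water solution."""
    v_eg_L = v_solution_L / 2
    v_water_L = v_solution_L / 2
    mol_eg = v_eg_L * 1000 * DENSITY_EG / MOLAR_MASS_EG   # L → mL → g → mol
    kg_water = v_water_L * 1.0                            # water ≈ 1 kg/L
    molality = mol_eg / kg_water                          # mol solute / kg solvent
    return kb * molality

dT = boiling_point_elevation(3.8)   # ≈ 6.45 °C; the text's rounded molality gives 6.4 °C
```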
Eric Sears — Video 1

(Video) Boiling Point Elevation With Example Problem, by Denovo Tutor. In this video, Denovo Tutor explains boiling point elevation. He starts with a quick description of vapor pressure, atmospheric pressure, and boiling. Next, he explains why adding a solute affects the vapor pressure. Finally, he demonstrates the boiling point elevation equation and shows an example problem. Do you need help with boiling point elevation? This video will explain the concept and the math!
https://www.linkedin.com/advice/3/how-can-you-calculate-maximum-axial-stress-niibc
How can you calculate the maximum axial stress in a member?
Powered by AI and the LinkedIn community

1 Direct method
2 Differential method
3 Here's what else to consider

If you are a mechanical engineer, you may have encountered situations where you need to design or analyze a member that is subjected to axial forces. Axial forces are those that act along the longitudinal axis of the member, such as tension or compression. The axial stress is the ratio of the axial force to the cross-sectional area of the member, and it is an important indicator of the strength and stability of the member. In this article, you will learn how to calculate the maximum axial stress in a member using two methods: the direct method and the differential method.

1 Direct method

The direct method is the simplest way to calculate the maximum axial stress in a member. It assumes that the cross-sectional area of the member is constant along its length, and that the axial force is uniformly distributed over the cross-section. In this case, the maximum axial stress is equal to the average axial stress, which is given by:

sigma = P / A

where sigma is the maximum axial stress, P is the axial force, and A is the cross-sectional area of the member.
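The formula can be checked numerically; this is a sketch (the function name is our own) using A = πd²/4 for a solid circular cross-section:

```python
import math

def axial_stress_mpa(force_N, diameter_m):
    """σ = P / A for a solid circular rod, returned in MPa."""
    area = math.pi * diameter_m ** 2 / 4   # cross-sectional area, m^2
    return force_N / area / 1e6            # Pa → MPa

sigma = axial_stress_mpa(5e3, 0.010)   # 5 kN on a 10 mm rod → ≈ 63.66 MPa
```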
For example, if a steel rod with a diameter of 10 mm and a length of 1 m is subjected to a tensile force of 5 kN, the cross-sectional area is A = pi x d^2 / 4 = pi x 0.01^2 / 4 = 7.854 x 10^-5 m^2, and the maximum axial stress is:

sigma = 5 x 10^3 / (pi x 0.01^2 / 4) = 63.66 MPa

Contribution (Fabio Gonçalves Cavalcante, Eng MSc, Supervisor of Rolling Stock Mechanical Engineering at Metrô de São Paulo; translated from Portuguese): Taking the cross-sectional area as constant along the element, the acting stress (Greek letter sigma, "σ") is the axial force "P" distributed uniformly over the cross-sectional area "A", given by σ = P/A. In the International System (SI) of units this is N/m² = Pa (pascal). In SI-based designs the most common unit is N/mm² = MPa (megapascal).

2 Differential method

The differential method is more accurate and general than the direct method, but it requires more calculations and assumptions. It applies to members that have varying cross-sectional areas along their length, or that have non-uniform axial force distributions. In this case, the maximum axial stress occurs at the point where the cross-sectional area is minimum, or where the axial force is maximum. To find this point, you need to use the following equation:

dP / dx = -q

where P is the axial force, x is the distance along the member, and q is the distributed axial force per unit length. This equation relates the change in axial force to the distributed axial force, and it can be integrated to find the axial force at any point. Then, you can divide the axial force by the cross-sectional area at that point to get the axial stress.
For example, consider a tapered steel rod with a diameter of 20 mm at one end and 10 mm at the other end and a length of 1 m, subjected to a uniformly distributed tensile force of 10 kN/m. Measuring x from the larger end and taking the accumulated axial force (in magnitude) as

P = 10 x 10^3 x

the diameter at any point is d = 0.02 - 0.01 x, so the cross-sectional area at any point is

A = pi x (0.02 - 0.01 x)^2 / 4

The axial stress at any point is

sigma = 10 x 10^3 x / (pi x (0.02 - 0.01 x)^2 / 4)

The maximum axial stress occurs at x = 1 m, where the cross-sectional area is minimum:

sigma = 10 x 10^3 / (pi x 0.01^2 / 4) = 127.32 MPa (tension)

Contribution (Anand Prakash, Honeywell): To calculate the maximum axial stress in a member using the differential methodology: 1. Break the member into differential elements. 2. Apply axial force equilibrium to each element. 3. Use the stress formula. 4. Integrate along the length to obtain the cumulative axial stress. 5. Identify critical points and apply boundary conditions. 6. Verify against design codes for compliance. 7. Perform a sensitivity analysis and document the methodology and results.

3 Here's what else to consider
Contribution (Fabio Gonçalves Cavalcante, Eng MSc; translated from Portuguese): Nowadays, the most practical way to obtain the stresses acting on an element is to use software-based Finite Element Methods.
https://www.learnzoe.com/blog/addition-with-carryover/
Addition with Carryover

What is Addition with Carry Over?

Addition with carryover is a math concept used to add two or more numbers when the sum of the digits in a column is greater than or equal to the base of the number system. When this happens, the result of the addition in that column creates a "carry," which is added to the next column during the calculation.

For example, let's consider the addition problem 78 + 44:

Start by adding the rightmost digits together: 8 + 4 = 12. Since 12 is greater than 9 (the maximum digit in base 10), we create a carry of 1. Write down the rightmost digit of the sum (which is 2) and carry the 1 to the next column.
Add the digits in the next column: 7 + 4 + 1 = 12. Again, since 12 is greater than 9, we create another carry of 1. Write down the rightmost digit (which is 2) and carry the 1 to the next column.
Finally, add the carry in the leftmost column: 1 + 0 = 1. Write down the result, which is 122.

This concept of carrying over the excess from one column to the next allows us to add larger numbers and perform more complex calculations. Addition with carryover is an essential skill in mathematics. It is used in various fields, including computer science, accounting, and engineering. Remember to practice addition with carryover to enhance your mathematical and problem-solving skills.

Carry Over Examples

Illustrations of addition problems with carryover

When performing addition with carryover, there are situations where the sum of two digits exceeds 9, resulting in a carryover to the next column. Here are a few examples to help understand this concept:

Example 1:

  74
+ 36
 110

In this example, the ones column (4 + 6) results in a sum of 10, greater than 9. Therefore, we write down 0 and carry 1 to the tens column.

Example 2:

 237
+ 486
 723

In this example, the ones column (7 + 6) results in a sum of 13, greater than 9.
We write down 3 and carry over 1 to the tens column. Example 3: 1285 + 9768 = 11053. In this example, the ones column (5 + 8) results in a sum of 13, greater than 9. We write down 3 and carry over 1 to the tens column. Then the tens column (8 + 6 + 1) results in a sum of 15, so we write down 5 and carry over 1 to the hundreds column; the hundreds column (2 + 7 + 1) gives 10, so we write down 0 and carry 1 to the thousands column. Carryover is an essential concept, as it allows us to calculate the sum of larger numbers accurately. It is important to carefully observe and carry the digits properly to ensure correct results. You can visit the Carry (arithmetic) Wikipedia page for further information about addition and carryover. Keep practicing, and soon you'll be an expert at addition with carrying over! Step-by-Step Addition with Carry Over Detailed instructions on how to perform addition with carryover Performing addition with carryover is an essential skill in elementary mathematics. It is used when the sum of two digits in the same place value exceeds nine. Here is a step-by-step guide on how to perform addition with carryover: Start by writing the two numbers you want to add one below the other. Align the digits according to their place value (ones, tens, hundreds, etc.). Begin adding the digits in the rightmost column (ones place). If the sum is less than or equal to nine, write it below. If the sum is more than nine, write down its ones digit and carry the tens digit to the next column. Move to the next column (tens place) and add the carried-over digit together with the digits from both numbers. Follow the same process as before: write the sum below and carry over if necessary. Repeat this process for each subsequent column, carrying over as needed. Once you have added all the digits, check for any remaining carried-over digit. If there is one, write it in the leftmost column. The final result is the sum of the two numbers.
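The steps above can be sketched in code. This is a minimal illustration (the function name is our own), working digit by digit from the right with an explicit carry variable:

```python
def add_with_carry(a, b):
    """Add two non-negative integers column by column, tracking the carry."""
    result = 0
    place = 1          # current place value: 1, 10, 100, ...
    carry = 0
    while a > 0 or b > 0 or carry:
        column = a % 10 + b % 10 + carry   # digits in this column plus carry
        carry = column // 10               # carry the tens digit onward
        result += (column % 10) * place    # write down the ones digit
        a //= 10
        b //= 10
        place *= 10
    return result

print(add_with_carry(78, 44))     # 122, matching the worked example above
print(add_with_carry(1285, 9768)) # 11053, matching Example 3
```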
Remember to practice this method with various numbers to strengthen your addition skills. Now you are ready to confidently tackle addition problems that involve carrying over! Importance of Carry Over in Mathematics In mathematics, carryover refers to moving a digit to the next column when adding or subtracting numbers. It is a basic idea that is essential for making correct calculations. Here are some reasons why carryover is important in mathematics: Ensures Accuracy: Carryover helps maintain the accuracy of calculations by correctly carrying forward the value of a digit. Without carryover, calculations lead to errors and incorrect results. Allows for the Addition of Larger Numbers: Carryover allows us to add larger numbers by carrying the value of a digit to the next column. It enables us to perform addition on numbers with multiple digits and obtain accurate results. Facilitates Subtraction: The same idea appears in subtraction. When subtracting, if a digit in the subtrahend is larger than the corresponding digit in the minuend, borrowing from the next higher column is necessary to adjust the calculation. Builds a Foundation for Advanced Math: Understanding and mastering carryover in basic arithmetic lays the foundation for more advanced mathematical concepts. Carryover is a fundamental skill that is extended and used in operations like multiplication and division. By grasping the concept of carryover, students develop problem-solving and logical-reasoning skills. It helps them understand the structure and patterns in numbers, leading to a deeper understanding of mathematics. In conclusion, carryover is a crucial concept in mathematics that ensures accuracy, enables the addition of larger numbers, facilitates subtraction, and builds a foundation for advanced math.
It is an essential skill that students should have a firm grasp of to excel in their mathematical journey. Common Mistakes in Addition with Carry Over Identification and correction of common errors in addition problems with carryover When it comes to addition problems with carryover, there are a few common mistakes that many people make. These mistakes can lead to wrong results and a lot of confusion. Here are some of the most common mistakes and how to avoid them: Forgetting to carry over: One of the most common mistakes is forgetting to carry over the extra digit when adding numbers. It usually happens when the sum of the digits in a column is greater than 9. To avoid this mistake, double-check your work and make sure you carry the correct digit to the next column. Misplacing the carried-over digit: Another common error is misplacing the carried-over digit. It can happen when you are not careful with your writing or get confused about the placement of digits. To avoid this mistake, clearly label each column and keep your writing neat and organized. Adding incorrectly: Sometimes, errors occur when adding the digits in a column, which leads to incorrect sums and, ultimately, incorrect answers. To avoid this mistake, take your time and carefully add each digit in the column, starting from the rightmost digit. Not aligning the columns correctly: Improper alignment of columns makes it harder to add correctly and leads to errors. Make sure to align the digits properly so that each digit is added in the right place. Rounding errors: These can occur when working with decimals or approximating numbers. To minimize rounding errors, use precise calculations and avoid rounding until the final answer is required. By being aware of these common mistakes and avoiding them, you can improve your accuracy when working with addition problems involving carryover. Practice and repetition are also crucial in mastering this skill.
So, remember to take your time, double-check your work, and ask for help if needed. Practice Exercises for Addition with Carry Over To reinforce your understanding of addition with carryover, here are some practice exercises to solve. Grab a pen and paper, and let's get started! 527 + 348 = 964 + 582 = 178 + 648 = 736 + 829 = 492 + 341 = 865 + 219 = 617 + 463 = 974 + 256 = 348 + 671 = 582 + 389 = Remember to follow these steps: Step 1: Write the first number on top and the second below it, lining up the digits. Step 2: Start adding from the rightmost digits (ones place) and work your way to the left. Step 3: If the sum of two digits is greater than 9, carry the tens digit to the next column to the left and write the ones digit in the current column. Step 4: Keep doing this until you reach the leftmost column. Feel free to use a calculator if you'd like to check your answers. Practice makes perfect, so keep practicing addition with carryover, and soon it will become second nature to you! Real-World Applications of Addition with Carry Over In everyday life, addition with carryover is a fundamental mathematical operation used in various situations. Some instances where addition with carryover is commonly applied include: Counting Money: When counting money, especially when the total amount exceeds the value of a single denomination, addition with carryover is required to calculate the total sum accurately. Shopping: Addition with carryover is crucial when calculating the total cost of multiple items during shopping. It ensures that the correct total amount is calculated, considering additional costs or taxes. Calculating Grades: In academic settings, addition with carryover is frequently used to calculate grades. Every assignment or test contributes to the overall grade, and accurately carrying over points is crucial for reflecting the student's performance.
Recipe Measurements: When following a recipe that requires scaling the quantities of ingredients up or down, addition with carryover is necessary to calculate the adjusted measurements accurately. Time Calculations: Addition with carryover is employed when calculating time durations that span multiple hours or days. Whether calculating work hours or planning events, this operation ensures accurate time management. These are just a few examples of how addition with carryover is applied in real-life situations. Mastering this mathematical concept is essential for everyday tasks that involve calculations. Advantages and Disadvantages of Addition with Carry Over Examining the pros and cons of using carryover in addition Addition with carryover is a fundamental mathematical technique that allows us to solve more complex addition problems. The method is useful, but it has both pros and cons. Here is a look at the advantages and disadvantages of using carryover in addition: Advantages of Addition with Carry Over: Efficiency: Carryover allows us to easily add larger numbers without requiring multiple separate calculations. Accuracy: Using carryover helps ensure accurate results by correctly accounting for the carried value when adding digits in the same place value. Flexibility: Carryover can be applied to any place value, allowing for the addition of numbers of varying magnitudes. Real-world relevance: The carryover concept applies in various real-life situations, such as balancing a checkbook or adding up grocery expenses. Disadvantages of Addition with Carry Over: Complexity: Carryover can add complexity to addition problems, especially when dealing with multiple digits in different place values. Potential for errors: If not executed correctly, carryover can lead to errors and incorrect results. Careful attention to detail is required.
Time-consuming: In some cases, addition with carryover can be more time-consuming than other methods, especially with large numbers. It is important to note that while carryover is commonly used in addition, alternative methods such as column addition or a calculator can also be employed depending on the situation. Alternative Methods to Addition with Carry Over Introduction to alternative techniques for addition without carry over The traditional addition method with carryover is widely taught and used. However, alternative techniques can be just as effective and provide a different approach to solving addition problems. These methods can help children who struggle with carryover, or anyone who wants to try different ways to solve addition problems. Here are a few alternative techniques worth considering: The Break and Join Method: This involves breaking the numbers into smaller, more manageable parts and then rejoining them to find the sum. It can be especially useful when adding larger numbers. The Line-Up Method: Instead of stacking numbers vertically, the line-up method involves aligning the numbers horizontally. This makes it easier to see and add the corresponding place values. The Splitting Method: With the splitting method, numbers are split into place values and added separately. The partial sums are then combined to find the total sum. By exploring these alternative techniques, you can find new ways to approach addition problems and improve your speed and accuracy. It's important to note that not every method works for everyone, so it's worth experimenting to find the method that suits you best. Remember, the goal is to find the method that helps you visualize and understand addition in a way that makes sense. So try different techniques and see which works best for you. Conclusion In conclusion, addition with carryover is an important mathematical concept that allows us to add larger numbers accurately.
Here are the key points to remember about addition with carryover: Addition with carryover is used when the sum of the digits in a column exceeds the base value of that column. The carry is the digit that is moved to the next column. It is important to keep track of the carries to ensure accuracy in the final sum. Addition with carryover can be applied to both whole numbers and decimal numbers. This concept is fundamental in various mathematical operations, including multiplication and division.
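As a sketch of the splitting method mentioned above (the function name is our own), each number is split into its place values, the parts are added separately, and the partial sums are combined:

```python
def split_add(a, b):
    """Add two non-negative integers by splitting them into place values."""
    parts = []
    place = 1
    while a > 0 or b > 0:
        parts.append((a % 10 + b % 10) * place)  # partial sum for this place
        a //= 10
        b //= 10
        place *= 10
    return sum(parts)  # combine the partial sums

print(split_add(527, 348))  # 875: (500 + 300) + (20 + 40) + (7 + 8)
```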
190649
https://www.stvincentsprimary.org.uk/blog/maths-numbers-to-a-million-tue-12-5-20/
Maths - Numbers to a Million (Tue 12.5.20) - St Vincent's Catholic Primary School Maths – Numbers to a Million (Tue 12.5.20) Year 5 12 May 2020 21 comments Workbook: 5A Chapter: 1 Numbers to 1000 000 Lesson: 1 and 2 Reading and Writing Numbers to 1 000 000 Good morning, Year 5, Thanks for attempting the work yesterday. Position and Movement is not something we have covered in class, so from now on, just like the other classes, we will just be revisiting lessons we have already done so that you can consolidate your learning. This means there may be worksheets you have already completed, but that's okay; you can try them again in your yellow home learning book. Today, you will be revisiting how to read and write numbers to 1,000,000 (one million). Please watch this video to revisit place value: BBC Bitesize: What is place value? Here is a demonstration of how to complete your MNP questions: As you can see, there are two 10,000 discs, which equal twenty thousand (20,000). Then there are three 1000 discs, which equal three thousand (3000). There are six 100 discs, which equal six hundred (600). There are four 10 discs, which equal forty (40). Finally, there are nine 1 discs, which equal nine (9). You then need to add the ones in pink together to work out the number before writing it in digits and words.
So, 20,000 + 3,000 = 23,000 600 + 40 + 9 = 649 The total is: 23,649 and in words this will be ‘twenty-three thousand six hundred and forty-nine’. Extension: Reading and Writing Numbers extension Good luck! Mrs Avdiu & Ms Robertson Printable version: Tuesday maths blog week 4_ place value 21 comments on “Maths – Numbers to a Million (Tue 12.5.20)” Tommaso says: 12 May 2020 at 9:23 am Good morning everyone! I hope you are all having a nice day. I really enjoyed doing today’s maths no problem worksheets. I hope you all have a nice day. Reply 2. Nika says: 12 May 2020 at 9:51 am Hello everyone! I hope that you have had a great day so far! I finished today’s task and it wasn’t as hard as I thought it would be….. Stay safe, be happy, have fun! ~ Nika ??? Reply 3. Maia says: 12 May 2020 at 10:25 am Thank you Mrs Avdiu and Ms Robertson, I really enjoyed today’s maths assignment. It was good to revise. I hope you enjoy the rest of your day. Maia? Reply 4. Mrs Avdiu says: 12 May 2020 at 11:36 am so glad you enjoyed it and it was lovely to speak to you today! 5. Elena says: 12 May 2020 at 10:35 am Hello Everyone !! I am a bit confused of what pages I should do Can someone please help me ? Thank You ?? Reply 6. Nika says: 12 May 2020 at 11:29 am Ok, so ‘Content Page’ in the book it tells you where the worksheets are. Then read that and you will find today’s chapter. The pages are 1 to 4. ~ Nika ? 7. Mrs Avdiu says: 12 May 2020 at 12:09 pm That’s right! It can also be accessed online. 8. Mrs Avdiu says: 12 May 2020 at 11:36 am Chapter: 1 Numbers to 1000 000 Lesson: 1 and 2 Reading and Writing Numbers to 1 000 000 9. Ms Robertson says: 12 May 2020 at 10:57 am Reading and writing large numbers always confuses me – so glad we are revisiting place value Mrs Avdiu. Good luck everyone! Reply 10. Mrs Avdiu says: 12 May 2020 at 12:16 pm thank you Miss! I am glad you also think it would be useful to revisit this concept 11. Ms Robertson says: 12 May 2020 at 11:06 am Hi Elena!
Workbook; 5A Chapter; 1 – Numbers to 1000 000 Lessons; 1 + 2 Good luck – have a lovely day! Reply 12. Kayla says: 12 May 2020 at 11:22 am Hello everyone! I really enjoyed the worksheets that I was given and I found them quite easy!! I hope you all have a nice day! Kayla Reply 13. Elena says: 12 May 2020 at 11:26 am Ohh it is okay I have found out what pages I need to do ???? Reply 14. Mrs Avdiu says: 12 May 2020 at 12:10 pm no problem! I always write them on the top of the blog so you may have scrolled and missed it 15. SARA says: 12 May 2020 at 12:35 pm Hi everyone. The questions you had set were already done in the Maths No Problem book where already done so I did the Mind Workout instead, which practised my reading the big numbers skill.????? Reply 16. Mrs Avdiu says: 12 May 2020 at 12:49 pm Hi Sara. Please read the blog carefully as it says that the pages may have been completed already in school but this does not matter as we must revisit concepts studied. You can try the questions again in your yellow book by looking at them online or covering the answers? How was the Mind workout? Did you try the suggested activity on the blog page? 17. Elsa says: 12 May 2020 at 1:53 pm Hello everyone, I hope that you are all enjoying your day so far. I did the maths worksheets and found them quite easy. Have a nice day! Reply 18. Jeanne says: 12 May 2020 at 2:43 pm Hi Y5 I hope everybody’s well and safe! I found today’s math task really easy. However it’s always good to recap! I hope you enjoy the rest of the day! Reply 19. Regan & Erin says: 12 May 2020 at 2:46 pm Regan and Erin have completed today’s maths. Reply 20. Violette Thomas says: 12 May 2020 at 4:36 pm Hello everyone, I hope you have a great day so far!! ? I loved today math no problem!! ??? See you all tomorrow!!!!!!!!!!!!! ? Violette ? Reply 21. Renee says: 12 May 2020 at 4:38 pm Hello everyone. I hope your well. I found todays math really easy and I got all the questions correct. 
Doing the math task was fun overall.? I hope you have a lovely rest of the day.? Reply
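The disc demonstration in the lesson above amounts to composing a number from its place values. A small sketch of the idea (the helper name and dictionary form are our own):

```python
def compose(discs):
    """Build a number from {place value: count} pairs, as with place-value discs."""
    return sum(place * count for place, count in discs.items())

# Two 10,000 discs, three 1000 discs, six 100 discs, four 10 discs, nine 1 discs
n = compose({10_000: 2, 1_000: 3, 100: 6, 10: 4, 1: 9})
print(n)  # 23649
```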
190650
https://www.gauthmath.com/solution/1985519986481796/5-e-3-6-3-7
Solved: (e-3)/6 = 3/7 [Math] Question: (e-3)/6 = 3/7 Answer: e = 39/7 Explanation: Multiply both sides by 6. To isolate the term e - 3, multiply both sides of the equation by 6: (e-3)/6 × 6 = 3/7 × 6, so e - 3 = 18/7. Add 3 to both sides. To solve for e, add 3 to both sides of the equation: e = 18/7 + 3. Convert 3 to a fraction with a denominator of 7: 3 = (3 × 7)/7 = 21/7. Add the fractions: e = 18/7 + 21/7 = (18 + 21)/7 = 39/7. State the final answer: e = 39/7.
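The solution can be checked mechanically with exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction

e = Fraction(18, 7) + 3              # e = 18/7 + 21/7
assert e == Fraction(39, 7)
assert (e - 3) / 6 == Fraction(3, 7) # the original equation holds
print(e)  # 39/7
```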
190651
https://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas_theorem
Gauss–Lucas theorem - Wikipedia From Wikipedia, the free encyclopedia Geometric relation between the roots of a polynomial and those of its derivative In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P'. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P' all lie within the convex hull of the roots of P, that is the smallest convex polygon containing the roots of P.
When P has a single root then this convex hull is a single point, and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem. Illustration of Gauss–Lucas theorem, displaying the evolution of the roots of the derivatives of a polynomial. Formal statement If P is a (nonconstant) polynomial with complex coefficients, all zeros of P' belong to the convex hull of the set of zeros of P. Special cases It is easy to see that if $P(x) = ax^2 + bx + c$ is a second degree polynomial, the zero of $P'(x) = 2ax + b$ is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment. For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P' are the foci of the Steiner inellipse, which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P. For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P' lie in two of the three triangles formed by the interior zero of P and two other zeros of P. In addition, if a polynomial of degree n with real coefficients has n distinct real zeros $x_1 < x_2 < \cdots < x_n$, we see, using Rolle's theorem, that the zeros of the derivative polynomial are in the interval $[x_1, x_n]$, which is the convex hull of the set of roots.
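As a numerical sanity check of the theorem (the cubic used here is our own example), one can verify that each zero of P' is a convex combination of the zeros of P, with the weights $|z - a_i|^{-2}$ that appear in the proof below:

```python
import numpy as np

a = np.array([0.0, 1.0, 1j])        # zeros of P: an arbitrary example cubic
P = np.poly(a)                      # coefficients of P from its roots
critical = np.roots(np.polyder(P))  # zeros of P'

for z in critical:
    w = 1.0 / np.abs(z - a) ** 2    # weights |z - a_i|^{-2}
    w /= w.sum()                    # normalize so the weights sum to 1
    # z equals the convex combination sum_i w_i a_i of the roots of P
    assert np.allclose(np.sum(w * a), z)
print("each critical point is a convex combination of the roots")
```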
The convex hull of the roots of the polynomial $p_n x^n + p_{n-1} x^{n-1} + \cdots + p_0$ in particular includes the point $-\frac{p_{n-1}}{n \cdot p_n}$. Proof By the fundamental theorem of algebra, $P$ is a product of linear factors, $P(z) = \alpha \prod_{i=1}^{n} (z - a_i)$, where the complex numbers $a_1, a_2, \ldots, a_n$ are the – not necessarily distinct – zeros of the polynomial P, the complex number $\alpha$ is the leading coefficient of P, and n is the degree of P. For any root $z$ of $P'$, if it is also a root of $P$, then the theorem is trivially true. Otherwise, we have for the logarithmic derivative $$0 = \frac{P'(z)}{P(z)} = \sum_{i=1}^{n} \frac{1}{z - a_i} = \sum_{i=1}^{n} \frac{\overline{z} - \overline{a_i}}{|z - a_i|^2}.$$ Hence $$\sum_{i=1}^{n} \frac{\overline{z}}{|z - a_i|^2} = \sum_{i=1}^{n} \frac{\overline{a_i}}{|z - a_i|^2}.$$ Taking conjugates and dividing, we obtain $z$ as a convex sum of the roots of $P$: $$z = \sum_{i=1}^{n} \frac{|z - a_i|^{-2}}{\sum_{j=1}^{n} |z - a_j|^{-2}} \, a_i.$$ See also Marden's theorem Bôcher's theorem Sendov's conjecture Routh–Hurwitz theorem Hurwitz's theorem (complex analysis) Descartes' rule of signs Rouché's theorem Properties of polynomial roots Cauchy interlacing theorem Notes ^ Marden 1966, Theorem (6,1). ^ Rüdinger, A. (2014). "Strengthening the Gauss–Lucas theorem for polynomials with zeros in the interior of the convex hull". Preprint. arXiv:1405.0689. Bibcode:2014arXiv1405.0689R. References Lucas, Félix (1874). "Propriétés géométriques des fractions rationnelles". C. R. Acad.
Sci. Paris. 77: 431–433. Lucas, Félix (1879). "Sur une application de la Mécanique rationnelle à la théorie des équations". C. R. Hebd. Séances Acad. Sci. LXXXIX: 224–226. Marden, Morris (1966). Geometry of Polynomials. Mathematical Surveys and Monographs. Vol. 3 (2nd ed.). American Mathematical Society, Providence, RI. Craig Smorynski: MVT: A Most Valuable Theorem. Springer, 2017, ISBN 978-3-319-52956-1, pp. 411–414. External links "Gauss-Lucas theorem". Encyclopedia of Mathematics. EMS Press. 2001. Lucas–Gauss Theorem by Bruce Torrence, the Wolfram Demonstrations Project. Gauss-Lucas theorem - interactive illustration
190652
https://www.quora.com/What-happens-to-the-speed-frequency-and-wavelength-of-light-as-it-travels-from-a-material-of-low-refractive-index-to-a-material-of-high-refractive-index
What happens to the speed, frequency and wavelength of light as it travels from a material of low refractive index to a material of high refractive index? - Quora What happens to the speed, frequency and wavelength of light as it travels from a material of low refractive index to a material of high refractive index? Jozef Mitros Retired Electrical Engineer, Ph.D. · 5y Light frequency f is constant. It doesn't change when light travels from one material to another. The relations between the light speed, frequency and wavelength are v = c/n and λ = v/f, where c = 300,000 km/s is the speed of light in vacuum, n is the refractive index, v is the speed of light in the specific material, and λ is the light wavelength in the specific material. Therefore the light frequency doesn't change. The light speed v and the light wavelength λ are smaller in the material with a higher refractive index compared to a material with a lower refractive index and vacuum.
For the illustration, below is a table with green light parameters in three different environments. ('E' means the exponent of 10, i.e. 5.45E+14 = 5.45×10¹⁴ = 545,000,000,000,000.)

Richard Shagam
PhD in College of Optical Sciences, University of Arizona (Graduated 1980) · Author has 1.2K answers and 550.6K answer views · 3y

Frequency doesn't change, while the propagation speed and wavelength do. The reason why light rays are refracted (deviated, or bent) when entering a material is that boundary conditions for the light oscillations have to be maintained for the light to enter the material. This is the basis for Snell's law. Here is a link to some of the physics and math involved in wave propagation at an interface. Enjoy!

Gene
M.S. in Physics, University of Minnesota - Twin Cities (Graduated 1971) · Author has 9K answers and 2.7M answer views · 4y

Radiation frequency doesn't change in that case. It reduces to a speed/time/distance problem in refraction.

Giordon Stark
PhD in Physics, University of Chicago (Graduated 2018) · Upvoted by Abhijeet Borkar, PhD in Physics (Astrophysics) · Author has 1.1K answers and 6.8M answer views · Updated 12y

Related: Why does the light's wavelength change, and not frequency, during refraction?
This is a fun question that I've answered before for my kids!

The short (totally valid) answer: the boundary conditions of EM waves imply that the frequency cannot change. Quick math reference of boundary conditions:

E⊥(above) − E⊥(below) = σ/ε₀
E∥(above) − E∥(below) = 0
B⊥(above) − B⊥(below) = 0
B∥(above) − B∥(below) = μ₀K

That explains it all? Imagine that the frequency of the wave changed: the two waves on either side of the boundary would be out of phase, and it would be impossible to meet the boundary conditions.

But that's just math! Fine, fine. Maybe phase isn't for you. What about energy conservation? You're convinced about that, right? What if the two waves have different frequencies? If maximum amplitude comes in (at a peak of the wave), and maximum amplitude does not come out, that extra energy goes into the boundary... Oh sh.

BUT THE SPEED OF THE WAVE CHANGES TOO. Well, you know. Just think about it like v = fλ. If frequency doesn't change, both wavelength and speed have to change to accommodate it. Physically, I have a hard time coming up with a good analogy. But go back to what we were talking about with energy. If the wavelength increases, I have to disperse my energy over a larger distance of the wave... but that means the energy per unit length coming out of the boundary decreases, which violates energy conservation. To fix that, nature makes the wave faster. BAM.

(Ashwin Dasondhi) But light changes its direction too... how can it be explained by the above description?

Another good question.
First, the underlying concept is Fermat's Principle (principle of least action/time). It is so fundamental in nature that even ants follow Fermat's principle of least time (they change direction to minimize the time taken to cross a surface). In the same way that light has the wave-particle duality, one might see light doing the same thing across a boundary. That is probably the strongest argument for it - but it raises another question: how does the light know which angle to turn at?

This puzzles me the most, but I have a nice analogy that has helped before - the roller-blade analogy. You're on roller blades gliding along a blacktop and you're heading towards a gravel boundary at an angle. One foot will hit the boundary before the other foot (gravel has more friction and slows you down a bit), and you start to rotate a little while moving forward until both feet are on the gravel. In this way, the rotation occurs because of a difference in friction, or in the case of light, a difference in the indices of refraction. To tie this analogy better with light waves, imagine that light does travel slower when it hits the boundary, causing the spacing between successive waves to decrease, which changes the interference pattern (self-interfering?) and creates the new angle we observe.

Now, if you're like me, where I sometimes forget things, and wonder why this answer is totally sufficient... and where the boundary conditions come from... let's take a dive into some deeper physics. I'll assume you're familiar with some introductory electricity and magnetism that you might get at a college level.

What are the boundary conditions we talk about here? This comes from a mixture of E&M books - I obtain pictures from Griffiths.

Electric Fields

The Gaussian Pillbox. I guess you know what I'm going to ask, right?
Gauss' Law: ∮ E⃗ · dA⃗ = q_enc/ε₀ = σA/ε₀

(Note for teachers of E&M - I always make it a habit to put vectors on my electric field and area vector to emphasize that the dot product is how we can evaluate this in special symmetrical cases.)

There are 5 faces visible on the pillbox to be aware of. The faces tangential to the area vector (perpendicular to the surface) contribute nothing to the total flux. The perpendicular face (the one with A denoted) contributes to the total flux. If we make the pillbox smaller (make its thickness ε → 0), then the surface integral (Gauss' Law) becomes

E⊥(above) − E⊥(below) = σ/ε₀

If you have trouble understanding the math, that's ok. Remember that an integral is like a sum of pieces, so if we make ε → 0 (small enough) so that we just have two pieces of electric field to sum over, E⊥(above) and E⊥(below), then it's just summing over these two pieces multiplied by dA⃗, which happens to just be A (recall the dot product involved). The minus sign above comes from the fact that the field is radiating outwards from the boundary (I believe the picture is subtly wrong here, but I'm not as smart as Griffiths).

Does this make sense? It certainly does. This is just a statement about Gauss' Law in disguise - the electric field only has discontinuities where a charge distribution exists. It is a pretty huge idea (in my opinion) and understanding this really ties a lot of this together. This is just the electric field perpendicular to a surface; the tangential component, by contrast, satisfies

∮ E⃗ · dl⃗ = 0  ⇒  E∥(above) = E∥(below)

Note: the boundary conditions are HUGELY important because you can combine this with Laplace's Equation ∇²φ = 0 to guarantee uniqueness of your solution to a given electrostatics problem. I really cannot overstate this. Really.
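The phase-matching argument above can be made concrete: the tangential component of the wavevector must be the same on both sides of the boundary, which is exactly Snell's law. A minimal Python sketch of that continuity check (the indices and angle are assumed example values for an air-to-glass interface):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Tangential wavevector continuity: n1*sin(t1) = n2*sin(t2) (Snell's law)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

n1, n2 = 1.0, 1.5  # air -> glass (assumed values)
theta1 = 30.0
theta2 = refraction_angle(n1, n2, theta1)

# The tangential wavevector k_t is proportional to n*sin(theta); it must match
# across the boundary for the fields on both sides to stay in phase.
kt1 = n1 * math.sin(math.radians(theta1))
kt2 = n2 * math.sin(math.radians(theta2))
print(theta2, kt1, kt2)  # kt1 == kt2 up to rounding; theta2 < theta1
```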
Magnetic Fields

One of the reasons I went into a long elaboration with electrostatics is because magnetism has a fantastic parallel. I always find it easier to remember (in a hurry) that electrostatics works like I mentioned above, and just to reverse the situation for magnetism. Fun right? Let's explore...

Assistant Bot · Sep 6

When light passes from a medium of lower refractive index (n1) into one of higher refractive index (n2 > n1), its speed, frequency and wavelength change as follows:

Speed (v): decreases. v = c/n, so v2 = c/n2 < v1 = c/n1.
Frequency (f): remains constant across the boundary. The oscillation rate of the electromagnetic field does not change at an interface.
Wavelength (λ): decreases.
λ = v/f, so with v decreasing and f unchanged, λ2 = λ1 · (n1/n2) < λ1.

Additional notes (concise): The change in direction (refraction) is governed by Snell's law: n1 sinθ1 = n2 sinθ2. Photon energy E = hf stays the same (frequency constant), while momentum changes because wavelength and speed change. These relationships hold in the non-dispersive approximation at a single frequency; in dispersive media n depends on frequency, so λ and v vary with frequency accordingly.

Leo C. Stein
Ph.D. from MIT, B.S. from Caltech. Specializing in gravity. · Upvoted by Nickolas Fotopoulos, PhD in Physics · Author has 410 answers and 4M answer views · 12y

Related: Why does the light's wavelength change, and not frequency, during refraction?

So many complicated answers! It's not really that tricky. Let's start with something close to home. Here is a trainer making waves in a rope (watch only the first few seconds, before he starts alternating hands): The trainer moves his arms up and down (a complete cycle) about 2 times every second - that's the frequency, 2 cycles/second or 4π radians/second. Frequency is just how frequently his hands go up and down. Now, if you watch any segment of rope go up and down, it also has a frequency of 2 times per second: The trainer's hands at A go up and down twice every second, so the rope segment in his hands does too.
The rope segment at A goes up and down twice every second, and it pulls on the rope segment at B, so B goes up and down twice every second. And B pulls on C, so twice every second, C goes up and down. I haven't said anything about the wavelength at all. Calculating the wavelength requires more: how much each segment of the rope weighs, for example. I don't know that number. But I know that however often the trainer moves his hands up and down, that's how often the first segment of rope goes up and down; and that determines how often the second segment of rope goes up and down, etc. If you added a different type of rope at the end, with a different weight (per length), it will move a different distance - but just as frequently, so it has the same frequency.

Next up is light. Obviously light is way more complicated than a rope! But it's quite similar. This time it's the electric field that's increasing and decreasing. Let's look at just one part of the electric field, in the "z" direction: Here we have a light ray coming from the left and entering some medium, say a crystal. Even though the electric field everywhere is dancing, we can look just along one ray. Now here's the way that the electric field is just like the rope: The electric field at A "pulls" on the electric field at B, and the electric field at B pulls on that at C, and so on. The electric field wants to be smooth, so if you make it bigger in one place, it will "pull" the electric field nearby to be bigger so as to smooth itself out.

Once it gets to the crystal, the electric field also pulls on electrons in the crystal, and the electrons also pull on the electric field. That's just like adding another rope with a different weight (per length). The electric field outside of the crystal "pulls" on the electric field inside the crystal. Outside, at point E, we have light with the electric field getting bigger and smaller once every T seconds.
That's going to pull on the electric field inside the crystal, bigger and smaller every T seconds, so F is going to change just as frequently - that means with the same frequency.

Richard Shagam
BS in Physics, University of Wisconsin - Madison (Graduated 1972) · Author has 1.2K answers and 550.6K answer views · 2y

Related: What happens to light when light travels from a medium with a low index of refraction into a medium with a high index of refraction?

Snell's law, as well as the Fresnel equations, apply, same as if a light beam were traveling from air to glass, or from vacuum to glass. You just need to enter the proper refractive indices into the equations.

Jeffrey Werbock
musician, lecturer, documentary film maker · Author has 65.8K answers and 13.2M answer views · 2y

Related: What happens to light when light travels from a medium with a low index of refraction into a medium with a high index of refraction?

The "classic" answer is that photons slow down when passing through transparent media and that the denser the medium, the slower the photons pass through it.
The reality is a bit more complicated; transparent media are made of molecules with oscillating electric fields that the photons must pass through. Like a ship on water that can only go one speed, that ship will appear to go faster when the water is smooth and slower when the water is wavy. The waves on the water lengthen the path of the ship because of the vertical component of the wave; similarly, photons are "slowed down" when passing through those oscillating electric fields because the paths are lengthened by the oscillations. Photons are massless and can only move at "c". This explains why photons appear to speed up again when they exit the transparent medium of greater optical density (higher RI). They don't really speed up again (where would that energy come from?); they just appear to, because they never actually slowed down to begin with - they just took longer to get through those oscillating electric fields.

Ron Brown
Decades of teaching physics to undergrads · Author has 13.6K answers and 84.3M answer views · 3y

Related: What happens when a ray of light travels from a medium of higher refractive index to a medium of lower refractive index?

There are two things that happen at any interface between two transparent media when light is incident on that interface.
Some of the light reflects, and if a particular condition is met, some of the light refracts. And that condition relates to the indices of refraction of both media as well as the angle of incidence. So what ideas apply to determine what happens?

For one thing, what one means by the index of refraction of any transparent medium is the ratio of the speed of light in a vacuum to that in the medium. The larger the index of refraction, the slower the light travels in the medium. If the light is incident perpendicularly to the interface between the two media, it continues perpendicularly into the second medium - independent of whether it goes from the larger index material to the smaller or the other way. (Look at Snell's law to know why.)

But there is an interesting principle that applies if the light is incident other than perpendicularly to the surface. That is, "refraction" refers to the change in direction of the light as it goes from one medium to the other. And the condition that determines how the incident angle compares to the refraction angle is called Snell's law of refraction (after Willebrord Snellius, who first published the law in 1621). But Snell's law can't always be satisfied if the light travels from the higher index medium into the lower index medium. And if refraction can't occur, that is, if Snell's law can't be satisfied, the only other choice is for all the light to be reflected (refer to the first paragraph).

So there are three things that can happen if light goes from a higher index medium to a lower index medium: Some of the light reflects in the direction it came from and some of the light transmits in the same direction - exiting at a higher speed - if the incident light is perpendicular to the surface. Some of the light reflects at the same angle it is incident and some of the light refracts at a different angle given by Snell's law. All of the light reflects at the same angle it is incident if Snell's law can't be satisfied.
It's called "total internal reflection". Those are the ideas you should study to know which of the three cases occurs for any particular problem. It will help if you construct a careful figure - showing the incident, refracted, and reflected angles - and then look carefully at what Snell's law is saying. The reason Snell's law is true is an entirely different story. [Starting with Snell's law, one can derive the equations that give the relative positions of object and image distances for thin lenses, can determine the image positions of thick lenses, as well as determine the focal lengths of lenses. It is the basis of much of geometric optics involving lenses, lens combinations, and lens design.]
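The three cases described in this answer can be sketched in a few lines of Python. Snell's law fails, and total internal reflection occurs, when n1·sinθ1/n2 exceeds 1; the glass and air indices below are assumed nominal values:

```python
import math

def refract(n1, n2, theta1_deg):
    """Return the refraction angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # Snell's law cannot be satisfied: all of the light reflects
    return math.degrees(math.asin(s))

n_glass, n_air = 1.5, 1.0
critical = math.degrees(math.asin(n_air / n_glass))  # critical angle, ~41.8 degrees

print(refract(n_glass, n_air, 0.0))   # case 1: perpendicular, straight through
print(refract(n_glass, n_air, 30.0))  # case 2: refracts away from the normal
print(refract(n_glass, n_air, 60.0))  # case 3: None -> total internal reflection
```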
Ander Johnson
PhD in Physics, The University of Texas at Austin (Graduated 2017) · Author has 113 answers and 372.2K answer views · 8y

Related: What is the relation between wavelength and refractive index?
For transparent materials, the index of refraction is approximately related to a material's relative permittivity by the equation n = √ε_r. The relative permittivity tells us how strongly the material responds to an applied electric field; more specifically, it tells us how strongly the material opposes an applied electric field.

To understand where this property comes from, let us consider a transparent material that is made of a bunch of atoms. For simplicity I am going to represent the atoms as a positively charged nucleus (red circle) with one electron (blue circle). I want to point out that typical transparent materials are not made from one-electron ions; this is just a toy model that I'm using to explain the relevant concepts.

Without an applied electric field, the orientation between the nuclei and the electrons will be random, as shown below. When an electric field is applied to the material, the atoms will have a tendency to align themselves with the electric field, with the electrons being attracted to the positively charged electrode. One thing to notice: this alignment of the atoms creates its own electric field, one that opposes the applied electric field. This new electric field causes a reduction of the net electric field.
The concept of relative permittivity comes from this behavior; materials that experience a large decrease in the net electric field are said to have large relative permittivity.

A similar thing happens when light propagates through a transparent material, shown below as the green sine wave. Since light is an oscillating electromagnetic field, it causes an oscillation of the electrons in the atoms. As in the previous example, the orientation of the atoms causes an electric field that opposes the light's electric field. This induced electric field oscillates along with the electrons. There is a natural frequency that the electrons like to oscillate at, which is not necessarily the same as the frequency of the light. The closer the frequency of the light gets to the electrons' natural frequency, the stronger the electrons can oscillate, and the larger the opposing electric field that is created. This is very similar to being pushed on a swing: if a person is pushing you at the same frequency that you are swinging, you swing higher; if the person's timing is a bit off, you can't swing as high. This larger opposing electric field translates into a larger relative permittivity and, therefore, a larger index of refraction.

This is why different wavelengths of light experience a different index of refraction in transparent materials. The different wavelengths correspond to different frequencies, and the index of refraction depends on how close these frequencies are to the electrons' natural frequency. For transparent materials, the natural frequency of the electrons is in the ultraviolet region. That means that shorter wavelength light, such as blue light, has frequencies closer to the natural electron frequency than longer wavelength light, like red light. This is why blue light experiences a higher index of refraction than red light when passing through the same material. The above discussion involves a simplified model for transparent materials.
In real transparent materials, the electrons can have several natural frequencies, complicating the system. In practice, if you want to know the relationship between the wavelength of light and a particular material's refractive index, you will need to look up the information in a database.

Addendum

Upon reflection, I realize that answering this question in terms of relative permittivity, and just stating that relative permittivity is related to the refractive index, may not have fully answered the question. So I would like to more directly connect what I was talking about earlier to refractive index. The refractive index of a material describes how fast light travels through the material and is defined as n = c/v, where c is the speed of light in vacuum and v is the speed of light in the material. When light travels through a transparent material, it slows down. The reason for this is that the light traveling through the material is a combination of the original light that entered the material and an induced oscillating electromagnetic field (light) that is created by the oscillating electrons. This induced light is slightly delayed (phase shifted) compared to the original light. As I talked about earlier, the closer the frequency of the light is to the natural frequency of the electrons, the stronger the oscillations of the electrons; this results in a larger induced electromagnetic field and therefore a larger amount of delayed light, slowing the overall propagation of light within the material. This process repeats itself many times before the light exits the material, delaying the light even more each time.

Paul Manhart
Studied at Optical Sciences Center, University of Arizona. Physics and Astronomy, Univ of Arizona. · Author has 6.1K answers and 3.9M answer views · 1y

Related: How does the refractive index affect the travel of light at different wavelengths through an optical fiber?
The refractive index of a material determines how fast light travels inside it. The speed of light in a vacuum is called c. The speed of light inside a medium whose refractive index is n is given by v = c/n. So if n = 2, then light slows down to 1/2 its speed in a vacuum. On top of this, the index is not constant for all frequencies. Higher frequencies (shorter wavelengths) like blue will see a higher index than lower frequencies (longer wavelengths) like red. This is called dispersion. The figure below shows...

Ayesha Shahzad
MS Optometry from University of Faisalabad (Graduated 2016) · 6y

Related: What does a high refractive index mean?

It means the speed of light passing through a material with a high refractive index is slower, compared to light passing through vacuum. The higher the density of the material, the slower the light will pass through it, and the higher the refractive index of that material will be.

Jacob VanWagoner
Engineer with a focus on physics. Or am I a physicist with an engineering degree? · Author has 3.5K answers and 22.1M answer views · 12y

Related: What determines the refractive index of a material?
The others haven't actually answered your real question -- what primary properties of a material give it an index of refraction. I'm going to answer this from a more interesting perspective, hopefully the perspective you intended.

I'm going to start by describing the polarization vector. This applies to both electric fields and magnetic fields, but keep in mind that as of yet there are no magnetic charges, though there are magnetic dipoles that are the magnetic equivalents of electric dipoles. Take a point-object that is charge neutral and not charge separated (that is, it doesn't have any electric fields coming from it at all). Now place an electric field on it; if it is polarizable, then it will have an induced dipole moment when you apply an electric field to it -- that is, if it is composed of charges (and everything is), then the applied electric field will pull the charges apart. This induced dipole may not look exactly like 2 charges being pulled apart, but the fields generated by the charge separation will appear the same as if it were truly two charges being pulled apart. Mathematically, we can define a dipole moment, which is the charge times the separation, or p = qd, a vector in the direction of the separation.
Furthermore, for relatively small applied electric fields, the induced dipole moment is nearly directly proportional to the strength of the applied field, so we have p = αE, where α is the polarizability of the particle. The greater the polarizability, the stronger the induced dipole for a given electric field.

On a macro level, what happens if we have a whole bunch of electric dipoles throughout a material? Then we define a macro quantity of polarizability called the susceptibility, often denoted χ (Greek letter chi). At small densities (that is, when the dipoles are far enough apart that the local fields from the dipoles don't interact with other dipoles), the susceptibility increases exactly with the density: χ = Nα. We define a macro polarization vector P = ε0χE, which is the sum of all the micro polarization vectors p. Next, we define the electric displacement vector D = ε0E + P, where ε0 is the permittivity of free space. To make matters simpler, we write D = εE, where ε is the permittivity of the substance; combining the previous two formulas gives

ε = ε0(1 + χ)

But when the dipoles get close enough to start interacting, the susceptibility becomes dependent on their arrangement. To get a macro susceptibility, one must take the response of the induced electric fields and average it over the material; in a crystal arranged periodically, the average is taken over the unit cell. Since each induced dipole has its own electric field pattern, if another dipole is close enough, the induced field adds to the applied field and induces a stronger dipole moment in the nearby dipoles, making the total susceptibility more than just the polarizability times the density. This depends on how the dipoles are arranged.
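As a numeric sketch of the macroscopic relations just described (using the SI convention P = ε0χE, so that D = ε0E + P = ε0(1 + χ)E; the field and susceptibility values are illustrative, not measured):

```python
# Sketch of the macroscopic relations above: P = eps0*chi*E,
# D = eps0*E + P = eps0*(1 + chi)*E, i.e. eps = eps0*(1 + chi).
EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

def permittivity(chi: float) -> float:
    """Permittivity of a substance with electric susceptibility chi."""
    return EPS0 * (1.0 + chi)

E = 100.0   # applied field magnitude, V/m (illustrative)
chi = 2.5   # electric susceptibility (illustrative)

P = EPS0 * chi * E   # polarization
D = EPS0 * E + P     # displacement
assert abs(D - permittivity(chi) * E) < 1e-18  # D == eps * E
```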
For a cubic lattice, it is fairly easy to solve, and the result is the classic Clausius-Mossotti homogenization formula:

(ε_r − 1)/(ε_r + 2) = Nα/(3ε0)

where N is the number density of the dipoles, ε_r is the macroscopic relative permittivity, and α is the polarizability of the molecule. The equation is meant to be solved for ε_r, which can be done in closed form, but it is much easier to write in this form. This formula is valid for a material with cubic order, but not for non-cubic materials. There is a similar formulation for magnetic permeability, using μ instead of ε and some other small differences, but with the same concept.

There is a time response of the polarizability, which leads to dispersion (having a different index of refraction at different frequencies, or having an RF permittivity different from the optical permittivity). In addition, larger particles still significantly smaller than the wavelength of an incoming electromagnetic wave can also be characterized as dipoles and used in the Clausius-Mossotti formula to create a synthetic permittivity and permeability, which is where metamaterials come from.

As for the refractive index itself, the others have covered it quite well -- it is n = √(εμ/(ε0μ0)) (I won't derive the wave equation for the explanation of that formula).
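The Clausius-Mossotti relation above can be exercised numerically; this is a sketch in plain Python with illustrative (not measured) values of N and α:

```python
# Numerical sketch of the Clausius-Mossotti relation discussed above:
#   (eps_r - 1) / (eps_r + 2) = N * alpha / (3 * eps0)
# solved in closed form for eps_r.
EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

def clausius_mossotti(N: float, alpha: float) -> float:
    """Relative permittivity from dipole density N and polarizability alpha."""
    x = N * alpha / (3.0 * EPS0)
    return (1.0 + 2.0 * x) / (1.0 - x)

# Dilute limit: eps_r -> 1 + N*alpha/eps0, matching chi = N*alpha (SI-scaled).
N, alpha = 1e27, 1e-40   # illustrative numbers, not real material data
eps_r = clausius_mossotti(N, alpha)
chi_dilute = N * alpha / EPS0
print(eps_r, 1.0 + chi_dilute)  # nearly equal when N*alpha is small
```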
190653
https://math.stackexchange.com/questions/3038379/orthogonal-vectors-lambda-a-mu-b
linear algebra - Orthogonal vectors, $\|\lambda a+\mu b\|$ - Mathematics Stack Exchange
Orthogonal vectors, $\|\lambda a+\mu b\|$

Asked Dec 13, 2018 · Modified Dec 13, 2018 · Viewed 194 times

If $a$ and $b$ are orthogonal, and $\|a\|=\|b\|=1$, calculate $\|\lambda a+\mu b\|$.

$$\langle\lambda a+\mu b,\lambda a+\mu b\rangle=\langle\lambda a,\lambda a\rangle+2\langle\lambda a,\mu b\rangle+\langle\mu b,\mu b\rangle$$
$$\langle\lambda a,\mu b\rangle=0$$

Here is where I am unsure:

$$\lambda^2\langle a,a\rangle+\mu^2\langle b,b\rangle=\lambda^2+\mu^2$$

Is this correct?

Tags: linear-algebra

Comments:

- Decaf-Math: Can you please edit your question by writing the mathematical logic and steps in MathJax? The answer should rather be $\sqrt{\lambda^2+\mu^2}$, but in order for us to see where your mistake occurs, it's much more helpful if you provide us with the work you completed.
- Lillys: @Decaf-Math yeah I will use MathJax; I currently don't have a full keyboard so many symbols are not possible, but I will correct it later. Thanks, forgot about the square root.
- William M.: Do you know the definition of inner product?
- Michael Hoppe: That would be true iff the vectors were orthogonal.
- Lillys: @WillM. A function into $\mathbb{R}$ or $\mathbb{C}$ for which linearity, positive definiteness and symmetry hold.
Answer (Dragonite):

Given: $a$ and $b$ are orthogonal, so $\langle a,b\rangle=\langle b,a\rangle=0$, and $\|a\|=\|b\|=1$, which means $\sqrt{\langle a,a\rangle}=\sqrt{\langle b,b\rangle}=1$. We want to evaluate:

$$\begin{aligned}\|\lambda a+\mu b\|^2&=\langle\lambda a+\mu b,\lambda a+\mu b\rangle\\&=\langle\lambda a,\lambda a+\mu b\rangle+\langle\mu b,\lambda a+\mu b\rangle\\&=\langle\lambda a,\lambda a\rangle+\langle\lambda a,\mu b\rangle+\langle\mu b,\lambda a\rangle+\langle\mu b,\mu b\rangle\\&=\lambda\bar\lambda\langle a,a\rangle+\lambda\bar\mu\langle a,b\rangle+\mu\bar\lambda\langle b,a\rangle+\mu\bar\mu\langle b,b\rangle\\&=\lambda\bar\lambda(1)+\lambda\bar\mu(0)+\mu\bar\lambda(0)+\mu\bar\mu(1)\\&=\lambda\bar\lambda+\mu\bar\mu.\end{aligned}$$

Hence,

$$\|\lambda a+\mu b\|=\sqrt{\lambda\bar\lambda+\mu\bar\mu},$$

where $\bar\xi$ denotes the complex conjugate of $\xi$; note that if $\xi\in\mathbb{R}$ then $\bar\xi=\xi$.

Comments:

- B. Mehta: It looks like you're missing a square: you should have $\|x\|^2=\langle x,x\rangle$.
- Dragonite: @B.Mehta Yes, oops! Thanks for the catch :)
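The answer can be sanity-checked numerically; a minimal sketch assuming numpy and an arbitrary orthonormal pair in $\mathbb{R}^2$ with real scalars:

```python
# Numerical check of the answer above: for orthonormal real a, b and real
# lambda, mu, the norm of lambda*a + mu*b is sqrt(lambda^2 + mu^2).
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
lam, mu = 3.0, 4.0

# a and b are orthogonal unit vectors:
assert np.dot(a, b) == 0.0
assert np.linalg.norm(a) == np.linalg.norm(b) == 1.0

lhs = np.linalg.norm(lam * a + mu * b)
rhs = np.sqrt(lam**2 + mu**2)
print(lhs, rhs)  # both 5.0
```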
190654
https://www.quantamagazine.org/archive/
An editorially independent publication supported by the Simons Foundation.

Archive — Latest Articles

### Self-Assembly Gets Automated in Reverse of ‘Game of Life’ (artificial intelligence)
By George Musser, September 10, 2025
In cellular automata, simple rules create elaborate structures. Now researchers can start with the structures and reverse-engineer the rules.

### Tiny Tubes Reveal Clues to the Evolution of Complex Life (origins of life)
By Veronique Greenwood, September 8, 2025
Scientists have identified tubulin structures in primitive Asgard archaea that may have been the precursor of our own cellular skeletons.

### Analog vs. Digital: The Race Is On To Simulate Our Quantum Universe (quantum computing)
By Shalma Wegsman, September 5, 2025
Recent progress on both analog and digital simulations of quantum fields foreshadows a future in which quantum computers could illuminate phenomena that are far too complex for even the most powerful supercomputers.

### What Is the Fourier Transform? (harmonic analysis)
By Shalma Wegsman, September 3, 2025
Amid the chaos of revolutionary France, one man’s mathematical obsession gave way to a calculation that now underpins much of mathematics and physics. The calculation, called the Fourier transform, decomposes any function into its parts.

### ‘World Models,’ an Old Idea in AI, Mount a Comeback
By John Pavlus
You’re carrying around in your head a model of how the world works. Will AI systems need to do the same?
### The Sudden Surges That Forge Evolutionary Trees (evolutionary biology)
By Jake Buehler, August 28, 2025
An updated evolutionary model shows that living systems evolve in a split-and-hit-the-gas dynamic, where new lineages appear in sudden bursts rather than during a long marathon of gradual changes.

### Astrophysicists Find No ‘Hair’ on Black Holes (gravity)
By Matt von Hippel, August 27, 2025
According to Einstein’s theory of gravity, black holes have only a small handful of distinguishing characteristics. Quantum theory implies they may have more. Now an experimental search finds that any of this extra ‘hair’ has to be pretty short.

### ‘Ten Martini’ Proof Uses Number Theory to Explain Quantum Fractals (mathematical physics)
By Lyndie Chiou and Joseph Howlett, August 25, 2025
The proof, known to be so hard that a mathematician once offered 10 martinis to whoever could figure it out, connects quantum mechanics to infinitely intricate mathematical structures.

### Busy Beaver Hunters Reach Numbers That Overwhelm Ordinary Math (Turing machines)
By Ben Brubaker, August 22, 2025
The quest to find the longest-running simple computer program has identified a new champion. It’s physically impossible to write out the numbers involved using standard mathematical notation.
190655
https://brilliant.org/wiki/matrices/
Matrices

Contributors: Alexander Katz, Ram Mohith, David Stiff, Aditya Narayan Sharma, Anton Kriksunov, A Former Brilliant Member, Jimin Khim, Eli Ross

Contents: Formal Definition · Basic Operations · Matrix Multiplication · Transpose and Determinant · Inverting Matrices · Solving Systems of Linear Equations · See Also

Formal Definition

A matrix is a rectangular array of any objects for which addition and multiplication are defined. Generally, these objects are numbers, but it is equally valid to have a matrix of symbols like

$$M=\begin{pmatrix}\clubsuit & \circ & \blacksquare\\ \S & \checkmark & \bigstar\end{pmatrix}$$

so long as there is a suitable understanding of what (for example) $\checkmark\times\bigstar$ and $\blacksquare+\clubsuit$ are. More formally speaking, a matrix's elements can be drawn from any field. However, it is generally best to consider matrices as collections of real numbers.

Generally, in a matrix, the vertical lines of elements are termed columns and the horizontal lines are termed rows. The size of a matrix is measured by the number of rows and columns it has. The above matrix, for instance, has 2 rows and 3 columns, and thus it is a $2\times 3$ matrix. Matrices that have the same number of rows as columns are called square matrices and are of particular interest.

The elements of a matrix are specified by the row and column they reside in. For example, the $\checkmark$ in the above matrix $M$ is at position $(2,2)$: the 2nd row and 2nd column. More explicitly, $M_{2,2}=\checkmark$. This notation is especially convenient when the elements are related by some formula; for instance, the matrix

$$M=\begin{pmatrix}2&3&4\\3&4&5\\4&5&6\end{pmatrix}$$

can be more succinctly written as $M_{i,j}=i+j$ for $1\le i,j\le 3$, or even more compactly as $M=(i+j)_{3,3}$ (where $3,3$ denotes the size of the matrix). The $i$th row of the matrix can also be denoted by $M_{i,*}$, and the $j$th column by $M_{*,j}$.

In a given matrix of order $m\times n$, there are $m\cdot n$ elements. For example, a 3 by 3 matrix has $3\times 3=9$ elements, and a 2 by 4 matrix has $2\times 4=8$ elements.
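The formula-defined matrix $M_{i,j}=i+j$ above can be built directly; a sketch assuming the numpy library (which indexes from 0, hence the shift by 1):

```python
# The formula-defined matrix above, M[i,j] = i + j for 1 <= i,j <= 3.
import numpy as np

M = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (3, 3), dtype=int)
print(M)
# [[2 3 4]
#  [3 4 5]
#  [4 5 6]]

# Row i and column j (the M_{i,*} and M_{*,j} notation):
print(M[1, :])  # 2nd row: [3 4 5]
print(M[:, 1])  # 2nd column: [3 4 5]
```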
Finally, it is worth defining a matrix with exactly one column as a column vector, as column vectors are especially useful in representing points in $n$-dimensional space.

Basic Operations

There are several simple operations on matrices and one somewhat complicated one (multiplication). The first is addition. Matrix addition is defined only on two matrices of the same size and works by adding corresponding elements:

What is $\begin{pmatrix}2&0&5\\1&6&8\end{pmatrix}+\begin{pmatrix}3&5&7\\1&0&2\end{pmatrix}$?

The matrices are added element-wise, so the result is

$$\begin{pmatrix}2+3&0+5&5+7\\1+1&6+0&8+2\end{pmatrix}=\begin{pmatrix}5&5&12\\2&6&10\end{pmatrix}. \square$$

If $A=\begin{pmatrix}2&3&1\\6&-1&5\end{pmatrix}$ and $B=\begin{pmatrix}1&2&-1\\0&-1&3\end{pmatrix}$, then find a matrix $X$ such that $A+B-2X=0$.

We have

$$A+B-2X=0\implies X=\frac{A+B}{2}=\frac12\left(\begin{pmatrix}2&3&1\\6&-1&5\end{pmatrix}+\begin{pmatrix}1&2&-1\\0&-1&3\end{pmatrix}\right)=\frac12\begin{pmatrix}3&5&0\\6&-2&8\end{pmatrix}=\begin{pmatrix}\frac32&\frac52&0\\3&-1&4\end{pmatrix}. \square$$

More formally, we can state it as follows: the sum $S$ of two matrices $A,B$ of the same size satisfies $S_{i,j}=A_{i,j}+B_{i,j}$ for all $i,j$ within the size of the matrices.

It is also possible to multiply matrices by scalars, i.e. single numbers, by multiplying element-wise:

What is $3.5\begin{pmatrix}2&0&5\\1&6&8\end{pmatrix}$?

The elements are each multiplied by 3.5, so the result is

$$\begin{pmatrix}3.5\cdot 2&3.5\cdot 0&3.5\cdot 5\\3.5\cdot 1&3.5\cdot 6&3.5\cdot 8\end{pmatrix}=\begin{pmatrix}7&0&17.5\\3.5&21&28\end{pmatrix}. \square$$

More formally: the product $P$ of a constant $c$ and a matrix $A$ satisfies $P_{i,j}=c\cdot A_{i,j}$ for all $i,j$ within the size of the matrices.

If $\alpha=\begin{bmatrix}1&-6\\4&8\end{bmatrix}$ and $\beta=\begin{bmatrix}2&4\\8&3\end{bmatrix}$, what is the value of $3\alpha-2\beta$? Express your answer as the sum of all elements in the final matrix. The correct answer is: $-13$.

Matrix Multiplication

Finally, there is the more complicated operation of matrix multiplication. The product of two matrices is defined only when the number of columns of the first matrix equals the number of rows of the second; in other words, it is only possible to multiply matrices of sizes $m\times n$ and $n\times p$. The reason for this becomes clear upon defining the product:

The product $P$ of an $m\times n$ matrix $A$ and an $n\times p$ matrix $B$ satisfies $P_{i,j}=A_{i,*}\cdot B_{*,j}$ for all $i,j$ within the size of the matrices.
Here $A_{i,*}$ denotes the $i$th row of $A$, which is a vector, and $B_{*,j}$ denotes the $j$th column of $B$, which is also a vector. Thus the dot ($\cdot$) here refers to multiplying vectors, as defined by the dot product. Note that $i$ and $j$ range over $1\le i\le m$ and $1\le j\le p$, so the product $P$ is an $m\times p$ matrix.

This rule seems rather arbitrary, so it is best illustrated by an example:

What is $\begin{pmatrix}1&2&3\\4&5&6\end{pmatrix}\begin{pmatrix}1&2\\3&4\\5&6\end{pmatrix}$?

Firstly, note that the first matrix is $2\times 3$ and the second is $3\times 2$, so their product is indeed defined and is a $2\times 2$ matrix. Consider the $(1,1)$ element of the product

$$\begin{pmatrix}P_{1,1}&P_{1,2}\\P_{2,1}&P_{2,2}\end{pmatrix}=\begin{pmatrix}1&2&3\\4&5&6\end{pmatrix}\begin{pmatrix}1&2\\3&4\\5&6\end{pmatrix}.$$

It is equal to the dot product of the 1st row of the first matrix and the 1st column of the second matrix:

$$P_{1,1}=(1,2,3)\cdot(1,3,5)=1\cdot 1+2\cdot 3+3\cdot 5=22.$$

So the top-left entry of the result is 22. The rest of the matrix can be filled out in the same way, giving the final result

$$P=\begin{pmatrix}22&28\\49&64\end{pmatrix}. \square$$

If $A=\begin{pmatrix}1&-2&3\\2&3&-1\\-3&1&2\end{pmatrix}$ and $B=\begin{pmatrix}1&0&2\\0&1&2\\1&2&0\end{pmatrix}$, then find $AB$ and $BA$. What can you conclude from the two resulting matrices?

Both $A$ and $B$ are square matrices of order $3\times 3$. Hence both $AB$ and $BA$ are well-defined and are matrices of the same order $3\times 3$:

$$AB=\begin{pmatrix}1&-2&3\\2&3&-1\\-3&1&2\end{pmatrix}\begin{pmatrix}1&0&2\\0&1&2\\1&2&0\end{pmatrix}=\begin{pmatrix}4&4&-2\\1&1&10\\-1&5&-4\end{pmatrix},$$

$$BA=\begin{pmatrix}1&0&2\\0&1&2\\1&2&0\end{pmatrix}\begin{pmatrix}1&-2&3\\2&3&-1\\-3&1&2\end{pmatrix}=\begin{pmatrix}-5&0&7\\-4&5&3\\5&4&1\end{pmatrix}.$$

Clearly $AB\neq BA$. Thus, we can conclude that multiplication of matrices need not be commutative. $\square$

Suppose that $x$ and $y$ satisfy the following equation:

$$\begin{pmatrix}x&y\\2&1\end{pmatrix}\begin{pmatrix}x&0\\y&x\end{pmatrix}=2\begin{pmatrix}10&6-x\\3&0\end{pmatrix}+\begin{pmatrix}5&2x\\4&x\end{pmatrix}.$$

Evaluate $x+y$. The correct answer is: 7.

It is still admittedly unclear why matrix multiplication is defined this way. One major reason is its use with systems of linear equations.
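The worked products above can be reproduced directly; a sketch assuming numpy:

```python
# The worked products above, reproduced with numpy, including the
# non-commutativity AB != BA.
import numpy as np

A2 = np.array([[1, 2, 3], [4, 5, 6]])    # 2x3
B2 = np.array([[1, 2], [3, 4], [5, 6]])  # 3x2
print(A2 @ B2)
# [[22 28]
#  [49 64]]

A = np.array([[1, -2, 3], [2, 3, -1], [-3, 1, 2]])
B = np.array([[1, 0, 2], [0, 1, 2], [1, 2, 0]])
print(np.array_equal(A @ B, B @ A))  # False: AB != BA
```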
The coefficients of each equation can be assembled into a coefficient matrix, and the variables can be arranged into a column vector. The product of the coefficient matrix and the column vector is itself a column vector, holding the value of each equation. For example, the system of equations

$$\begin{cases}x+2y+3z=9\\3x+y+4z=12\\2x+4y-z=4\end{cases}$$

can be more succinctly written in the form

$$\begin{pmatrix}1&2&3\\3&1&4\\2&4&-1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}9\\12\\4\end{pmatrix}.$$

This transformation is very useful beyond the saving of space; in particular, if it were possible to "divide" matrices, it would be easy to find $x,y,z$ by dividing out the coefficient matrix. Unfortunately, division takes some more effort to define, so further explanation is left to a later section.

As a warning about matrix multiplication, it is extremely important to understand the following: matrix multiplication is not commutative. In other words, it is not generally true that $AB=BA$. The simplest way to see this is that matrix multiplication is defined only on $m\times n$ and $n\times p$ matrices; reversing their order gives the product of an $n\times p$ matrix and an $m\times n$ matrix, and $p$ is not necessarily equal to $m$. Even when both products are defined (e.g. for square matrices), the multiplication is generally not commutative. Matrices $A,B$ that do satisfy $AB=BA$ are (appropriately) called commuting matrices.

Do the two matrices $\begin{pmatrix}1&1\\0&0\end{pmatrix}$ and $\begin{pmatrix}2&0\\1&1\end{pmatrix}$ commute?

No, since

$$\begin{pmatrix}1&1\\0&0\end{pmatrix}\begin{pmatrix}2&0\\1&1\end{pmatrix}=\begin{pmatrix}3&1\\0&0\end{pmatrix}\quad\text{but}\quad\begin{pmatrix}2&0\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&0\end{pmatrix}=\begin{pmatrix}2&2\\1&1\end{pmatrix}. \square$$

Finally, it is worth noting a special matrix: the identity matrix

$$I_n=\begin{pmatrix}1&0&0&\cdots&0\\0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&1\end{pmatrix},$$

an $n\times n$ matrix that is zero everywhere except on the main diagonal, which contains all 1s. For instance,

$$I_2=\begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad I_3=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}.$$

It satisfies the property that $IA=AI=A$ for any $n\times n$ matrix $A$; the reason should be clear from the above definitions.

Transpose and Determinant

Two useful functions on matrices are the transpose and the determinant.
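Both functions just named, developed in detail below, have direct numpy counterparts; a quick sketch (`.T` for the transpose, `np.linalg.det` for the determinant):

```python
# Quick numeric preview of the transpose and the determinant, using numpy.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(A.T)               # [[1. 3.]
                         #  [2. 4.]]
print(np.linalg.det(A))  # ad - bc = 1*4 - 2*3 = -2 (up to float rounding)
```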
The transpose of an $m\times n$ matrix $A$ is the $n\times m$ matrix $A^T$ such that the rows of $A$ are the columns of $A^T$ and the columns of $A$ are the rows of $A^T$. For instance,

$$\begin{pmatrix}1&2&3\\4&5&6\end{pmatrix}^T=\begin{pmatrix}1&4\\2&5\\3&6\end{pmatrix}.$$

The transpose satisfies a few useful properties:

- $(A+B)^T=A^T+B^T$
- $(AB)^T=B^TA^T$
- $(A^T)^T=A$.

The second of these is the most useful, since it (roughly) means that properties true of left multiplication hold for right multiplication as well.

More interesting is the determinant of a matrix. There are several equally valid definitions of the determinant, though all would seem arbitrary at this point without an understanding of what the determinant is supposed to compute. Formally, the determinant is a function $\det$ from the set of square matrices to the real numbers that satisfies three important properties:

- $\det(I)=1$;
- $\det$ is linear in the rows of the matrix;
- if two rows of a matrix $M$ are equal, then $\det(M)=0$.

The second condition is by far the most important. It means that if any row of the matrix is written as a linear combination of two other vectors, the determinant can be calculated by "splitting" that row. For instance, in the example below, the second row $(0,2,3)$ can be written as $2\cdot(0,1,0)+3\cdot(0,0,1)$, so

$$\det\begin{pmatrix}1&0&0\\0&2&3\\0&0&1\end{pmatrix}=2\cdot\det\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}+3\cdot\det\begin{pmatrix}1&0&0\\0&0&1\\0&0&1\end{pmatrix}=2.$$

A key theorem shows that there is exactly one function satisfying the above three relations.

Unfortunately, this characterization is very difficult to work with for all but the simplest matrices, so an alternate definition is better in practice. There are two major ones: determinant by minors and determinant by permutations.

The first of the two, determinant by minors, uses recursion. The base case is simple: the determinant of a $1\times 1$ matrix with element $a$ is simply $a$. Note that this agrees with the conditions above, since $\det(a)=a\cdot\det(1)=a$ because $\det(1)=1$ (the $1\times 1$ matrix $(1)$ is the identity). The recursive step is as follows: denote by $A_{ij}$ the matrix formed by deleting the $i$th row and $j$th column. For instance,

$$A=\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}\implies A_{11}=\begin{pmatrix}5&6\\8&9\end{pmatrix}.$$
Then the determinant is given as follows:

The determinant of an $n\times n$ matrix $A$ is

$$\det(A)=\sum_{i=1}^n(-1)^{i+1}a_{1,i}\det(A_{1i})=a_{1,1}\det A_{11}-a_{1,2}\det A_{12}+\cdots.$$

For example, what is the determinant of $\begin{pmatrix}a&b\\c&d\end{pmatrix}$? We write

$$\det\begin{pmatrix}a&b\\c&d\end{pmatrix}=a\,\det(d)-b\,\det(c)=ad-bc. \square$$

If $\det\begin{pmatrix}1&a\\2&b\end{pmatrix}=4$ and $\det\begin{pmatrix}1&b\\2&a\end{pmatrix}=1$, what is $a^2+b^2$? The correct answer is: 13.

Since $\det\begin{pmatrix}1&a\\2&b\end{pmatrix}=4$, we have $b-2a=4$. $(1)$

Since $\det\begin{pmatrix}1&b\\2&a\end{pmatrix}=1$, we have $a-2b=1$. $(2)$

We multiply equation $(1)$ by 2 to obtain $2b-4a=8$ and add this to equation $(2)$ to obtain $-3a=9$, or $a=-3$. Therefore $a=-3$ and $b=-2$, implying $a^2+b^2=(-3)^2+(-2)^2=9+4=13$.

Unfortunately, these calculations can get quite tedious; already for $3\times 3$ matrices, the formula is too long to memorize in practice. An alternate definition uses permutations. Let $\sigma$ be a permutation of $\{1,2,3,\ldots,n\}$, and let $S$ be the set of those permutations. Then the determinant of an $n\times n$ matrix $A$ is

$$\sum_{\sigma\in S}\left(\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}\right).$$

This may look more intimidating than the previous formula, but in fact it is more intuitive. It essentially says the following: choose $n$ elements of $A$ such that no two are in the same row and no two are in the same column, and multiply them, possibly also by $-1$ if the permutation has an odd sign. The determinant is the sum over all such choices of $n$ elements. This definition is especially useful when the matrix contains many zeros, as then most of the products vanish.

Find the determinant of the matrix

$$\begin{pmatrix}1&0&-1&9&11\\0&-6&-1&9&11\\0&0&\frac13&-80&\frac13\\0&0&0&9&7\\0&0&0&0&-5\end{pmatrix}.$$

The correct answer is: 90. Since the matrix is upper triangular, the determinant is equal to the product of the diagonal entries:

$$1\cdot(-6)\cdot\tfrac13\cdot 9\cdot(-5)=90.$$

Here is another example: what is the determinant of $\begin{pmatrix}a&b\\c&d\end{pmatrix}$? There are two permutations of $\{1,2\}$: $\{1,2\}$ itself and $\{2,1\}$. The first has a positive sign (as it has 0 transpositions) and the second has a negative sign (as it has 1 transposition), so the determinant is

$$\det(A)=\sum_{\sigma\in S}\left(\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}\right)=1\cdot a_{1,1}a_{2,2}+(-1)\cdot a_{1,2}a_{2,1}=ad-bc.$$
Unsurprisingly, this is the same result as above. \(\square\)

Calculate \(\det \begin{pmatrix} 2 & 6 & 4 \\ -3 & 1 & 5 \\ 9 & 3 & 7 \end{pmatrix}\).

The correct answer is: 308.

We have
\[
\det \begin{pmatrix} 2 & 6 & 4 \\ -3 & 1 & 5 \\ 9 & 3 & 7 \end{pmatrix}
= 2 \cdot 1 \cdot 7 + 6 \cdot 5 \cdot 9 + 4 \cdot (-3) \cdot 3 - 2 \cdot 5 \cdot 3 - 6 \cdot (-3) \cdot 7 - 4 \cdot 1 \cdot 9 = 308.
\]

The determinant is a very important function because it satisfies a number of additional properties that can be derived from the 3 conditions stated above. They are as follows:

Multiplicativity: \(\det(AB) = \det(A)\det(B)\).

Invariance under row operations: if \(A'\) is a matrix formed by adding a multiple of any row to another row, then \(\det(A) = \det(A')\).

Invariance under transpose: \(\det(A) = \det(A^T)\).

Sign change under row swap: if \(A'\) is a matrix formed by swapping the positions of two rows, then \(\det(A') = -\det(A)\).

As the next section shows, the multiplicative property is of special importance.

Inverting Matrices

At the end of the matrix multiplication section, it was noted that "dividing" matrices would be an extremely useful operation. To attempt to create one, it is important to understand the definition of division of numbers: dividing by a number \(c\) is equivalent to multiplying by the number \(\frac{1}{c}\). In other words, dividing by \(c\) is equivalent to multiplying by a number \(d\) such that \(cd = 1\). This makes sense for what division "should" do: dividing by \(c\) followed by multiplying by \(c\) should be the equivalent of doing nothing, i.e. multiplying by 1. The above definition ensures this.

Matrix "division," should it exist, should follow the same principle: multiplying by a matrix and then dividing by it should be the equivalent of doing nothing. In matrix multiplication, however, the equivalent of doing nothing is multiplying by \(I\). This leads to a natural definition:

The inverse of a matrix \(A\) is a matrix \(A^{-1}\) such that \(AA^{-1} = A^{-1}A = I\).

A natural question to ask is whether all matrices have inverses. Unfortunately, the answer is no, but this should not be surprising: not all numbers have inverses either (it is impossible to divide by 0).
Indeed, the multiplicative property of the determinant from the previous section shows this: since \(AA^{-1} = I\),
\[
\det(AA^{-1}) = \det(I) \implies \det(A)\det(A^{-1}) = \det(I) = 1.
\]
So it is necessary for \(\det(A)\) to be nonzero. It is somewhat more difficult to show that this condition is also sufficient, but this is indeed the case:

A matrix has an inverse if and only if it has a nonzero determinant.

It is worth remembering the formula in the \(2 \times 2\) case: the inverse of the matrix \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\), if it exists, is
\[
\frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
\]
This isn't too difficult to verify. In general, the inverse matrix can be found by analyzing the cofactor matrix, an \(n \times n\) matrix \(\operatorname{cof}(A)\) satisfying
\[
\operatorname{cof}(A)_{i,j} = (-1)^{i+j} \det A_{ji},
\]
where \(A_{ji}\) refers to the matrix formed by removing the \(j\)th row and \(i\)th column from \(A\). This matrix satisfies the property that
\[
\operatorname{cof}(A)\,A = A\,\operatorname{cof}(A) = \det(A)\,I.
\]
This provides yet another reason that \(A\) is invertible if and only if it has nonzero determinant. It is also worth noting that the \(2 \times 2\) cofactor matrix is \(\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\), which aligns with the formula above.

Solving Systems of Linear Equations

See full article here: Solving Linear Systems Using Matrices.

The above sections provide a general method for solving systems of linear equations:

Arrange the coefficients in the coefficient matrix \(A\), the variables in a vector \(v\), and the resulting values in another vector \(b\). The goal is now to solve the equation \(Av = b\).

Calculate \(A^{-1}\), the inverse of \(A\), for example by the cofactor method from the previous section.

Multiply both sides of the above equation by \(A^{-1}\) on the left. Then \(v = A^{-1}b\), which is a simple matrix multiplication.

There is one potential pitfall: the inverse of \(A\) might not exist. This happens when the determinant of \(A\) is 0, which means the rows of \(A\) are linearly dependent; in turn, the original system contains equations that are linear combinations of one another. In that case, the system has either no solutions or infinitely many.
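The recipe above can be made concrete in a few lines. Here is a sketch (the names `inverse2` and `solve2` are mine, not from the text) that inverts a \(2 \times 2\) matrix with the formula from the previous section and uses it to solve a system; exact rational arithmetic via `fractions.Fraction` avoids floating-point noise.

```python
from fractions import Fraction

def inverse2(A):
    """Inverse of [[a, b], [c, d]] via the formula 1/(ad - bc) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    if det == 0:
        # Zero determinant: rows are linearly dependent, no inverse exists.
        raise ValueError("determinant is 0, so the matrix has no inverse")
    return [[d / det, -b / det],
            [-c / det, a / det]]

def solve2(A, b):
    """Solve A v = b by computing v = A^{-1} b."""
    inv = inverse2(A)
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# The system x + 2y = 5, 3x + 4y = 11 has the unique solution x = 1, y = 2:
print(solve2([[1, 2], [3, 4]], [5, 11]))  # [Fraction(1, 1), Fraction(2, 1)]
```

Passing a singular matrix such as `[[1, 2], [2, 4]]` raises the `ValueError`, mirroring the pitfall noted above: a zero determinant means the system has either no solutions or infinitely many, and the inverse method cannot distinguish the two.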
See Also

- Linear Algebra
- Group Theory Introduction
- Hessian Matrix

Cite as: Matrices. Brilliant.org. Retrieved September 11, 2025.
MIL-STD-810H METHOD 516.8

Source: -- Downloaded: 2019-03-04T16:12Z Check the source to verify that this is the current version before use.

METHOD 516.8
SHOCK

CONTENTS

1. SCOPE
1.1 PURPOSE
1.2 APPLICATION
1.3 LIMITATIONS
2. TAILORING GUIDANCE
2.1 SELECTING THE SHOCK METHOD
2.1.1 EFFECTS OF SHOCK
2.1.2 SEQUENCE AMONG OTHER METHODS
2.2 SELECTING A PROCEDURE
2.2.1 PROCEDURE SELECTION CONSIDERATIONS
2.2.2 DIFFERENCE AMONG PROCEDURES
2.3 DETERMINE TEST LEVELS AND CONDITIONS
2.3.1 GENERAL CONSIDERATIONS - TERMINOLOGY AND PROCESSING PROCEDURES WITH ILLUSTRATION
2.3.1.1 THE SHOCK MODEL
2.3.1.2 LABORATORY SHOCK TEST OPTIONS
2.3.2 TEST CONDITIONS
2.3.2.1 SRS BASED ON MEASURED DATA
2.3.2.2 SRS IN THE ABSENCE OF MEASURED DATA
2.3.2.3 CLASSICAL SHOCK PULSE
2.3.3 TEST AXES AND NUMBER OF SHOCK EVENTS – GENERAL CONSIDERATIONS
2.3.3.1 SPECIAL CONSIDERATIONS FOR COMPLEX TRANSIENTS
2.4 TEST ITEM CONFIGURATION
3. INFORMATION REQUIRED
3.1 PRETEST
3.2 DURING TEST
3.3 POST-TEST
4. TEST PROCESS
4.1 TEST FACILITY
4.2 CONTROLS
4.2.1 CALIBRATION
4.2.2 TOLERANCES
4.2.2.1 CLASSICAL PULSES AND COMPLEX TRANSIENT PULSE-TIME DOMAIN
4.2.2.2 COMPLEX TRANSIENT PULSES-SRS
4.3 TEST INTERRUPTION
4.3.1 INTERRUPTION DUE TO LABORATORY EQUIPMENT MALFUNCTION
4.3.2 INTERRUPTION DUE TO TEST ITEM OPERATION FAILURE
4.4 INSTRUMENTATION
4.5 DATA ANALYSIS
4.6 TEST EXECUTION
4.6.1 PREPARATION FOR TEST
4.6.1.1 PRELIMINARY GUIDELINES
4.6.1.2 PRETEST CHECKOUT
4.6.1.3 PROCEDURES' OVERVIEW
4.6.2 FUNCTIONAL SHOCK (PROCEDURE I)
4.6.2.1 TEST CONTROLS - FUNCTIONAL SHOCK (PROCEDURE I)
4.6.2.2 TEST TOLERANCES - FUNCTIONAL SHOCK (PROCEDURE I)
4.6.2.3 TEST PROCEDURE I - FUNCTIONAL SHOCK (PROCEDURE I)
4.6.3 TRANSPORTATION SHOCK (PROCEDURE II)
4.6.3.1 TEST CONTROLS - TRANSPORTATION SHOCK (PROCEDURE II)
4.6.3.2 TEST TOLERANCES - TRANSPORTATION SHOCK (PROCEDURE II)
4.6.3.3 TEST PROCEDURE - TRANSPORTATION SHOCK (PROCEDURE II)
4.6.4 FRAGILITY (PROCEDURE III)
4.6.4.1 TEST CONTROLS - FRAGILITY (PROCEDURE III)
4.6.4.2 TEST TOLERANCES - FRAGILITY (PROCEDURE III)
4.6.4.3 TEST PROCEDURE - FRAGILITY (PROCEDURE III)
4.6.5 TRANSIT DROP (PROCEDURE IV)
4.6.5.1 TEST CONTROLS - TRANSIT DROP (PROCEDURE IV)
4.6.5.2 TEST TOLERANCES - TRANSIT DROP (PROCEDURE IV)
4.6.5.3 TEST PROCEDURE - TRANSIT DROP (PROCEDURE IV)
4.6.6 CRASH HAZARD SHOCK (PROCEDURE V)
4.6.6.1 TEST CONTROLS - CRASH HAZARD SHOCK (PROCEDURE V)
4.6.6.2 TEST TOLERANCES - CRASH HAZARD SHOCK (PROCEDURE V)
4.6.6.3 TEST PROCEDURE - CRASH HAZARD SHOCK (PROCEDURE V)
4.6.7 BENCH HANDLING (PROCEDURE VI)
4.6.7.1 TEST CONTROLS - BENCH HANDLING (PROCEDURE VI)
4.6.7.2 TEST TOLERANCES - BENCH HANDLING (PROCEDURE VI)
4.6.7.3 TEST PROCEDURE - BENCH HANDLING (PROCEDURE VI)
4.6.8 PENDULUM IMPACT (PROCEDURE VII)
4.6.8.1 TEST CONTROLS - PENDULUM IMPACT (PROCEDURE VII)
4.6.8.2 TEST TOLERANCES - PENDULUM IMPACT (PROCEDURE VII)
4.6.8.3 TEST PROCEDURE - PENDULUM IMPACT (PROCEDURE VII)
4.6.9 CATAPULT LAUNCH/ARRESTED LANDING (PROCEDURE VIII)
4.6.9.1 TEST CONTROLS - CATAPULT LAUNCH/ARRESTED LANDING (PROCEDURE VIII)
4.6.9.2 TEST TOLERANCES - CATAPULT LAUNCH/ARRESTED LANDING (PROCEDURE VIII)
4.6.9.3 TEST PROCEDURE - CATAPULT LAUNCH/ARRESTED LANDING (PROCEDURE VIII)
5. ANALYSIS OF RESULTS
6. REFERENCE/RELATED DOCUMENTS
6.1 REFERENCED DOCUMENTS
6.2 RELATED DOCUMENTS

FIGURES

FIGURE 516.8-1. BASE INPUT SDOF SYSTEM MODEL FOR SHOCK CONSIDERATIONS
FIGURE 516.8-2. TEST SRS FOR USE IF MEASURED DATA ARE NOT AVAILABLE (FOR PROCEDURE I - FUNCTIONAL SHOCK, AND PROCEDURE V - CRASH HAZARD SHOCK TEST)
FIGURE 516.8-3. TERMINAL PEAK SAWTOOTH SHOCK PULSE CONFIGURATION AND ITS TOLERANCE LIMITS
FIGURE 516.8-4. TRAPEZOIDAL SHOCK PULSE CONFIGURATION AND TOLERANCE LIMITS
FIGURE 516.8-5. HALF-SINE SHOCK PULSE CONFIGURATION AND TOLERANCE LIMITS
FIGURE 516.8-6. ILLUSTRATION OF TEMPORAL AND SPECTRAL DISTORTION ASSOCIATED WITH A COMPENSATED CLASSICAL TERMINAL PEAK SAWTOOTH
FIGURE 516.8-7. TRAPEZOIDAL PULSE: VELOCITY CHANGE VERSUS DROP HEIGHT
FIGURE 516.8-8. STANDARD DROP ORIENTATIONS FOR RECTANGULAR AND CYLINDRICAL PACKAGES
FIGURE 516.8-9. ILLUSTRATION OF EDGE DROP CONFIGURATION (CORNER DROP END VIEW IS ALSO ILLUSTRATED)
FIGURE 516.8-10. PENDULUM IMPACT TEST
FIGURE 516.8-11. SAMPLE MEASURED STORE THREE AXIS CATAPULT LAUNCH COMPONENT RESPONSE ACCELERATION TIME HISTORIES
FIGURE 516.8-12. SAMPLE MEASURED STORE THREE AXIS ARRESTED LANDING COMPONENT RESPONSE ACCELERATION TIME HISTORIES

TABLES

TABLE 516.8-I. SHOCK TEST PROCEDURES AND CONFIGURATIONS SUMMARY
TABLE 516.8-II. LABORATORY TEST OPTIONS
TABLE 516.8-III. TEST SHOCK RESPONSE SPECTRA FOR USE IF MEASURED DATA ARE NOT AVAILABLE
TABLE 516.8-IV. TERMINAL PEAK SAWTOOTH DEFAULT TEST PARAMETERS FOR PROCEDURE I - FUNCTIONAL TEST (REFER TO FIGURE 516.8-3)
TABLE 516.8-V. HSC STANDARDIZED REQUIREMENTS
TABLE 516.8-VI. HSC LIMITED APPLICATION REQUIREMENTS BY CRAFT SIZE
TABLE 516.8-VII. PROCEDURE II - TRANSPORTATION SHOCK TEST SEQUENCE
TABLE 516.8-VIII. FRAGILITY SHOCK TRAPEZOIDAL PULSE PARAMETERS (REFER TO FIGURE 516.8-4)
TABLE 516.8-IX. LOGISTIC TRANSIT DROP TEST
TABLE 516.8-X. TACTICAL TRANSPORT DROP TEST
TABLE 516.8-XI. SEVERE TACTICAL TRANSPORT DROP TEST
TABLE 516.8-XII. FIVE STANDARD DROP TEST ORIENTATIONS
TABLE 516.8-XIII. TERMINAL PEAK SAWTOOTH DEFAULT TEST PARAMETERS FOR PROCEDURE V – CRASH HAZARD (REFER TO FIGURE 516.8-3)

METHOD 516.8 ANNEX A
MEASUREMENT SYSTEM CHARACTERIZATION AND BASIC PROCESSING

1. SINGLE SHOCK EVENT MEASUREMENT SYSTEM CHARACTERIZATION AND BASIC PROCESSING
1.1 MEASUREMENT SYSTEM AND SIGNAL CONDITIONING PARAMETERS
1.2 MEASUREMENT SHOCK IDENTIFICATION
1.3 EFFECTIVE SHOCK DURATION FOR NON-CLASSICAL SHOCKS
1.3.1 CALCULATION OF Te
1.3.2 CALCULATION OF TE
1.3.3 IMPLEMENTATION CONSIDERATIONS
1.4 SHOCK RESPONSE SPECTRUM
1.4.1 PROCESSING GUIDELINES
1.4.2 PROCESSING EXAMPLE
1.5 FREQUENCY DOMAIN IDENTIFICATION ENERGY SPECTRAL DENSITY (ESD)
1.6 SINGLE EVENT / MULTIPLE CHANNEL MEASUREMENT PROCESSING GUIDELINES
1.7 MEASUREMENT PROBABILISTIC / STATISTICAL SUMMARY
1.8 OTHER PROCESSING

ANNEX A FIGURES

FIGURE 516.8A-1. a. FILTER ATTENUATION
FIGURE 516.8A-1. b. ILLUSTRATION OF SAMPLING RATES AND OUT OF BAND "FOLD OVER" FREQUENCIES FOR DATA ACQUISITION SYSTEMS
FIGURE 516.8A-2. EXAMPLE ACCELERATION TIME HISTORY
FIGURE 516.8A-3. EXAMPLE SIMPLE SHOCK TIME HISTORY WITH SEGMENT IDENTIFICATION
FIGURE 516.8A-4. MAXIMAX PSEUDO-VELOCITY SRS ESTIMATES FOR SHOCK AND NOISE FLOOR SEGMENTS
FIGURE 516.8A-5. SHOCK MAXIMUM AND MINIMUM PSEUDO-VELOCITY SRS ESTIMATES
FIGURE 516.8A-6. SHOCK MAXIMUM AND MINIMUM ACCELERATION SRS ESTIMATES
FIGURE 516.8A-7. MAXIMAX ACCELERATION SRS ESTIMATES FOR SHOCK AND NOISE FLOOR SEGMENTS

METHOD 516.8 ANNEX B
GUIDELINES FOR ADDITIONAL SHOCK TIME HISTORY VALIDATION AND PROCESSING

1. INTRODUCTION
2. COMPLEX SHOCKS
3. ADDITIONAL SIMPLE SHOCK PROCESSING AND VALIDATION
3.1 INTRODUCTION
3.2 INSTANTANEOUS ROOT-MEAN-SQUARE (RMS)
3.3 SHOCK VELOCITY/DISPLACEMENT VALIDATION CRITERIA
3.4 ESD ESTIMATE
4. SHOCK IDENTIFICATION AND ANOMALOUS MEASUREMENT BEHAVIOR

ANNEX B FIGURES

FIGURE 516.8B-1. SHOCK TIME HISTORY WITH SEGMENT IDENTIFICATION AND Te AND TE TIME INTERVALS ILLUSTRATED
FIGURE 516.8B-2. A COMPLEX SHOCK
FIGURE 516.8B-3. SHOCK TIME HISTORY INSTANTANEOUS ROOT-MEAN-SQUARE
FIGURE 516.8B-4. MEASUREMENT VELOCITY VIA INTEGRATION OF MEAN (DC) REMOVED ACCELERATION
FIGURE 516.8B-5. MEASUREMENT DISPLACEMENT VIA INTEGRATION OF VELOCITY AFTER MEAN (DC) REMOVAL
FIGURE 516.8B-6. SHOCK ESD ESTIMATE
FIGURE 516.8B-7. MEASUREMENT INPUT OVERDRIVING THE SIGNAL CONDITIONING WITH CLIPPING
FIGURE 516.8B-8. NOISY OR MISSING MEASUREMENT SIGNALS
FIGURE 516.8B-9. COMBINATION AMPLIFIER OVERDRIVING AND NOISE

METHOD 516.8 ANNEX C
STATISTICAL AND PROBABILISTIC CONSIDERATIONS FOR DEVELOPING LIMITS ON PREDICTED AND PROCESSED DATA ESTIMATES

1. SCOPE
1.1 PURPOSE
1.2 APPLICATION
2. DEVELOPMENT
2.1 LIMIT ESTIMATE SET SELECTION
2.2 ESTIMATE PREPROCESSING CONSIDERATIONS
2.3 PARAMETRIC UPPER LIMIT STATISTICAL ESTIMATE ASSUMPTIONS
2.3.1 NTL - UPPER NORMAL ONE-SIDED TOLERANCE LIMIT
2.3.2 NPL - UPPER NORMAL PREDICTION LIMIT
2.4 NON-PARAMETRIC UPPER LIMIT STATISTICAL ESTIMATE PROCEDURES
2.4.1 ENVELOPE (ENV) - UPPER LIMIT
2.4.2 DISTRIBUTION FREE LIMIT (DFL) - UPPER DISTRIBUTION-FREE TOLERANCE LIMIT
2.4.3 EMPIRICAL TOLERANCE LIMIT (ETL) - UPPER EMPIRICAL TOLERANCE LIMIT
3. EXAMPLE
3.1 INPUT TEST DATA SET
3.2 PARAMETRIC UPPER LIMITS
3.3 NON-PARAMETRIC UPPER LIMITS
3.4 OBSERVATIONS
4. RECOMMENDED PROCEDURES
4.1 RECOMMENDED STATISTICAL PROCEDURES FOR UPPER LIMIT ESTIMATES
4.2 UNCERTAINTY FACTORS

ANNEX C FIGURES

FIGURE 516.8C-1. INPUT TEST DATA SET
FIGURE 516.8C-2. PARAMETRIC AND NON-PARAMETRIC UPPER LIMITS

ANNEX C TABLES

TABLE 516.8C-I. NORMAL TOLERANCE FACTORS FOR UPPER TOLERANCE LIMIT
TABLE 516.8C-II. INPUT TEST DATA SET

METHOD 516.8
SHOCK

NOTE: Tailoring is essential. Select methods, procedures, and parameter levels based on the tailoring process described in Part One, paragraph 4.2.2, and its Annex C. Apply the general guidelines for laboratory test methods described in Part One, paragraph 5 of this Standard. Due to extensive revision to this method, no change bars have been provided.

1. SCOPE.

1.1 Purpose.

Shock tests are performed to:

a. Provide a degree of confidence that materiel can physically and functionally withstand the shocks encountered in handling, transportation, and service environments. This may include an assessment of the overall materiel system integrity for safety purposes in any one or all of the handling, transportation, and service environments.

b. Determine the materiel's fragility level, in order that packaging, stowage, or mounting configurations may be designed to protect the materiel's physical and functional integrity.

c.
Test the strength of devices that attach materiel to platforms that may be involved in a crash situation, and verify that the materiel itself does not create a hazard or that parts of the materiel are not ejected during a crash situation.

1.2 Application.

Use this Method to evaluate the physical and functional performance of materiel likely to be exposed to mechanically induced shocks in its lifetime. Such mechanical shock environments are generally limited to a frequency range not to exceed 10,000 Hz, and a duration of not more than 1.0 second. (In most cases of mechanical shock, the significant materiel response frequencies will not exceed 4,000 Hz, and the duration of materiel response will not exceed 0.1 second.)

1.3 Limitations.

This method does not include:

a. The effects of shock experienced by materiel as a result of pyrotechnic device initiation. For this type of shock, see Method 517.3, Pyroshock.

b. The effects experienced by materiel subjected to very high level localized impact shocks, e.g., ballistic impacts. For this type of shock, see Method 522.2, Ballistic Shock.

c. The high impact shock effects experienced by materiel aboard a ship due to wartime service. Consider performing shock tests for shipboard materiel in accordance with MIL-DTL-901 (paragraph 6.1, reference c).

d. The effects experienced by fuse systems. Perform shock tests for safety and operation of fuses and fuse components in accordance with MIL-STD-331 (paragraph 6.1, reference d).

e. The effects experienced by materiel that is subject to high pressure wave impact, e.g., pressure impact on a materiel surface as a result of firing of a gun. For this type of shock and subsequent materiel response, see Method 519.8, Gunfire Shock.

f. The shock effects experienced by very large extended materiel, e.g., building pipe distribution systems, over which varied parts of the materiel may experience different and unrelated shock events.
For this type of shock, devise specialized tests based on analytical models and/or experimental measurement data.

g. Special provisions for performing combined mechanical/climatic environment tests (e.g., shock tests at high or low temperatures). Guidelines found in the climatic test methods may be helpful in setting up and performing combined environment tests.

h. Shocks integrated with transient vibration that are better replicated under Time Waveform Replication (TWR) methodology. See Method 525.2.

i. Guidance on equivalence techniques for comparison of shock and vibration environments. Method 516, Annex C (Autospectral Density with Equivalent Test Shock Response Spectra) that was in previous revisions of MIL-STD-810 has been removed.

j. Repetitive shocks associated with unrestrained cargo in ground transport vehicles that may be best replicated under loose cargo transportation methodology. See Method 514.8, Procedure II.

2. TAILORING GUIDANCE.

2.1 Selecting the Shock Method.

After examining requirements documents and applying the tailoring process in Part One of this Standard to determine where mechanical shock environments are foreseen in the life cycle of the materiel, use the following to confirm the need for this Method and to place it in sequence with other methods.

2.1.1 Effects of Shock.

Mechanical shock has the potential for producing adverse effects on the physical and functional integrity of all materiel. In general, the damage potential is a function of the amplitude, velocity, and duration of the shock. Shocks with frequency content that corresponds with materiel natural frequencies will magnify the adverse effects on the materiel's overall physical and functional integrity.
The materiel response to the mechanical shock environment will, in general, be highly oscillatory, of short duration, and have a substantial initial rise time with large positive and negative peak amplitudes of about the same order of magnitude (for high velocity impact shock, e.g., penetration shocks, there may be significantly less or no oscillatory behavior, with substantial area under the acceleration response curve). The peak responses of materiel to mechanical shock will, in general, be enveloped by a decreasing form of exponential function in time. In general, mechanical shock applied to a complex multi-modal materiel system will cause the materiel to respond to (1) forced frequencies of a transient nature imposed on the materiel from the external excitation environment, and (2) the materiel's resonant natural frequencies either during or after application of the external excitation environment. Such response may cause:

a. Materiel failure as a result of increased or decreased friction between parts, or general interference between parts.

b. Changes in materiel dielectric strength, loss of insulation resistance, variations in magnetic and electrostatic field strength.

c. Materiel electronic circuit card malfunction, electronic circuit card damage, and electronic connector failure. (On occasion, circuit card contaminants having the potential to cause short circuit may be dislodged under materiel response to shock.)

d. Permanent mechanical deformation of the materiel as a result of overstress of materiel structural and non-structural members.

e. Collapse of mechanical elements of the materiel as a result of the ultimate strength of the component being exceeded.

f. Accelerated fatiguing of materials (low cycle fatigue).

g. Potential piezoelectric activity of materials.

h. Materiel failure as a result of cracks in fracturing crystals, ceramics, epoxies, or glass envelopes.

2.1.2 Sequence Among Other Methods.

a. General.
Use the anticipated life cycle sequence of events as a general sequence guide (see Part One, paragraph 5.5).

b. Unique to this Method. Sequencing among other methods will depend upon the type of testing, i.e., developmental, qualification, endurance, etc., and the general availability of test items for test. Normally, schedule shock tests early in the test sequence, but after any vibration tests, with the following additional guidelines:

(1) If the shock environment is deemed particularly severe, and the chances of materiel survival without structural or operational failure are small, the shock test should be first in the test sequence. This provides the opportunity to redesign the materiel to meet the shock requirement before testing to the more benign environments.

(2) If the shock environment is deemed severe, but the chance of materiel survival without structural or functional failure is good, perform the shock test after vibration and thermal tests, allowing the stressing of the test item prior to shock testing to uncover combined mechanical and thermal failures.

(3) There are often advantages to applying shock tests before climatic tests, provided this sequence represents realistic service conditions. Test experience has shown that climate-sensitive defects often show up more clearly after the application of shock environments. However, internal or external thermal stresses may permanently weaken materiel resistance to vibration and shock that may go undetected if shock tests are applied before climatic tests.

2.2 Selecting a Procedure.

Table 516.8-I summarizes the eight test procedures covered in this Method with respect to the applicable configurations and operational states of the unit under test.

Table 516.8-I. Shock Test Procedures and Configurations Summary.
Procedure   Description                          Packaged | Unpackaged | Operational | Non-Operational
I           Functional Shock                     X X
II          Transportation Shock                 X X X
III         Fragility                            X X
IV          Transit Drop                         X X X
V           Crash Hazard Shock                   X
VI          Bench Handling                       X X
VII         Pendulum Impact                      X X
VIII        Catapult Launch/Arrested Landing     X X X

2.2.1 Procedure Selection Considerations.

Based on the test data requirements, determine which test procedure, combination of procedures, or sequence of procedures is applicable. In many cases, one or more of the procedures will apply. Consider all shock environments anticipated for the materiel during its life cycle, both in its logistic and operational modes. When selecting procedures, consider:

a. The Operational Purpose of the Materiel. From requirement documents, determine the operations or functions to be performed by the materiel before, during, and after the shock environment.

b. The Natural Exposure Circumstances. Procedures I through VII are based on single shock events that result from momentum exchange between materiel or materiel support structures and another body. Procedure VIII (Catapult Launch/Arrested Landing) contains a sequence of two shocks separated by a comparatively short duration transient vibration for catapult launch, and a single shock for arrested landing.

c. Data Required. The test data required to document the test environment, and to verify the performance of the materiel before, during, and after the test.

2.2.2 Difference Among Procedures.

a. Procedure I - Functional Shock. Procedure I is intended to test materiel (including mechanical, electrical, hydraulic, and electronic) in its functional mode, and to assess the physical integrity, continuity, and functionality of the materiel to shock.
In general, the materiel is required to function during and after the shock, and to survive without damage from shocks representative of those that may be encountered during operational service.

b. Procedure II - Transportation Shock. Procedure II is used to evaluate the response of an item or restraint system to transportation environments that create a repetitive shock load. The procedure uses a classical terminal peak sawtooth, either a measured or a synthetic shock waveform, to represent the shock excitation portion of the transportation scenario. The shock can be a repetitive event of similar amplitude, or an irregular event that varies in amplitude and frequency bandwidth. Ground vehicle transportation is a common source of transportation shock. Procedure II is not equivalent to, or a substitute for, Method 514.8, Secured Cargo Vibration or Category 5, Loose Cargo, or the other Method 516.8 shock test procedures.

c. Procedure III - Fragility. Procedure III is used early in the item development program to determine the materiel's fragility level, in order that packaging, stowage, or mounting configurations may be designed to protect the materiel's physical and functional integrity. This procedure is used to determine the critical shock conditions at which there is a chance of structural and/or operational system degradation, based upon a systematic increase in shock input magnitudes. To achieve the most realistic criteria, perform the procedure at environmental temperature extremes.

d. Procedure IV - Transit Drop. Procedure IV is a physical drop test, and is intended for materiel either outside of, or within, its transit or combination case, or as prepared for field use (carried to a combat situation by man, truck, rail, etc.).
This procedure is used to determine if the materiel is capable of withstanding the shocks normally induced by loading and unloading when it is (1) outside of its transit or combination case, e.g., during routine maintenance, when being removed from a rack, being placed in its transit case, etc., or (2) inside its transit or combination case. Such shocks are accidental, but may impair the functioning of the materiel. This procedure is not intended for shocks encountered in a normal logistic environment as experienced by materiel inside bulk cargo shipping containers (ISO, CONEX, etc.). See Procedure II (Transportation Shock), and Procedure VII (Pendulum Impact).

e. Procedure V - Crash Hazard Shock Test. Procedure V is for materiel mounted in air or ground vehicles that could break loose from its mounts, tiedowns, or containment configuration during a crash, and present a hazard to vehicle occupants and bystanders. This procedure is intended to verify the structural integrity of materiel mounts, tiedowns, or containment configuration during simulated crash conditions. Use this test to verify the overall structural integrity of the materiel, i.e., that parts of the materiel are not ejected during the shock. In some instances, the crash hazard can be evaluated by a static acceleration test (Method 513.8, Procedure III), or a transient shock (Method 516.8, Procedure V). The requirement for one or both procedures must be evaluated based on the test item.

f. Procedure VI - Bench Handling. Procedure VI is intended for materiel that may typically experience bench handling, bench maintenance, or packaging. It is used to determine the ability of the materiel to withstand representative levels of shock encountered during such environments. This procedure is appropriate for materiel out of its transit or combination case. Such shocks might occur during materiel repair.
This procedure may include testing for materiel with protrusions that may be easily damaged without regard to gross shock on the total materiel. The nature of such testing must be determined on a case-by-case basis, noting the configuration of the materiel protrusions, and the case scenarios for damage during such activities as bench handling, maintenance, and packaging.

g. Procedure VII - Pendulum Impact. Procedure VII is intended to test the ability of large shipping containers to resist horizontal impacts, and to determine the ability of the packaging and packing methods to provide protection to the contents when the container is impacted. This test is meant to simulate accidental handling impacts, and is used only on containers that are susceptible to accidental end impacts. The pendulum impact test is designed specifically for large and/or heavy shipping containers that are likely to be handled mechanically rather than manually.

NOTE: The rail impact test, formerly Procedure VII, has been moved to Method 526.2.

h. Procedure VIII - Catapult Launch/Arrested Landing. Procedure VIII is intended for materiel mounted in or on fixed-wing aircraft that is subject to catapult launches and arrested landings. For catapult launch, materiel may experience a combination of an initial shock followed by a low level transient vibration of some duration having frequency components in the vicinity of the mounting platform's lowest frequencies, and concluded by a final shock according to the catapult event sequence. For arrested landing, materiel may experience an initial shock followed by a low level transient vibration of some duration having frequency components in the vicinity of the mounting platform's lowest frequencies.

2.3 Determine Test Levels and Conditions.
Having selected this Method and relevant procedures (based on the materiel's requirements documents and the tailoring process), complete the tailoring process by identifying appropriate parameter levels, applicable test conditions, and test techniques for the selected procedures. Base these selections on the requirements documents, the Life Cycle Environmental Profile (LCEP), and the information provided with the appropriate procedure.

Many laboratory shock tests are conducted under standard ambient test conditions as discussed in Part One, paragraph 5. However, when the life cycle events being simulated occur in environmental conditions significantly different from standard ambient conditions, consider applying those environmental factors during shock testing. Individual climatic test procedures of this Standard include guidance for determining levels of other environmental loads. For temperature-conditioned environmental tests (high temperature tests of explosive or energetic materials in particular), consider the materiel degradation due to extreme climatic exposure to ensure the total test program climatic exposure does not exceed the life of the materiel (see Part One, paragraph 5.19). Consider the following when selecting test levels:

2.3.1 General Considerations - Terminology and Processing Procedures with Illustration.
Much of the core terminology associated with shock testing is addressed in the following topics: (1) the shock model; (2) laboratory shock test options, including tailoring when measured data are available; (3) single shock event characterization (in particular the crucial issue of shock duration, with detailed additional information supplied in Annex A); (4) procedures for a single shock event with multiple channel measurement processing for laboratory tests; (5) reference to statistical and probabilistic summary information for multiple shock events over possible multiple related measurements, provided in Annex C; and (6) references to more advanced analysis techniques for characterizing a shock environment and its effects on materiel. Information in Annex C is crucial for processing measured data and test specification development.

2.3.1.1 The Shock Model.

This paragraph is essential to understanding the nature of the shock environment applied to materiel. The shock model represents materiel with a shock input defined by a comparatively short time and a moderately high-level impulse. The duration of the input is usually much less than the period of the fundamental frequency of the mounted materiel, and the amplitude of the input is above the peaks of extreme materiel vibration response levels. Generally, the impulse input is distributed to the materiel surface or body directly or, more commonly, to the materiel through its mounts to a primary structure. It is difficult to directly measure such an impulse in time versus magnitude. When the impulse is applied to the materiel through its mounting points to a structure, a simple base-excited single-degree-of-freedom (SDOF) linear system can serve as a shock model for the materiel at a single resonant frequency of the materiel. Figure 516.8-1 displays such a system, with the mass representing the materiel, and the combination spring/damper representing the path that supplies the impulse to the materiel.
This model is used to define the Shock Response Spectra (SRS) considered throughout the subparagraphs of 2.3.1 and Annex A. Figure 516.8-1 also displays the second order differential equations of motion, which justify a base input impulse specified as displacement/velocity. The solution can be in terms of absolute mass motion acceleration, or in terms of relative motion between the base and the mass. For an assumed base input acceleration measurement, the second-order differential equation of motion is "solved" by filtering the shock acceleration using a series of SDOF systems based upon a ramp-invariant digital filter algorithm (paragraph 6.1, reference i). The SRS is provided by a plot of natural frequency (undamped SDOF natural frequency) versus specified mass response amplitude, and is obtained as the output of the SDOF bandpass filters when the transient shock time history acceleration serves as the input to the base. Materiel response acceleration (usually measured at a materiel mount location or, less preferably, at a materiel subcomponent with potential for local resonant response) will generally be the variable used in characterization of the effects of the shock. This does not preclude other variables of materiel response, such as velocity, displacement, or strain, from being used and processed in an analogous manner, as long as the interpretation of the measurement variable is clear, and the measurement/signal conditioning configuration is valid, e.g., measurements made within the significant frequency range of materiel response, etc. If, for example, base input velocity is obtained from measurement, all relative and absolute quantities will be transformed from those based upon base input acceleration (see Annex A).
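The SDOF filtering just described can be sketched numerically. As a hedge: the Method specifies a ramp-invariant digital filter algorithm (paragraph 6.1, reference i), whereas this illustrative sketch integrates each base-excited SDOF with SciPy's general LTI simulator, which is adequate for demonstration when the sample rate is well above the highest analysis frequency. The 20 G terminal-peak sawtooth input and the frequency list are assumptions for the example, not values taken from this Method.

```python
import numpy as np
from scipy import signal

def srs_maximax(accel, dt, freqs, Q=10.0):
    """Maximax absolute-acceleration SRS of a base input time history.

    Each natural frequency fn defines a base-excited SDOF whose absolute
    acceleration transmissibility is
        H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2).
    lsim stands in here for the Method's ramp-invariant filter (assumption).
    """
    zeta = 1.0 / (2.0 * Q)                     # Q = 10 -> 5% critical damping
    t = np.arange(len(accel)) * dt
    peaks = []
    for fn in freqs:
        wn = 2.0 * np.pi * fn
        sdof = signal.TransferFunction([2.0 * zeta * wn, wn ** 2],
                                       [1.0, 2.0 * zeta * wn, wn ** 2])
        # Largest absolute response peak over primary and residual response.
        peaks.append(np.max(np.abs(signal.lsim(sdof, accel, t)[1])))
    return np.array(peaks)

# Illustrative input: 20 G, 11 ms terminal-peak sawtooth at 20 kHz sampling,
# followed by quiet time so the residual (post-pulse) response is captured.
dt = 5e-5
n = int(0.011 / dt)
pulse = np.concatenate([20.0 * np.arange(n) / n, np.zeros(4000)])
freqs = np.array([10.0, 45.0, 100.0, 500.0, 1000.0])
srs = srs_maximax(pulse, dt, freqs)
```

At the high-frequency end the computed SRS approaches the 20 G peak of the input pulse, i.e., the constant "convergence magnitude" that the high-frequency portion of an SRS settles toward.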
It can be established that stress within materiel at a particular location is proportional to the velocity of the materiel at that same location (paragraph 6.1, references e and f). For the SDOF model, this implies that stress within the materiel is proportional to the relative velocity between the base and the mass, and not the absolute velocity of the mass. Annex A discusses the modeling of SDOF systems in more detail, and places emphasis on the fact that materiel with many resonant modes can often be thought of in terms of a series of independent SDOF systems as defined at the resonant frequencies of the materiel.

Base Input SDOF Differential Equation of Motion:

For base input motion coordinate x(t) and mass absolute motion coordinate y(t),

    m y''(t) + c y'(t) + k y(t) = k x(t) + c x'(t)

or, equivalently,

    m y''(t) + c [y'(t) - x'(t)] + k [y(t) - x(t)] = 0,  i.e.,  Fm(t) + Fc(t) + Fk(t) = 0

where
    Fm(t) = m y''(t) = inertial force on mass m
    Fc(t) = c [y'(t) - x'(t)] = viscous damping force related to viscous damping coefficient c
    Fk(t) = k [y(t) - x(t)] = linear spring force related to linear spring stiffness coefficient k.

If z(t) = y(t) - x(t), then

    m z''(t) + c z'(t) + k z(t) = -m x''(t)   or   z''(t) + (c/m) z'(t) + (k/m) z(t) = -x''(t).

Figure 516.8-1. Base input SDOF system model for shock considerations.

2.3.1.2 Laboratory Shock Test Options.

The following paragraphs address the various options for conduct of laboratory shock tests, including considerations regarding the availability of field data.

2.3.1.2.1 Summary.

For any configured materiel, ideally there exist "representative" field measurements of shock to which the materiel might be exposed during its life according to the LCEP.
The eight procedures in this Method generally describe the scenarios in which field shock to materiel may occur. The procedures go beyond scenarios, and suggest default drops, default pulses, and/or default SRSs for applying laboratory shock. These "defaults" may have originated from field measurement data on some generic materiel in a particular configuration that were summarized and documented at one time, but this documentation no longer exists. Such lack of documentation leaves this Method with some procedures that are based upon the best laboratory test information currently available. The reality is that obtaining accurate item-specific field measurements can be difficult, cost prohibitive, or not possible to acquire in a timely manner. However, to the maximum extent possible, tests based on measured data are the recommended option before use of the provided default test criteria.

NOTE: For materiel design and development, the option of tailoring a laboratory shock test from field measurement information is superior to any of the test procedures within this Method, and should be the first laboratory test option. This assumes that the measurement data bandwidth and the laboratory test bandwidth are strictly compatible.

2.3.1.2.2 Test Implementation Options.

Table 516.8-II summarizes the options for the eight laboratory test procedures. The options are defined as follows:

a. "TWR" (Time Waveform Replication) means that the measurement time history will be reproduced on the laboratory exciter with "minimal amplitude time history error" according to Method 525.2. TWR is typically implemented using special shock package software for replication.

b. "Drop" is an explicit free fall drop event.

c. "Classical Pulse" refers to classical pulses to be used in testing. Classical pulses defined within this Method are the terminal peak sawtooth, trapezoidal, and half-sine pulses.
This category is generally employed when suitable field measurement information is unavailable, and traditional testing is relied upon.

d. "SRS" refers to cases in which an SRS is used for the test specification, and the exciter shock is synthesized based upon amplitude modulated sine waves or damped sinusoids. This category may be based on the SRS equivalent of a classical pulse to reduce adverse effects associated with conducting classical shock testing on a shaker, or may be defined based upon an ensemble of measured field data. The application notes in Annex A, paragraph A.1.3, are important for defining the appropriate duration for the synthesized SRS pulse.

From Table 516.8-II, it is clear that the test procedures are divided according to the use of TWR, drop test procedures, classical pulses, or synthesized waveforms from SRS. TWR is considered the most realistic, as it is based upon direct replication of field measured data. Software vendors have generally incorporated an option for TWR within their "shock package," so that it is unnecessary to plan testing under specialized TWR software as called out in Methods 525.2 and 527.2; however, both of these Methods provide insight into tolerance and scaling related to a more general TWR methodology.

Table 516.8-II. Laboratory Test Options.

Procedure                                    Drop (1) | Classical Pulse: Half-Sine (2), Trapezoidal, TP Sawtooth | SRS | TWR
I    Functional Shock                        X X X X
II   Transportation Shock                    X X X
III  Fragility                               X X
IV   Transit Drop                            X
V    Crash Hazard Shock (3)                  X X X
VI   Bench Handling                          X
VII  Pendulum Impact (4)                     X
VIII Catapult Launch/Arrested Landing (5)    X

Note 1. The Drop test includes vertical free fall towers, impact machines, and other test methods with similar equipment.
Note 2. High Speed Craft is a special case of Functional Shock that is specified in terms of a classical half-sine.
Note 3.
In some cases the Crash Hazard Shock may be evaluated by a constant acceleration; see paragraph 2.2.2e.
Note 4. Pendulum Impact is a test item with horizontal motion that impacts a stationary barrier.
Note 5. A Catapult Launch/Arrested Landing test can be based on a measured waveform or a two second damped (Q=20) sine burst of required amplitude and frequency; see the test procedure.

2.3.1.2.3 Tailoring When Measured Data Are Available - General Discussion.

Since test tailoring to field measured data is considered a superior technique for shock testing, the information and guidelines in this and subsequent paragraphs are very important. Beyond the classical pulse, two techniques of shock replication in the laboratory are possible.

a. The first technique takes a measurement shock, and conditions it for direct waveform replication on the laboratory exciter. Conditioning may consist of bandwidth limiting via lowpass, highpass, or bandpass filtering, and re-sampling into an ASCII or other general file format. Vendor packages may have this capability within the "shock package" or in a special "Time Waveform Replication (TWR) package".

b. The second technique takes a measurement shock, computes an SRS estimate, and subsequently uses this SRS estimate to synthesize a representative time domain reference using a "wavelet" or a damped sine-based synthesis approach. In order to maintain a reasonable correlation between the effective pulse durations in the field measured and laboratory synthesized signals, in addition to the SRS reference to be synthesized, the test operator will require knowledge of the basic temporal characteristics of the time domain signal(s) from which the reference SRS is computed. More on this subject follows in Annex A, Paragraph 1.3.
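The damped sine-based synthesis of technique (b) can be sketched as follows. This is a minimal illustration, not a vendor algorithm or the Method's prescription: the analysis frequencies, the flat 20 G target SRS, the 5 percent decay rate, and the simple fixed-point amplitude update are all assumptions made for the example, and a real tool would additionally enforce the effective-duration and tolerance requirements this Method discusses.

```python
import numpy as np
from scipy import signal

def srs_maximax(accel, dt, freqs, Q=10.0):
    """Maximax absolute-acceleration SRS via SDOF simulation (scipy's lsim
    stands in here for the Method's ramp-invariant filter)."""
    zeta = 1.0 / (2.0 * Q)
    t = np.arange(len(accel)) * dt
    peaks = []
    for fn in freqs:
        wn = 2.0 * np.pi * fn
        sdof = signal.TransferFunction([2.0 * zeta * wn, wn ** 2],
                                       [1.0, 2.0 * zeta * wn, wn ** 2])
        peaks.append(np.max(np.abs(signal.lsim(sdof, accel, t)[1])))
    return np.array(peaks)

def synthesize_damped_sines(freqs, target, dt, dur, decay_zeta=0.05, iters=6):
    """One damped sinusoid per analysis frequency; component amplitudes are
    scaled toward the target SRS with a simple fixed-point update."""
    t = np.arange(int(round(dur / dt))) * dt
    amps = np.asarray(target, dtype=float).copy()   # initial guess

    def build(a):
        return sum(ak * np.exp(-decay_zeta * 2.0 * np.pi * fk * t)
                   * np.sin(2.0 * np.pi * fk * t)
                   for ak, fk in zip(a, freqs))

    for _ in range(iters):
        achieved = srs_maximax(build(amps), dt, freqs)
        amps *= target / achieved                   # raise/lower each component
    return t, build(amps)

freqs = np.array([20.0, 60.0, 180.0, 540.0])        # illustrative grid
target = np.full(freqs.size, 20.0)                  # flat 20 G reference SRS
t, wave = synthesize_damped_sines(freqs, target, dt=1e-4, dur=0.25)
```

Because each damped sinusoid starts at time zero and decays, the energy of the synthesized transient stays in the initial portion of the record; a slower decay rate stretches the effective duration, which is exactly the abuse this Method cautions against when a time history is made to match an SRS over an inappropriately long time.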
In summary, when test tailoring based upon available field measured data is employed, there are basically two laboratory test options available (assuming that repetition of the laboratory shock is under the guidance of the LCEP). Depending on the conditions of the test in which the data were acquired, and the intended use for the data, the typical applications of the TWR and SRS test methods are described below.

a. TWR.

(1) Measured shock is a single shock field measurement or a highly repeatable multiple shock field measurement.
(2) Complex shocks.
(3) Adequate measurement, or the ability to predict time histories at relevant locations, in order to have adequate information at the mounting locations of the test article.
(4) Examples of such measurements are catapult launches, aircraft landings, and gunfire loads.

NOTE: The bandwidth of the measurement shock and the ability of the laboratory exciter system to "replicate the bandwidth" is an important consideration under TWR. TWR input time histories may be band-limited, and yet the materiel response may have broader bandwidth as a result of mounting. This area has not been studied to any extent, and can be a function of the materiel and its mounting. Time history bandwidths that exceed the laboratory exciter bandwidth place a rather severe limitation on the use of TWR for laboratory testing.

b. SRS.

(1) Single or multiple shock measurements where SRS values fit to a statistical distribution. Confirmation of the statistical trend must be made.
(2) Sensor placement is sparse relative to the area it is to characterize.
(3) The shock load is known to have a statistically high variance.
(4) An example of SRS preference would be the shock assigned to a ground vehicle's hull as a function of multiple terrains. Scaling for conservatism is ill-defined, but may be applied at the discretion of the analyst.
NOTE: SRS synthesis requires not only the SRS estimate, but (1) a general amplitude correspondence with a field measured or predicted pulse, and (2) an estimate of the field measured or predicted pulse duration. In general, synthesis is applicable only for "simple shocks" (see Annex A, paragraphs 1.2-1.3) with high frequency information very near the peak amplitude, i.e., for shocks whose rms duration is short. By the nature of the composition of the synthesized shock (i.e., damped sinusoids or "wavelets"), it is possible to inappropriately extend the duration of a time history that matches a given SRS to an indefinitely long time. Note also that when measurement data are available, certain shocks, in particular "complex shocks" (see Annex B), may only be adequately applied under TWR.

2.3.2 Test Conditions.

When defining shock test levels and conditions, every attempt needs to be made to obtain measured data under conditions similar to the service environment conditions in the Life Cycle Environmental Profile. Consider the following test execution ranking, from the most desirable to the least desirable:

a. TWR: Measured time histories summarized, and laboratory exciter shock created by way of direct reproduction of one or more selected time histories under exciter waveform control (see Method 525.2).

b. SRS based on Measured Data: Measured time histories summarized in the form of an SRS, and laboratory exciter shock synthesized by way of a complex transient, making sure that the effective shock durations (Te and TE) for the test pulse are consistent with the measured data, and that the character of the synthesized waveform is "similar" to the measured time histories with respect to amplitude and zero crossings (see Annex A, Paragraph 1.3 for a discussion and example of effective shock durations).

c.
SRS in the absence of Measured Data: No measured time histories, but previous SRS estimates available, and laboratory exciter shock synthesized by way of a complex transient such that the effective shock durations (Te and TE) are specified taking into consideration the nature of the environment and the natural frequency response characteristics of the materiel (see Annex A, Paragraphs 1.3 and 1.4).

d. Classical Shock Pulse: No measured time histories, but classical pulse shock descriptions available for use in reproducing the laboratory exciter shock (see Paragraph 2.3.2.3).

2.3.2.1 SRS Based on Measured Data.

When measured data is available, the SRS required for the test will be determined from analytical computations. The Te and TE required for the test will be determined from statistical processing of time history measurements of the materiel's environment (see Annex A, Paragraph 1.3). Unless otherwise specified, the SRS analysis will be performed on the AC coupled time history for Q = 10 at a sequence of natural frequencies spaced at 1/12 octave or less, to span a minimum bandwidth of 5 Hz to 2,000 Hz.

a. When a sufficient number of representative shock spectra are available, employ an appropriate statistical enveloping technique to determine the required test spectrum with a statistical basis (see Annex C of this Method).

b. When insufficient measured time histories are available for statistical analysis (only one or two time histories of like character), use an increase over the maximum of the available SRS spectra to establish the required test spectrum (if two spectra are available, determine a maximum envelope according to the ENV procedure of Annex C). The resulting spectra should account for stochastic variability in the environment, and uncertainty in any predictive methods employed. The degree of increase over measured time history spectra is based on engineering judgment, and should be supported by rationale.
In these cases, it is often convenient to add either a 3 dB or 6 dB margin to the enveloped SRS, depending on the degree of test level conservatism desired (see Annex C, paragraph 4.2). Effective durations Te and TE for test should be taken as the respective maximums computed from each of the measured time histories.

2.3.2.2 SRS in the Absence of Measured Data.

If measured data is not available, the SRS and the corresponding values of Te and TE may be derived from (1) a carefully scaled measurement of a dynamically similar environment, (2) structural analysis or other prediction methods, or (3) a combination of sources. For Procedure I (Functional Shock with Terminal Peak Sawtooth Reference Criteria), and Procedure V (Crash Hazard Shock), employ the applicable SRS spectrum from Figure 516.8-2 as the test spectrum for each axis, provided Te and TE of the test shock time history are in compliance with the accompanying Table 516.8-III. This spectrum approximates that of the perfect terminal-peak sawtooth pulse. General guidance for selecting the crossover frequency, Fco, for any classical pulse is to define it as the lowest frequency at which the corresponding SRS magnitude reaches the convergence magnitude (the constant magnitude reached in the high frequency portion of the SRS) for the damping ratio of interest. Once Fco is defined, the effective duration considered in the complex pulse synthesis is then defined as TE <= 2/Fco. This guidance allows for a longer effective duration than previous versions of this standard, which were found to be too restrictive. Refer to Annex A, paragraph 1.3, for additional guidance on customizing the bandwidth of the SRS and the corresponding values of Te and TE as required.
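The crossover-frequency guidance above can be illustrated numerically. This is a rough sketch under stated assumptions, not the Method's prescribed algorithm: the 20 G / 11 ms terminal-peak sawtooth, the logarithmic frequency grid, and the simple high-frequency plateau estimator are all illustrative choices, and the ramp-invariant SRS filter of reference i is again approximated with SciPy's general LTI simulator.

```python
import numpy as np
from scipy import signal

def srs_maximax(accel, dt, freqs, Q=10.0):
    """Maximax absolute-acceleration SRS (lsim approximation of the
    Method's ramp-invariant SDOF filtering)."""
    zeta = 1.0 / (2.0 * Q)
    t = np.arange(len(accel)) * dt
    peaks = []
    for fn in freqs:
        wn = 2.0 * np.pi * fn
        sdof = signal.TransferFunction([2.0 * zeta * wn, wn ** 2],
                                       [1.0, 2.0 * zeta * wn, wn ** 2])
        peaks.append(np.max(np.abs(signal.lsim(sdof, accel, t)[1])))
    return np.array(peaks)

# Illustrative classical pulse: 20 G, 11 ms terminal-peak sawtooth, with
# quiet time appended so the residual response is captured.
dt = 2e-5
n = int(0.011 / dt)
pulse = np.concatenate([20.0 * np.arange(n) / n, np.zeros(10000)])

freqs = np.logspace(np.log10(10.0), np.log10(2000.0), 40)
srs = srs_maximax(pulse, dt, freqs)

# Convergence magnitude: estimate of the high-frequency SRS plateau.
converge = np.mean(srs[freqs > 1000.0])
# Fco: lowest analysis frequency whose SRS reaches that plateau level.
fco = freqs[np.argmax(srs >= converge)]
# Effective-duration bound from the guidance TE <= 2/Fco.
te_max = 2.0 / fco
```

With these assumptions the plateau sits near the 20 G pulse peak and the crossover frequency lands near the knee of the spectrum, giving a TE bound of a few tens of milliseconds; a real specification would apply the tolerances and the Annex A duration definitions rather than this crude plateau estimator.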
It is recommended that the test be performed with a waveform that is synthesized from either (1) a superposition of damped sinusoids with selected properties at designated frequencies, or (2) a superposition of various amplitude modulated sine waves with selected properties at designated frequencies, such that this waveform has an SRS that approximates the SRS of Figure 516.8-2.

Source: -- Downloaded: 2019-03-04T16:12Z Check the source to verify that this is the current version before use.
MIL-STD-810H METHOD 516.8

In reality, any complex test transient with major energy in the initial portion of the time trace is suitable if it is within tolerance of this spectrum requirement over the minimum frequency range of 10 to 2000 Hz, and meets the duration requirements. Implementing a classical terminal peak sawtooth pulse or trapezoidal pulse on a vibration exciter is the least preferred test alternative. In the case in which a classical pulse is given as the reference criteria, it is permissible to synthesize a complex pulse based on the SRS characteristics of the referenced classical pulse. In such cases, Te and TE should be defined as in Table 516.8-III.

Figure 516.8-2. Test SRS for use if measured data are not available (for Procedure I - Functional Shock, and Procedure V - Crash Hazard Shock).

Table 516.8-III. Test shock response spectra for use if measured data are not available.

Test Category                                   Peak Acceleration (G-Pk)   Te (ms)¹    TE (ms)¹   Cross-over Frequency Fco (Hz)
Functional Test for Flight Equipment            20                         2.5/fmin    2/Fco      45
Functional Test for Ground Equipment²           40                         2.5/fmin    2/Fco      45
Launch/Eject During Captive Carry               30                         2.5/fmin    2/Fco      45
Crash Hazard Shock Test for Flight Equipment    40                         2.5/fmin    2/Fco      45
Crash Hazard Shock Test for Ground Equipment    75                         2.5/fmin    2/Fco      80

Note 1: The default value for fmin is 10 Hz, as shown in Figure 516.8-2.
Refer to the guidance in Paragraphs 4.2.2.2.c and 4.2.2.2.d to customize the bandwidth of the SRS and the corresponding values of Te and TE.

Note 2: For materiel mounted only in trucks and semi-trailers, use a 20 G peak value.

2.3.2.3 Classical Shock Pulse.

Classical shock pulses (e.g., half-sine, terminal peak sawtooth, or trapezoidal) may be defined by (1) time history measurements of the materiel's environment, (2) a carefully scaled measurement of a dynamically similar environment, (3) structural analysis or other prediction methods, or (4) a combination of sources. The terminal peak sawtooth is often referenced due to its relatively flat spectral characteristics in the SRS domain, as approximated in Figure 516.8-2. In the event that a priori information regarding rise time of the transient event being considered is determined to be a critical parameter, consider a half-sine pulse or a trapezoidal pulse with a tailored rising edge in lieu of the terminal peak sawtooth. Shock pulse substitution (e.g., half-sine in lieu of terminal peak sawtooth) requires adjustment of the amplitude such that the velocity change of the substituted shock pulse is equivalent to that of the original specification. The resulting over-test or under-test with respect to the difference in the SRS must be considered, documented, and approved by the appropriate test authority. If a classical shock pulse is defined in lieu of more complex measured time history data, it must be demonstrated that SRS estimates of the classical shock pulse are within the tolerances established for the SRS estimates of the measured time history data. In most cases, classical shock pulses will be defined as one of the following:

a.
Terminal Peak Sawtooth Pulse: The terminal peak sawtooth pulse, along with its parameters and tolerances, is provided in Figure 516.8-3, and is an alternative for testing in Procedure I - Functional Shock, Procedure II - Transportation Shock, and Procedure V - Crash Hazard Shock Test.

b. Trapezoidal Shock Pulse: The trapezoidal pulse, along with its parameters and tolerances, is provided in Figure 516.8-4. The trapezoidal pulse is specified for Procedure III - Fragility.

c. Half-Sine Shock Pulse: The half-sine pulse, along with its parameters and tolerances, is provided in Figure 516.8-5. The half-sine pulse is specified for Procedure I - High Speed Craft Functional Shock. As discussed in Paragraph 2.3.2.3.1, the half-sine pulse is often used in lieu of other classical pulses based upon equipment availability and/or limitations.

Figure 516.8-3. Terminal peak sawtooth shock pulse configuration and its tolerance limits. [Figure not reproduced: nominal pulse of peak amplitude A and duration TD, with upper (1.2A) and lower (0.8A) tolerance bounds and a ±0.2A band about zero outside the pulse.]

Figure 516.8-4. Trapezoidal shock pulse configuration and tolerance limits. [Figure not reproduced: same tolerance-band scheme as Figure 516.8-3.]

Figure 516.8-5. Half-sine shock pulse configuration and tolerance limits. [Figure not reproduced: same tolerance-band scheme as Figure 516.8-3.]

Key to Figures 516.8-3 through 516.8-5:
TD: duration of nominal pulse (tolerance on TD is ±10%).
A: peak acceleration of nominal pulse.
T1 (= 2.4TD): minimum time during which the pulse shall be monitored for shocks produced using a conventional mechanical shock machine.
T2 (= 6TD): minimum time during which the pulse shall be monitored for shocks produced using a vibration exciter.
The duration associated with the post-peak slope of a terminal peak sawtooth, and the durations associated with the pre- and post-pulse slopes of a trapezoidal pulse, should be less than 10% of TD. The tolerance on velocity change, due to the combined effects of any amplitude and/or duration deviations from the nominal pulse, is limited to ±20% of the nominal pulse's velocity change.

2.3.2.3.1 Classical Shock Pulses (Mechanical Shock Machine).

It is recognized that conducting a terminal peak sawtooth or trapezoidal pulse on a mechanical shock machine requires the use of special programmers (e.g., lead or gas programmers) and requires a higher impact velocity than an equivalent half-sine shock, since the half-sine pulse contains significant rebound velocity that is not characteristic of the terminal peak sawtooth pulse. Such programmers or high velocity shock machines are not available in all laboratories. In such cases, it may be necessary to resort to the use of the more readily available programmers used in the conduct of half-sine shock pulses. When substitution of shock pulses is necessary, follow the general guidance of maintaining a velocity change equivalent to that of the original reference pulse.

2.3.2.3.2 Classical Shock Pulses (Vibration Exciter).

If a vibration exciter is to be employed to conduct a test with a classical shock pulse, it will be necessary to optimize the reference pulse such that the net velocity and net displacement are zero. Unfortunately, the need to compensate the reference pulse distorts its temporal and spectral characteristics, resulting in two specific problems that will be illustrated through an example using a terminal peak sawtooth (the same argument is relevant for any classical pulse test to be conducted on a vibration exciter). First, any pre- and/or post-pulse compensation will be limited by the ±20 percent tolerances given in Figures 516.8-3 to 516.8-5. Second, as illustrated by the pseudo-velocity SRS in Figure 516.8-6, the velocities in the low frequency portion of the SRS will be significantly reduced in amplitude. Also, there is generally an area of increased amplitude associated with the duration of the pre- and post-test compensation. Observe that the low frequency drop-off in SRS levels between the compensated and uncompensated pulse is readily identifiable, and is labeled flow. Likewise, the frequency at which the compensated and uncompensated pulses converge is readily identifiable, and is labeled fhi. The drop-off at flow is considered to be acceptable if and only if the lowest resonant frequency of the item being tested, f1, is at least one octave greater than flow. The amount of gain in the region flow ≤ f ≤ fhi is directly related to the duration and magnitude of the compensation pulse and the percent of critical damping employed in the SRS computation (Q = 10 in Figure 516.8-6). The potential for over-test in this spectral band must also be carefully considered prior to proceeding.

Figure 516.8-6. Illustration of temporal and spectral distortion associated with a compensated classical terminal peak sawtooth.
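The two steps discussed in Paragraphs 2.3.2.3.1 and 2.3.2.3.2 can be sketched together: amplitude adjustment so a substituted half-sine matches the reference sawtooth's velocity change, then pre/post-pulse compensation so the record has zero net velocity and displacement for exciter reproduction. The 40 G / 11 ms reference pulse, the half-sine compensation shape, and the 2x2 solve are illustrative choices, not the Method's prescribed algorithm.

```python
import numpy as np

def half_sine_for_sawtooth(a_saw):
    """Amplitude of a half-sine of the same duration whose velocity change
    matches a terminal peak sawtooth of amplitude a_saw:
    a_saw*TD/2 = 2*a_hs*TD/pi, so a_hs = pi*a_saw/4."""
    return np.pi * a_saw / 4.0

def compensate(accel, dt, n_comp):
    """Add pre- and post-pulse half-sine compensation so that the record's
    net velocity and net displacement are both zero (required on a vibration
    exciter). Amplitudes are solved from a 2x2 linear system; illustrative
    scheme, not the Method's prescribed algorithm."""
    total = np.concatenate([np.zeros(n_comp), accel, np.zeros(n_comp)])
    w = np.sin(np.pi * np.arange(n_comp) / n_comp)  # half-sine window
    b1 = np.zeros_like(total); b1[:n_comp] = w      # pre-pulse basis
    b2 = np.zeros_like(total); b2[-n_comp:] = w     # post-pulse basis

    def vel_disp(a):
        v = np.cumsum(a) * dt   # rectangle-rule velocity
        d = np.cumsum(v) * dt   # rectangle-rule displacement
        return v[-1], d[-1]

    v0, d0 = vel_disp(total)
    v1, d1 = vel_disp(b1)
    v2, d2 = vel_disp(b2)
    c1, c2 = np.linalg.solve([[v1, v2], [d1, d2]], [-v0, -d0])
    return total + c1 * b1 + c2 * b2

# Illustrative: 40 G, 11 ms sawtooth -> velocity-equivalent half-sine (~31.4 G),
# then compensated for exciter reproduction.
dt = 1e-4
a_hs = half_sine_for_sawtooth(40.0)
t = np.arange(110) * dt
pulse = a_hs * np.sin(np.pi * t / 0.011)
record = compensate(pulse, dt, n_comp=400)
```

Because the compensation amplitudes are solved with the same discrete integration operators used to check the result, the residual velocity and displacement of `record` are zero to machine precision.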
[Figure not reproduced: pseudo-velocity SRS of a 40 G, 11 ms terminal peak sawtooth, uncompensated (solid) and compensated (dashed), plotted as natural frequency (Hz) versus pseudo velocity (in/sec) on four-coordinate paper, with flow and fhi marked.]

2.3.3 Test Axes and Number of Shock Events - General Considerations.

Generally, the laboratory test axes and the number of exposures to the shock events should be determined based upon the LCEP. However, as a minimum requirement, subject the test item to a sufficient number of suitable shocks to meet the specified test conditions in both directions along each of three orthogonal axes. A suitable test shock for each direction of each axis is defined to be one classical shock pulse or complex transient pulse that yields a response spectrum within the tolerances of the required test spectrum over the specified frequency range, and that has an effective duration within the tolerance of TE as defined in Paragraph 4.2.2.2. In general, complex transient pulses generated by modern control systems will be symmetric, and the maximax positive and negative SRS levels will be the same. However, this must be verified for each shock event by computing the spectra for positive and negative maximum (i.e., maximum and minimum) accelerations, generally at Q = 10 and at 1/12-octave (or finer) frequency intervals. If the required test spectrum can be satisfied simultaneously in both directions along an axis (i.e., a symmetric pulse), one shock event will satisfy a single shock requirement for that axis in both directions.
If the requirement can only be satisfied in one direction (e.g., polarity considerations for classical shock inputs, non-symmetric complex transient pulses), it is permissible to change the test setup and impose an additional shock to satisfy the spectrum requirement in the other direction. This may be accomplished by either reversing the polarity of the test shock time history or reversing the test item orientation. The following guidelines may also be applied for either classical shock pulses or complex transient pulses.

a. For materiel that is likely to be exposed only rarely to a given shock event, perform a minimum of one shock in each direction of each axis. For shock conditions with a high potential for damage (e.g., large velocity change associated with the shock event, fragile test article), perform no more than one shock in each direction of each axis. Note that some high velocity shock tests with safety implications (i.e., crash hazard) may require two shocks in each direction of each axis.

b. For materiel likely to be exposed more frequently to a given shock event, when there are little available data to substantiate the number of shocks, apply a minimum of three shocks in each direction of each axis.

2.3.3.1 Special Considerations for Complex Transients.

There is no unique synthesized complex transient pulse satisfying a given SRS. If, in synthesizing a complex transient pulse from a given SRS, the resulting pulse either (1) exceeds the capability of the shock application system (usually in displacement or velocity), or (2) has a duration more than 20 percent longer than TE, some compromise in spectrum or duration tolerance may be necessary. It is unacceptable to decompose an SRS into a low frequency component (high velocity and displacement) and a high frequency component (low velocity and displacement) to meet a shock requirement.
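The positive/negative maximax verification described in Paragraph 2.3.3 can be sketched as follows. The oscillator responses here are stand-ins (in practice they come from the SDOF simulation at each 1/12-octave natural frequency), and the symmetry criterion shown, agreement of the two spectra within a dB threshold, is one illustrative implementation rather than the Method's wording.

```python
import numpy as np

def pos_neg_srs(responses):
    """Maximax positive and negative SRS from SDOF absolute-acceleration
    response time histories, one per natural frequency."""
    srs_pos = np.array([r.max() for r in responses])     # positive extremes
    srs_neg = np.array([(-r).max() for r in responses])  # negative extremes (magnitude)
    return srs_pos, srs_neg

def is_symmetric(srs_pos, srs_neg, tol_db=1.5):
    """Treat the pulse as symmetric if the two spectra agree within tol_db
    at every natural frequency (illustrative criterion)."""
    ratio_db = 20.0 * np.log10(srs_pos / srs_neg)
    return bool(np.all(np.abs(ratio_db) <= tol_db))

# Stand-in responses: a decaying sinusoid is nearly symmetric about zero.
t = np.linspace(0.0, 0.5, 5000)
responses = [np.exp(-3.0 * t) * np.sin(2 * np.pi * f * t) for f in (20.0, 40.0, 80.0)]
srs_pos, srs_neg = pos_neg_srs(responses)
```

For a decaying oscillation the first (positive) peak is always slightly larger than the first negative peak, so `srs_pos` exceeds `srs_neg` at every frequency while still satisfying the 1.5 dB symmetry threshold.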
Often an experienced analyst may be able to specify the input parameters to the complex transient pulse synthesis algorithm so as to satisfy a requirement for which the shock application system manufacturer's "optimum" solution will not. Refer to Paragraphs 4.2.2.2.c and 4.2.2.2.d.

2.4 Test Item Configuration. (See Part One, Paragraph 5.8.)

The configuration of the test item strongly affects test results. Use the anticipated configuration of the materiel in the life cycle environmental profile. As a minimum, consider the following configurations:

a. In a shipping/storage container or transit case.
b. Deployed in the service environment.

3. INFORMATION REQUIRED.

3.1 Pretest.

The following information is required to conduct a shock test.

a. General. Information listed in Part One, Paragraphs 5.7, 5.9, and 5.11 of this Standard, and in Part One, Annex A, Task 405.

b. Specific to this Method.
(1) Test fixture modal survey procedure.
(2) Test item/fixture modal survey procedure.
(3) Shock environment. Either:
(a) The predicted SRS or the complex shock pulse synthesis form (superposition of damped sinusoids, amplitude modulated sine waves, or other) specifying spectrum shape, peak spectrum values, spectrum break points, and pulse duration.
(b) The measured data selected for use in conjunction with the SRS synthesis technique outlined in the procedures. (If the SRS synthesis technique is used, ensure both the spectral shape and the synthesized shock duration are as specified.)
(c) The measured data that are input as a compensated waveform into an exciter/shock system under Time Waveform Replication (TWR). (See Method 525.2.)
(d) Specified test parameters for transit drop and fragility shock.
(4) Techniques used in the processing of the input and the response data.
(5) Note all details of the test validation procedures.

c.
Tailoring. Necessary variations in the basic test procedures to accommodate LCEP requirements and/or facility limitations.

3.2 During Test.

Collect the following information during conduct of the test.

a. General. Information listed in Part One, Paragraph 5.10, and in Part One, Annex A, Task 406 of this Standard.

b. Specific to this Method. Information related to failure criteria for test materiel under acceleration for the selected procedure or procedures. Pay close attention to any test item instrumentation, and the manner in which the information is received from the sensors. For large velocity shock, ensure instrumentation cabling does not add noise to measurements as a result of cable movement.

c. If measurement information is obtained during the test, examine the time histories and process them according to the procedures outlined in the test plan.

3.3 Post-Test.

The following information shall be included in the test report.

a. General. Information listed in Part One, Paragraph 5.13 of this Standard, and in Part One, Annex A, Task 406.

b. Specific to this Method.
(1) Duration of each exposure and number of exposures.
(2) Status of the test item after each visual examination.
(3) All response time histories and the information processed from these time histories. In general, the under-processed information, the absolute acceleration maximax SRS, and the pseudo-velocity SRS should be supplied as a function of single degree-of-freedom oscillator undamped natural frequency. In certain cases, the ESD and FS may be supplied.
(4) Test item and/or fixture modal analysis data and, if available, a mounted item/fixture modal analysis.
(5) Any deviation from the test plan or default severities (e.g., drop surface).

4. TEST PROCESS.

4.1 Test Facility.

Use a shock-producing apparatus capable of meeting the test conditions as determined according to the appropriate paragraphs of this Method.
The shock apparatus may be of the free fall, resilient rebound, non-resilient rebound, hydraulic, compressed gas, electrodynamic exciter, servo-hydraulic exciter, or other capable configuration. Careful attention needs to be paid to the time, amplitude, and frequency ranges over which the apparatus is capable of delivering a shock input. For example, electrodynamic exciters can suitably reproduce synthesized shock records from 5 Hz to 2000 Hz or above; however, a servo-hydraulic exciter may have only a DC to 500 Hz controllable frequency range. Procedures II and III require test apparatus capable of producing relatively large displacement. Procedure VII is a special test setup in that large containers impact a rigid barrier. Procedure VIII for catapult launch is best satisfied by application of two shock pulses with an intervening "transient vibration," for which TWR Method 525.2 may be appropriate. Generally, shock on either electrodynamic or servo-hydraulic exciters will be controlled using classical shock, SRS shock, or time waveform replication control software.

4.2 Controls.

4.2.1 Calibration.

The shock apparatus will be user-calibrated for conformance with the specified test requirement from the selected procedure, where the response measurements will be made with traceable, laboratory calibrated measurement devices. Conformance to test specifications may require use of a "calibration load" in the test setup. If a calibration load is required, it will generally be a mass/stiffness simulant of the test item. "Mass/stiffness simulant" implies that the modal dynamic characteristics of the test item are replicated to the extent possible in the simulant, particularly those modal dynamic characteristics that may interact with the modal dynamic configuration of the fixturing and/or the test device.
For calibration, produce two consecutive input applications to a calibration load that satisfy the test conditions outlined in Procedure I, II, III, V, or VIII. After processing the measured response data from the calibration load, and verifying conformance with the test specification tolerances, remove the calibration load and perform the shock test on the test item. Use of calibration loads during setup, to guard against excessive over-test or unproductive under-test, is highly recommended in all cases.

4.2.2 Tolerances.

For test validation, use the tolerances specified under each individual procedure, along with the guidelines provided below. In cases in which such tolerances cannot be met, establish achievable tolerances that are agreed to by the cognizant engineering authority and the customer prior to initiation of the test. In cases in which tolerances are established independently of the guidance provided below, establish these tolerances within the limitations of the specified measurement calibration, instrumentation, signal conditioning, and data analysis procedures.

4.2.2.1 Classical Pulses and Complex Transient Pulses - Time Domain.

For the classical pulses in this Method, tolerance limits on the time domain representation of the pulses are as specified in Figures 516.8-3 through 516.8-5. If a classical shock pulse is defined in lieu of more complex measured time history data, it must be demonstrated that SRS estimates of the classical shock pulse are within the tolerances established for the SRS estimates of the measured time history data. For complex transient pulses specified in the time domain, it is assumed that testing will be performed under TWR (Method 525.2), and that the tolerance guidance related to that Method will be used.

4.2.2.2 Complex Transient Pulses - SRS.
For a complex transient pulse specified by way of the maximax SRS, e.g., Figure 516.8-2, the frequency domain and time domain tolerances are specified in terms of a tolerance on the SRS amplitude values over a specified frequency bandwidth, and a tolerance on the effective pulse duration. If a series of shocks is performed, all acceleration maximax SRS shall be computed at the center frequencies of one-twelfth octave bands with a default damping quality factor Q of 10 (5 percent critical damping factor). Tolerances on the individual points (values associated with each one-twelfth octave center frequency) are to be within -1.5 dB and +3 dB over a minimum of 90 percent of the overall values in the frequency bandwidth from 10 Hz to 2000 Hz. For the remaining part of the frequency band, all SRS values are to be within -3 dB and +6 dB (this places a comparatively narrow tolerance on the major frequency band of interest, but allows a wider tolerance on 10 percent of this frequency band and a wider tolerance on the SRS above 2 kHz). Note that if an SRS is within tolerance for both the SRS-minimum and the SRS-maximum, the pulse is considered symmetric. While the reference criteria are often limited in bandwidth as a result of excitation equipment limitations, the analyst may require response data to be viewed through the bandwidth at which the SRS amplitude flattens. The duration of the complex transient is defined by Te and TE as discussed in Annex A, Paragraph 1.3, and the effective duration of the test transient shall be within ±20 percent of TE (i.e., between 0.8 TE and 1.2 TE). In addition, the following guidance is provided for use of (1) the pseudo-velocity response spectra, and (2) multiple measurements to specify a shock environment.

a. All tolerances are specified on the maximax acceleration SRS.
Any tolerances specified on the pseudo-velocity response spectra must be derived from the tolerances on the maximax acceleration SRS. (For three-coordinate paper, the pseudo-velocity tolerance can be determined by placing tolerance bands along the SRS acceleration axis, and then extracting the tolerance values along the ordinate for the pseudo-velocity SRS tolerance.) Note that SRS estimates scale directly in amplitude, i.e., multiplication of the time history by a factor translates directly into multiplication of the SRS estimate by the same factor.

b. The test tolerances are stated in terms of a single measurement tolerance, i.e., each individual laboratory test must fit within the tolerance bands to provide a satisfactory test. For an array of measurements defined in terms of a "zone" (Paragraph 6.1, reference b), amplitude tolerance may be specified in terms of an average of the measurements within a zone. However, this is, in effect, a relaxation of the single measurement tolerance, in that individual measurements may be substantially out of tolerance while the average is within tolerance. In general, when specifying test tolerances based on averaging for more than two measurements within a zone, the tolerance band should not exceed the 95/50 one-sided normal tolerance upper limit computed for the logarithmically transformed SRS estimates, nor be less than the mean minus 1.5 dB. Any use of zone tolerances and averaging must have support documentation prepared by a trained analyst. The tolerance on the duration of the test pulse, when more than one measurement is present, may be specified either as a percentage of the geometric mean of the measured durations (the nth root of the product of the n durations, i.e., TE = (TE1 × TE2 × ... × TEn)^(1/n)), or on some statistically based measure taking account of the variance of the effective durations.
For example, a 95/50 two-sided normal tolerance limit will provide the upper and lower limits of duration within which it is expected that 95 percent of future measurements will fall, with 50 percent confidence. Ten percent of the difference between these limits might be a reasonable duration tolerance. (For further possible ways of statistically defining the duration tolerance, see Annex C.)

c. If the test item has no significant low frequency modal response, it is permissible to allow the low frequency portion of the SRS to fall out of tolerance in order to satisfy the high frequency portion of the SRS, provided the high frequency portion begins at least one octave below the first natural mode frequency, f1, of the mounted test item. Recall that fmin was defined to be one octave below f1. The reference pulse synthesis should be conducted such that as much of the spectrum below fmin remains in tolerance as possible, without exceeding the specified duration TE.

d. If the test item has significant low frequency modal response, it is permissible to allow the duration of the complex transient pulse to fall outside of the TE range (provided in Table 516.8-III) in order to satisfy the low frequency portion of the SRS. The effective duration contained in Table 516.8-III may be increased by as much as 1/(2 fmin) in addition to TE (i.e., up to TE + 1/(2 fmin)) in order to bring the low frequency portion of the SRS within tolerance. If the duration of the complex transient pulse must exceed TE + 1/(2 fmin) in order to have the low frequency portion of the SRS within tolerance, use a new shock procedure.

4.3 Test Interruption.

Test interruptions can result from two or more situations, one being malfunction of the shock apparatus or associated laboratory test support equipment. The second type of test interruption results from malfunction of the test item itself during operational checks.
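Returning to the tolerance rules of Paragraph 4.2.2.2, a hedged sketch: the amplitude check implements the -1.5/+3 dB over at-least-90-percent rule (holding the remainder to -3/+6 dB is one reasonable reading of the "remaining part" clause), the duration statistic is the nth-root-of-the-product mean with the ±20 percent band, and the duration bound applies paragraph d's TE + 1/(2 fmin) limit using the Table 516.8-III defaults. The coarse five-point frequency grid and all function names are illustrative, not from the standard.

```python
import numpy as np

def srs_within_tolerance(fn, srs_test, srs_ref):
    """Default amplitude rule of 4.2.2.2: within -1.5/+3 dB at >= 90 percent
    of the points in 10-2000 Hz, and within -3/+6 dB at all remaining points
    (one reasonable reading; a real grid would be 1/12-octave spaced)."""
    dev_db = 20.0 * np.log10(srs_test / srs_ref)
    band = (fn >= 10.0) & (fn <= 2000.0)
    tight = (dev_db[band] >= -1.5) & (dev_db[band] <= 3.0)
    loose = (dev_db[band] >= -3.0) & (dev_db[band] <= 6.0)
    return bool((tight.mean() >= 0.90) and np.all(loose))

def duration_band(te_measured):
    """Nominal TE as the nth root of the product of the measured durations
    (computed in the log domain), with the +/-20 percent tolerance band."""
    te = float(np.exp(np.mean(np.log(te_measured))))
    return 0.8 * te, te, 1.2 * te

def max_complex_pulse_duration(f_min_hz, f_co_hz):
    """Paragraph d limit: TE = 2/Fco, extendable by at most 1/(2*fmin)."""
    return 2.0 / f_co_hz + 1.0 / (2.0 * f_min_hz)

fn = np.array([10.0, 100.0, 500.0, 1000.0, 2000.0])
ref = np.array([20.0, 40.0, 40.0, 40.0, 40.0])
ok = srs_within_tolerance(fn, ref * 10.0 ** (1.0 / 20.0), ref)  # +1 dB: passes
lo, te, hi = duration_band([0.040, 0.050, 0.045])               # te ~ 0.0448 s
limit = max_complex_pulse_duration(10.0, 45.0)                  # 2/45 + 1/20 s
```

With the table defaults (fmin = 10 Hz, Fco = 45 Hz), the extended duration limit works out to roughly 94 ms.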
4.3.1 Interruption Due to Laboratory Equipment Malfunction.

a. General. See Part One, Paragraph 5.11 of this Standard.

b. Specific to this Method. Interruption of a shock test sequence is unlikely to generate any adverse effects. Normally, continue the test from the point of interruption.

4.3.2 Interruption Due to Test Item Operation Failure.

Failure of the test item(s) to function as required during operational checks presents a situation with several possible options.

a. The preferable option is to replace the test item with a "new" one and restart from Step 1.

b. A second option is to repair the failed or non-functioning component or assembly of the test item with one that functions as intended, and restart the entire test from Step 1.

NOTE: When evaluating failure interruptions, consider prior testing on the same test item, and the consequences of such.

4.4 Instrumentation.

In general, acceleration will be the quantity measured to meet a specification, with care taken to ensure acceleration measurements can be made that provide meaningful data. Always give special consideration to the measurement instrument amplitude and frequency range specifications in order to satisfy the calibration, measurement, and analysis requirements. With regard to measurement technology, accelerometers, strain gages, and laser Doppler vibrometers are commonly used measurement devices. In processing shock data, it is important to be able to detect anomalies. For example, it is well documented that accelerometers may offset or zero-shift during mechanical shock, pyroshock, and ballistic shock (Paragraph 6.1, references m, n, and s). Additional discussion of this topic is found in the pyroshock and ballistic shock methods.
A part of this detection is the integration of the acceleration amplitude time history to determine if it has the characteristics of a physically realizable velocity trace. For mechanical shock, various accelerometers are readily available that may or may not contain mechanical isolation. Transducer performance continues to improve with time; however, inventories across all laboratories may not be of the latest generation, making detailed calibrations critical to understanding individual transducer performance.

a. Accelerometers. Ensure the following:

(1) Amplitude Linearity: It is desired to have amplitude linearity within 10 percent over the entire operating range of the device. Since accelerometers (mechanically isolated or not) may show zero-shift (Paragraph 6.1, reference o), there is risk in not characterizing these devices over their entire amplitude range. To address these possible zero-shifts, high pass filtering (or another data correction technique) may be required. Such additional post-test correction techniques increase the risk of distorting the measured shock environment. Consider the following in transducer selection:

(a) It is recognized that accelerometers may have both non-linear amplification and non-linear frequency content below 10,000 Hz (Paragraph 6.1, reference o). In order to understand the non-linear amplification and frequency characteristics, it is recommended that shock linearity evaluations be conducted at intervals of 20 to 30 percent of the rated amplitude range (inclusive of the maximum rated range) of the accelerometer, to identify the actual amplitude and frequency linearity characteristics and the usable amplitude and frequency range. If a shock based calibration technique is employed, the shock pulse duration for the evaluation is calculated as:

TD = 1/(2 fmax)

where TD is the duration (baseline) of the acceleration pulse, and fmax is the maximum specified frequency range of the accelerometer.
For mechanical shock, the default value for fmax is 10,000 Hz.

(b) For cases in which response below 2 Hz is desired, a piezoresistive accelerometer measurement is required.

(2) Frequency Response: A flat response within ±5 percent across the frequency range of interest is required. Since it is generally not practical or cost effective to conduct a series of varying pulse width shock tests to characterize frequency response, a vibration calibration is typically employed. For the case of a high range accelerometer with low output, there may be signal-to-noise ratio (SNR) issues associated with a low level vibration calibration. In such cases, a degree of engineering judgment will be required in the evaluation of frequency response, with a revised requirement for the flat frequency response to be within ±1 dB across the frequency range of interest.

(3) Accelerometer Sensitivity: The sensitivity of a shock accelerometer is expected to have some variance over its large amplitude dynamic range.

(a) If the sensitivity is based upon a low amplitude vibration calibration, it is critical that the linearity characteristics of the shock based "Amplitude Linearity" evaluation be understood, such that an amplitude measurement uncertainty is clearly defined.

(b) Ideally, vibration calibration and shock amplitude linearity results should agree within 10 percent over the amplitude range of interest for a given test.

(4) Transverse sensitivity should be less than or equal to 7 percent.

(5) The measurement device and its mounting will be compatible with the requirements and guidelines provided in Paragraph 6.1, reference a.
(6) Piezoelectric or piezoresistive accelerometers may be used for mechanical shock in scenarios in which levels are known to be within the established (verified through calibration) operating range of the transducer, thereby avoiding non-linear amplification and frequency content.

b. Other Measurement Devices.

(1) Any other measurement devices used to collect data must be demonstrated to be consistent with the requirements of the test, in particular, the calibration and tolerance information provided in paragraph 4.2.

(2) Signal Conditioning. Use only signal conditioning that is compatible with the instrumentation requirements of the test, and is compatible with the requirements and guidelines provided in paragraph 6.1, reference a. In particular, filtering of the analog voltage signals will be consistent with the time history response requirements (in general, demonstrable linearity within ± 5 degrees of phase throughout the desired frequency domain of response), and the filtering will be so configured that anomalous acceleration data caused by clipping will not be misinterpreted as response data. In particular, use extreme care in filtering the acceleration signals at the amplifier output. Never filter the signal into the amplifier; doing so risks filtering out erroneous measurement data and losing the ability to detect that the data are erroneous. The signal from the signal conditioning must be anti-alias filtered before digitizing, as defined in Annex A, paragraph 1.1.

4.5 Data Analysis.

a. In subsequent processing of the data, use any additional digital filtering that is compatible with the anti-alias analog filtering. In particular, additional digital filtering must maintain phase linearity for processing of shock time histories. Re-sampling for SRS computational error control is permitted using standard re-sampling algorithms.

b. Analysis procedures will be in accordance with those requirements and guidelines provided in paragraph 6.1, reference a.
In particular, validate the shock acceleration amplitude time histories according to the procedures in paragraph 6.1, reference a. Use integration of time histories to detect any anomalies in the measurement system, e.g., cable breakage, amplifier slew rate exceedance, clipped data, unexplained accelerometer offset, etc., before processing the response time histories. If anomalies are detected, discard the invalid measured response time history. For unique and highly valued measured data, a highly trained analyst may be consulted concerning the removal of certain anomalies; generally, however, this will leave information that is biased by the technique used to remove the anomaly.

4.6 Test Execution.

4.6.1 Preparation for Test.

Test preparation details will be procedure specific, as discussed in the previous paragraphs. Ensure that all test specific equipment such as fixturing, environmental conditioning equipment, instrumentation, and acquisition equipment has been properly calibrated, validated, and documented.

4.6.1.1 Preliminary Guidelines.

Prior to initiating any testing, review the pretest information in the test plan to determine test details (e.g., procedure, calibration load, test item configuration, measurement configuration, shock level, shock duration, climatic conditions, and number of shocks to be applied, as well as the information in paragraph 3.1 above). Note all details of the test validation procedures.

4.6.1.2 Pretest Checkout.

After calibration of the excitation input device and prior to conducting the test, perform a pretest checkout of the test item at standard ambient conditions (Part One, paragraph 5.1.a) to provide baseline data.
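The integration-based validation described in paragraph 4.5.b can be sketched in a few lines. This is an illustrative check only (function names and the residual-velocity threshold are my own assumptions, not requirements of the standard): a physically realizable shock trace should integrate to a velocity that returns near zero, while a trace with offset or clipping will not.

```python
# Illustrative sketch: trapezoidal integration of an acceleration time
# history to screen for anomalies (e.g., DC offset / zero-shift).
# The 5 percent residual limit below is an assumed screening value.

def integrate_accel(accel_g, dt, g=9.81):
    """Integrate acceleration (in G) to velocity (m/s) via the trapezoid rule."""
    vel = [0.0]
    for a0, a1 in zip(accel_g, accel_g[1:]):
        vel.append(vel[-1] + 0.5 * (a0 + a1) * g * dt)
    return vel

def looks_physical(accel_g, dt, residual_limit=0.05):
    """Flag a trace whose terminal velocity does not return near zero,
    a classic symptom of offset or clipping."""
    vel = integrate_accel(accel_g, dt)
    peak = max(abs(v) for v in vel) or 1.0
    return abs(vel[-1]) / peak <= residual_limit

# A symmetric +/- pulse integrates back to ~0 velocity; a biased one does not.
good = [0.0, 10.0, 0.0, -10.0, 0.0]
bad = [0.0, 10.0, 5.0, 5.0, 5.0]
print(looks_physical(good, 1e-4), looks_physical(bad, 1e-4))
```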
Conduct the checkout as follows:

Step 1  Conduct a complete visual examination of the test item with special attention to stress areas or areas identified as being particularly susceptible to damage, and document the results.
Step 2  Where applicable, install the test item in its test fixture.
Step 3  Conduct a test item operational check in accordance with the approved test plan, and document the results for compliance with Part One, paragraph 5.15.
Step 4  If the test item operates satisfactorily, proceed to the first test. If not, resolve the problem and restart at Step 1.

4.6.1.3 Procedures' Overview.

Paragraphs 4.6.2 through 4.6.9 provide the basis for collecting the necessary information concerning the system under shock. For failure analysis purposes, in addition to the guidance provided in Part One, paragraph 5.14, each procedure contains information to assist in the evaluation of the test results. Analyze any failure of a test item to meet the requirements of the system specifications, and consider related information such as that in paragraphs 4.6.2 through 4.6.9. Any deviations from the test or test tolerances must be approved by the appropriate test authority, and must be clearly documented in the test plan and final report.

4.6.2 Functional Shock (Procedure I).

The intent of this test is to disclose materiel malfunction that may result from shocks experienced by materiel during use in the field. Even though materiel may have successfully withstood even more severe shocks during shipping or transit shock tests, there are differences in support and attachment methods, and in functional checking requirements, that make this test necessary. Tailoring of the test is required when data are available, can be measured, or can be estimated from related data using accepted dynamic scaling techniques (for scaling guidance, see Method 525.2).
When measured field data are not available for tailoring, use the information in Figure 516.8-2 and the accompanying Table 516.8-III to define the shock test system input SRS, or Tables 516.8-IV through VI for classical pulse definitions. In the calibration procedure, the calibration load will be subject to a properly compensated complex waveform in accordance with the SRS described above for electrodynamic or servo-hydraulic shock testing. In general, tests using classical pulses, e.g., terminal peak sawtooth, etc., are unacceptable unless it can be demonstrated during tailoring that the field shock environment time trace approximates such a form. If all other testing resources have been exhausted, it will be permissible to use the information in Tables 516.8-IV through VI for employing a classical pulse. However, such testing must be performed in both a positive and negative direction to assure meeting the spectrum requirements on Figure 516.8-2 in both the positive and negative direction.

Table 516.8-IV. Terminal peak sawtooth default test parameters for Procedure I - Functional Test (refer to Figure 516.8-3). Peak value Am (G-Pk) and pulse duration TD (ms):

  Flight Vehicle Materiel(1):                 20 G, 11 ms
  Weapon Launch(1,2) / Captive Carry:         30 G, 11 ms
  Ground Materiel(1,3):                       40 G, 11 ms

Note 1. For materiel that is shock mounted or weighing more than 136 kg (300 lbs), an 11 ms half-sine pulse of such amplitude that it yields a velocity equivalent to the default terminal peak sawtooth may be employed. Equivalent velocity relationship: Am(half-sine) = (pi/4) Am(sawtooth).
Note 2. Launch shock is a special case of functional shock (see paragraph 6.1k).
Note 3. For materiel mounted only in trucks and semi-trailers, use a 20 G peak value.
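The Note 1 equivalence follows from equating the velocity change of the two pulse shapes over the same duration. The sketch below (function names are mine) confirms numerically that a half-sine at (pi/4) of the sawtooth amplitude carries the same velocity change:

```python
import math

# Illustrative check of Table 516.8-IV, Note 1:
# A_m(half-sine) = (pi/4) * A_m(sawtooth) gives equal velocity change
# for pulses of the same duration. Helper names are assumptions.

def halfsine_equiv_amplitude(sawtooth_g: float) -> float:
    return (math.pi / 4.0) * sawtooth_g

def dv_sawtooth(a_g, t_d, g=9.81):
    # terminal peak sawtooth: dV = A*g*T_D / 2
    return 0.5 * a_g * g * t_d

def dv_halfsine(a_g, t_d, g=9.81):
    # half-sine: dV = (2/pi)*A*g*T_D
    return (2.0 / math.pi) * a_g * g * t_d

a_hs = halfsine_equiv_amplitude(40.0)   # 40 G ground materiel default
print(round(a_hs, 2),
      round(dv_sawtooth(40.0, 0.011), 3),
      round(dv_halfsine(a_hs, 0.011), 3))
```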
A special category of functional shock has been established for Navy high speed craft (HSC). Tables 516.8-V and 516.8-VI document two standardized laboratory functional shock test requirements intended to mitigate the risk of malfunction or failure of hard mounted electrical and electronic equipment in HSC due to wave impacts (paragraph 6.1, reference p). These test requirements are applicable for equipment with internal vibration mounts, but not applicable for equipment on shock mounts (paragraph 6.1, reference q) or for shock isolated seats (paragraph 6.1, reference r). Two types of half-sine shock tests are required to minimize the risk of equipment malfunction or failure in HSC. The first test (HSC-I) is to be repeated three times in each direction of the three mutually orthogonal axes. The second test (HSC-II) employs a lower severity shock pulse that is to be repeated 800 times in each direction per axis, with the nominal spacing between pulses set at 1-second intervals (in the event the previous transient has not completely decayed within the nominal 1 second, contact the proper test authority for further guidance). HSC equipment orientation during testing should represent realistic conditions in which the equipment may experience wave impact shock. Dominant wave impact shock loads occur only in craft axes +Z (vertical up), -X (aft), and +/- Y (port/starboard). Equipment that can be installed in any orientation should be tested in positive and negative test orientations for all three equipment axes. The +X and -Z craft orientations should be omitted during Procedure I testing for equipment installed only in a vertical up orientation.

Table 516.8-V. High Speed Craft - Standardized Requirements(1) (refer to Figure 516.8-5).

  Test(2)    Half-Sine Pulse Amplitude    Duration
  HSC-I      20 G                         23 ms
  HSC-II     5 G                          23 ms

Note 1. The half-sine classical pulse specified for HSC may not be substituted by an SRS equivalent complex pulse.
Note 2.
For equipment mounted ONLY in the Z (vertical up) direction, with the exception of equipment mounted on a mast, arch, or cabin top, HSC-I X (negative aft) and +/- Y (port/starboard) axis amplitudes may be reduced to 10 G.

For unique situations (e.g., high value or fragile components) where general cross platform use at any location is not anticipated, the 20 G HSC-I default amplitude may be modified as defined in Table 516.8-VI (the pulse duration will remain at 23 ms).

Table 516.8-VI. Limited Application Requirements by Craft Size(1).

  Craft Size                            Location
  Length (ft)    Weight (Klbs)          LCG      Coxswain    Bow
  65-85          105-160                10 G     15 G        20 G
  40-70          35-70                  10 G     15 G        15 G
  35-40          14-25                  15 G     15 G        20 G

Note 1. The half-sine classical pulse specified for HSC may not be substituted by an SRS equivalent complex pulse.

4.6.2.1 Test Controls - Functional Shock (Procedure I).

Table 516.8-IV provides general classical shock references for functional shock. Figure 516.8-2 provides predicted input SRS for the functional shock test for use when measured data are not available, and when the test item configuration falls into one of two specified categories: (1) flight equipment, or (2) ground equipment. The durations Te and TE are defined in Annex A, paragraph 1.3, and are specified in Table 516.8-III. Tables 516.8-V and VI provide classical shock defaults for the special case of HSC.

4.6.2.2 Test Tolerances - Functional Shock (Procedure I).

For complex transients from measured data, ensure test tolerances are consistent with the general guidelines provided in paragraph 4.2.2 with respect to the information provided in Table 516.8-III and accompanying Figure 516.8-2. For classical pulse testing, the test tolerances are specified on Figures 516.8-3 through 516.8-5 with respect to the default information in Tables 516.8-IV through VI.
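For planning purposes, the HSC-II pulse count adds up quickly: 800 pulses per direction, two directions per axis, three axes, at the nominal 1-second spacing. The arithmetic is sketched below (this is my own planning aid, not part of the standard; it ignores setup, checkout, and any extra decay time the test authority may direct):

```python
# Rough planning sketch (assumption, not from the standard): nominal
# machine time for the full HSC-II sequence of Table 516.8-V.

PULSES_PER_DIRECTION = 800
DIRECTIONS_PER_AXIS = 2
AXES = 3
SPACING_S = 1.0   # nominal 1-second spacing between pulses

total_pulses = PULSES_PER_DIRECTION * DIRECTIONS_PER_AXIS * AXES
total_hours = total_pulses * SPACING_S / 3600.0
print(total_pulses, round(total_hours, 2))
```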
4.6.2.3 Test Procedure - Functional Shock (Procedure I).

Step 1  Select the test conditions and calibrate the shock test apparatus as follows:
  a. Select accelerometers and analysis techniques that meet or exceed the criteria outlined in paragraph 6.1, reference a.
  b. Mount the calibration load to the shock test apparatus in a configuration similar to that of the test item. If the materiel is normally mounted on vibration/shock isolators, ensure the corresponding test item isolators are functional during the test. If the shock test apparatus input waveform is to be compensated via an input/output impulse response function for waveform control, exercise care with the details of the calibration configuration and the subsequent processing of the data.
  c. Perform calibration shocks until two consecutive shock applications to the calibration load produce waveforms that meet or exceed the derived test conditions consistent with the test tolerances in paragraph 4.6.2.2 for at least the test direction of one axis.
  d. Remove the calibration load and install the test item on the shock apparatus.
Step 2  Perform a pre-shock operational check of the test item. If the test item operates satisfactorily, proceed to Step 3. If not, resolve the problems and repeat this step.
Step 3  Subject the test item (in its operational mode) to the test shock input.
Step 4  Record the data necessary to show that the shock met or exceeded the desired test levels within the specified tolerances of paragraph 4.6.2.2. This includes test setup photos, test logs, and photos of actual shocks from the transient recorder or storage oscilloscope. For shock and vibration isolated assemblies inherent within the test item, make measurements and/or inspections to assure these assemblies did not impact adjacent assemblies.
If required, record the data to show that the materiel functions satisfactorily during shock.
Step 5  Perform a post-test operational check of the test item. Record performance data. If the test item does not operate satisfactorily, follow the guidance in paragraph 4.3.2 for test item failure.
Step 6  Repeat Steps 2 through 5 two additional times if the SRS form of specification is used and the synthesized pulse is symmetric (yielding a total of three shocks in each orthogonal axis). If the SRS based time history is not symmetric, shocks in both positive and negative polarities are required (yielding a total of six shocks in each orthogonal axis). If the classical shock form of specification is used, subject the test item to both a positive and a negative input pulse (a total of six shocks in each mutually orthogonal axis).
Step 7  Perform a post-test operational check on the test item. Record performance data, document the test sequence, and see paragraph 5 for analysis of results.

4.6.3 Transportation Shock (Procedure II).

The Transportation Shock test procedure is representative of the repetitive low amplitude shock loads that occur during logistical or tactical materiel transportation. Vibration testing excludes transient events; Procedure II therefore complements vibration testing to sequentially represent the loads that may occur. The default testing configuration is a packaged or unpackaged test item(s) in a non-operational configuration. The test procedure may also be applied to evaluate the influence of shock loading on a cargo restraint system, or on an operational test item if required. The test plan should define the operational mode and whether testing is conducted in commercial manufacturer packaging, as fielded materiel, or as a bare item secured or installed on the transport platform. A default classical terminal peak sawtooth shock test sequence is defined in Table 516.8-VII.
Alternatively, the shock waveform applied can be tailored with measured data and implemented via shock replication techniques such as Method 525.2, Time Waveform Replication. Transportation shock tests can frequently be completed following a vibration test using an electrodynamic or servo-hydraulic test system and the same test setup configuration.

Table 516.8-VII. Procedure II - Transportation shock test sequence(1,2,3).

  On Road (5000 km)(4)                        Off Road (1000 km)(4)
  Terminal Peak Sawtooth, duration 11 ms      Terminal Peak Sawtooth, duration 5 ms
  Amplitude (G-Pk)   Number of Shocks         Amplitude (G-Pk)   Number of Shocks
  5.1                42                       10.2               42
  6.4                21                       12.8               21
  7.6                3                        15.2               3

Note 1: The shocks set out in Table 516.8-VII must always be carried out together with ground transportation vibration testing as specified in Method 514.8, Category 4 and/or Category 20.
Note 2: The above tabulated values may be considered for both restrained cargo and installed materiel on wheeled and tracked vehicles. Transportation shock associated with two-wheeled trailers may exceed the off-road levels as defined.
Note 3: The shock test schedule set out in Table 516.8-VII can be undertaken using either terminal peak sawtooth pulses applied in each sense of each orthogonal axis, or a synthesis based on the corresponding SRS that encompasses both senses of each axis.
Note 4: The above numbers of shocks are equivalent to the following distances: a) on-road vehicles: 5000 km; b) off-road vehicles: 1000 km. If greater distances are required, more shocks must be applied in multiples of the figures above.

4.6.3.1 Test Controls - Transportation Shock (Procedure II).

Table 516.8-VII provides the transportation shock criteria for use when measured data are not available. The durations Te and TE for SRS based waveform synthesis are defined in Annex A, paragraph 1.3.
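The Note 4 scaling for distances beyond the baseline can be sketched as follows. The data layout and helper name are mine; the amplitudes and counts come from Table 516.8-VII, and whole multiples are applied per the note:

```python
import math

# Illustrative scaling of Table 516.8-VII per Note 4: for distances beyond
# the baseline (5000 km on-road / 1000 km off-road), apply the tabulated
# shock counts in whole multiples. Structure below is an assumption.

ON_ROAD = {"baseline_km": 5000, "shocks": {5.1: 42, 6.4: 21, 7.6: 3}}
OFF_ROAD = {"baseline_km": 1000, "shocks": {10.2: 42, 12.8: 21, 15.2: 3}}

def scaled_counts(profile, distance_km):
    """Shock count per amplitude (G-Pk), rounded up to a whole multiple
    of the baseline distance."""
    mult = math.ceil(distance_km / profile["baseline_km"])
    return {amp: n * mult for amp, n in profile["shocks"].items()}

print(scaled_counts(ON_ROAD, 12_000))   # 12,000 km -> 3x baseline counts
```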
Table 516.8-VII is representative of wheeled ground vehicles, but is not characteristic of specific vehicles or a specific transportation scenario. The default shock severities shown in Table 516.8-VII have application when the purpose of the test is to address scenarios in which damage is dependent upon multiple cycle events. The levels in Table 516.8-VII were derived from the classical half-sine pulses defined in paragraph 6.1, reference h. The classical half-sine pulses were converted to terminal peak sawtooth pulses with equivalent velocities. The terminal peak sawtooth was selected due to its relatively flat SRS characteristics above the roll-off frequency. In the event field data are available, tailor the test per the LCEP.

4.6.3.2 Test Tolerances - Transportation Shock (Procedure II).

For complex transients from measured data, ensure test tolerances are consistent with the general guidelines provided in paragraph 4.2.2. For classical pulse testing, ensure the test tolerances specified in Figure 516.8-3, with respect to the information provided in Table 516.8-VII, are satisfied.

4.6.3.3 Test Procedure - Transportation Shock (Procedure II).

Generally, either the primary road or the secondary/off road shock sequence is performed, not both. Complete testing at all applicable shock amplitudes in Table 516.8-VII for the number of shocks indicated, or as defined in the test plan. The lowest amplitude shock tests are typically performed first, followed by the higher amplitude tests. If testing is required in more than one axis, repeat the procedure below for each axis and sequence of shock amplitudes.

Step 1  Calibrate the test equipment as follows:
  a. Mount the calibration load to the test equipment and fixture in a configuration similar to that of the actual test item. The test setup and fixture should prevent distortion of the shock waveform.
  b.
Perform calibration shocks until two consecutive shock applications reproduce waveforms that are within the test tolerance specification.
  c. For electrodynamic test systems or other equipment with a stored drive signal, repeat the calibration at the other required test amplitudes and store the drive signals. Allow sufficient time between shocks for the previous shock event to fully decay.
Step 2  Remove the calibration load and install the test item on the test equipment.
Step 3  Perform a pre-test inspection of the test item, and an operational test if required.
Step 4  Subject the test item to the shock test sequence, and perform intermediate inspections or checkouts as required between shock events. Allow sufficient time between shocks for the previous shock event to fully decay.
Step 5  If testing is required at a different amplitude, return to Step 3; if the sequence is complete, proceed to Step 6.
Step 6  Perform a post-test inspection of the test item, and an operational test if required. Document the results, including plots of response waveforms and any pre- or post-shock anomalies. See paragraph 5 for analysis of results.

4.6.4 Fragility (Procedure III).

The intent of this test is to determine (1) the maximum level of input to which the materiel can be exposed and still continue to function as required by its operational guide, without damage to the configuration; or (2) the minimum level of input above which exposure to a higher level of input will most likely result in either functional failure or configuration damage. Determination of the fragility level is accomplished by starting at a benign level of shock defined by a single parameter, e.g., G-level or velocity change, and increasing that single parameter value applied to the test item (base input model) until:
a. Failure of the test item occurs.
b. A predefined test objective is reached without failure of the test item.
c. A critical level of shock is reached that indicates failure is certain to occur at a higher level of shock.

It is important in performing a fragility test to recognize that "level of input" must correlate in some positive way with the potential for materiel degradation. It is well recognized that materiel stress is directly related to materiel velocity such as might occur during vibration/shock (see paragraph 6, references e and f) and, in particular, to the change in materiel velocity, denoted deltaV. Pulse duration that relates to the fundamental mode of vibration of the materiel is a factor in materiel degradation. For a drop machine with a trapezoidal pulse program, there is a simple relationship among the variables: pulse maximum amplitude Am (G-pk), pulse velocity change deltaV [m/s (in/s)], pulse duration TD (seconds), and g = 9.81 m/s^2 (386.09 in/s^2), as provided by the following formulas for the trapezoidal pulse in Figure 516.8-4 (the rise time TR and fall time TF should be kept to the minimum duration possible, to minimize the resulting increase in velocity not associated with duration TD):

Am*g = deltaV / TD   (i.e., deltaV = Am*g*TD),   deltaV = 2*sqrt(2gh),   and   TD = 2*sqrt(2gh) / (Am*g)

(technically, deltaV = Am*g*(TD - 0.5*TR - 0.5*TF), which is approximately Am*g*TD for TD much greater than TR and TF).

It is clear that if deltaV is to be increased incrementally until failure has occurred or is imminent, it is possible to increase TD, Am, or both. Since TD relates to the period of the first mounted natural frequency of the materiel (and generally failure will occur when the materiel is excited at its lower mounted natural frequencies), it is required that the test be conducted by increasing the peak amplitude, Am, alone, leaving TD fixed.
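The trapezoidal-pulse relationships above can be sketched directly. The helper names and the 0.5 m example height below are my own; the formulas are the 100 percent rebound relations just given:

```python
import math

# Illustrative sketch of the trapezoidal-pulse relations in paragraph 4.6.4
# for a drop machine with 100 percent rebound. Names are assumptions.

G = 9.81  # m/s^2

def delta_v(h):
    """Velocity change for drop height h (m), 100% rebound: dV = 2*sqrt(2*g*h)."""
    return 2.0 * math.sqrt(2.0 * G * h)

def pulse_duration(h, a_m):
    """T_D (s) for peak amplitude a_m (G): T_D = dV / (A_m * g)."""
    return delta_v(h) / (a_m * G)

def peak_amplitude(h, t_d):
    """A_m (G) given drop height and fixed duration: A_m = dV / (g * T_D)."""
    return delta_v(h) / (G * t_d)

h = 0.5   # assumed 0.5 m design drop height, for illustration only
print(round(delta_v(h), 3), round(pulse_duration(h, 30.0) * 1e3, 2))
```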
Figure 516.8-7 provides the 100 percent rebound deltaV versus drop height h, based upon the simple relationship h = (deltaV)^2 / (8g). Holding TD fixed and incrementally increasing deltaV provides a direct relationship between Am and deltaV, with TD serving as a scale factor.

Figure 516.8-7. Fragility Shock Trapezoidal Pulse: velocity change versus drop height.

Table 516.8-VIII. Fragility Shock Trapezoidal pulse parameters (refer to Figure 516.8-4).

  Test        Peak Value(1) (Am), G's     Nominal Duration(2) (TD), sec
  Fragility   10-50                       TD = 2*sqrt(2gh) / (Am*g)

Note 1: Am is dependent upon drop height "h." A typical range is provided (refer to paragraph 4.6.4).
Note 2: "h" units: m (in); g = 9.81 m/s^2 (386.09 in/sec^2).

For a complex transient, there is no simple relationship between peak acceleration, pulse duration, and change in velocity. It is assumed here that for a complex transient, velocity change is related to a significant difference between successive instantaneous peaks. (This can be determined with some effort by selecting positive and negative thresholds for which a few, e.g., five or fewer, positive and negative peaks alternate over suitably short periods of time.) In this case, the change in velocity is not so much an instantaneous change upon impact, but may be a successive set of changes occurring at significant periods lower than those of the acceleration. (Recall that velocity is a 1/(2*pi*f) scaling of the acceleration frequency domain information.) For test materiel where a degree of precision is needed in specifying the level of input and correlating the shock effects on the materiel with the level of input, simple base input SDOF modeling is suggested, with subsequent integration of the equations of motion to determine the relative velocity and displacement.
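The 1/(2*pi*f) scaling noted above is also what converts a maximax acceleration SRS point into pseudo-velocity. A minimal sketch (the SRS values in the dict are made-up illustration data, and the function name is mine):

```python
import math

# Illustrative conversion of maximax acceleration SRS points (G) to
# pseudo-velocity (m/s) via PV = A*g / (2*pi*f). Example values assumed.

def pseudo_velocity(accel_g, freq_hz, g=9.81):
    """Pseudo-velocity (m/s) for one SRS point: PV = A*g / (2*pi*f)."""
    return accel_g * g / (2.0 * math.pi * freq_hz)

srs = {100.0: 40.0, 1000.0: 75.0}   # {frequency Hz: acceleration G}, assumed
for f, a in srs.items():
    print(f, round(pseudo_velocity(a, f), 4))
```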
Simply scaling the peak acceleration level (in effect the square-root of the energy) of the pulse likewise scales the velocity change directly for a linear system. The same relationship between the variables holds, except now a "distribution" of velocity change in the complex transient must be considered, as opposed to a single large velocity change as in the case of the trapezoidal pulse.

Paragraph 4.6.4.c above implies that an analysis of the materiel has been completed prior to testing, that critical elements have been identified with their "stress thresholds," and that a failure model of the materiel relative to the shock input level has been developed. In addition, during the test, the "stress thresholds" of these critical elements can be monitored and input to a failure model to predict failure at a given shock input level. In general, such input to the materiel produces large velocities and large changes in velocity. If the large velocity/velocity change exceeds that available on standard electrodynamic and/or servo-hydraulic test equipment, for this procedure the classical trapezoidal pulse may be used on properly calibrated drop machines. However, if the large velocity/velocity change is compatible with the capabilities of electrodynamic and/or servo-hydraulic test equipment, consider tailoring the shock according to a complex transient for application on the electrodynamic or servo-hydraulic test equipment. Using a trapezoidal pulse on electrodynamic and/or servo-hydraulic test equipment is acceptable (accounting for pre- and post-exciter positioning) if there are no available data providing shock input information that is tailorable to a complex transient.
In summary, there is a single parameter (peak amplitude of the shock input) that defines the fragility level, holding the duration of the shock, TD, approximately constant. In the case of SRS synthesis, maximum velocity change is not as well defined, nor as easily controllable, as for the classical trapezoidal pulse. Tailoring of the test is required when data are available, can be measured, or can be estimated from related data using accepted dynamic scaling techniques. An inherent assumption in the fragility test is that damage potential increases linearly with input shock level. If this is not the case, other test procedures may need to be used for establishing materiel fragility levels.

4.6.4.1 Test Controls - Fragility (Procedure III).

a. Specify the duration of the shock, TD, as it relates to the first fundamental mode of the materiel. Select a design drop height, h, based on measurement of the materiel's shipping environment, or from Transit Drop Tables 516.8-IX through 516.8-XI as appropriate to the deployment environment when measured data are unavailable. (A design drop height is the height from which the materiel might be dropped in its shipping configuration and be expected to survive.) The maximum test item velocity change may then be determined by using the following relationship for 100 percent rebound:

deltaV = 2*sqrt(2gh)

where:
deltaV = maximum product velocity change, m/s (in/s) (summation of impact velocity and rebound velocity)
h = design drop height, m (in)
g = 9.81 m/s^2 (386.09 in/s^2)

The maximum test velocity change assumes 100 percent rebound. Programming materials other than pneumatic springs may have less than 100 percent rebound, so the maximum test velocity needs to be decreased accordingly. If the maximum test velocity specified is used for drop table shock machine programming materials other than pneumatic springs, the test is conservative (an over-test), and the maximum test item velocity is a bounding requirement.

b.
Set the shock machine to an acceleration level (Am), as determined from TD and deltaV, well below the anticipated fragility level. If no damage occurs, increase Am incrementally (along with deltaV), while holding the pulse duration TD constant, until damage to the test item occurs. This will establish the materiel's critical acceleration fragility (or velocity change) level.

c. Test levels used in this procedure represent the correlation of the best information currently available from research and experience. Use more applicable test level data if they become available (paragraph 6.1, reference g). In particular, if data are collected on a materiel drop and the SRS of the environment computed, a scaled version of the SRS could be used to establish the acceleration fragility level with respect to a measured environment on electrodynamic or servo-hydraulic test equipment, provided the displacement and velocity limitations of the test equipment are not exceeded. In addition to the maximax acceleration response spectra, compute the pseudo-velocity response spectra.

4.6.4.2 Test Tolerances - Fragility (Procedure III).

It is assumed that the instrumentation noise in the measurements is low enough that tolerances may be established. For complex transients from measured data, ensure test tolerances are consistent with the general guidelines provided in paragraph 4.2.2. For classical pulse testing, ensure the test tolerances specified in Figure 516.8-4, with respect to the information provided in Table 516.8-VIII, are satisfied.

4.6.4.3 Test Procedure - Fragility (Procedure III).

This test is designed to build up in severity, as measured in peak acceleration or velocity change, until a test item failure occurs or a predetermined goal is reached.
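The build-up described above, incrementing Am (via deltaV) while holding TD fixed, can be sketched as a pre-selected severity schedule. The step fractions and the example design height below are my own assumptions; the formulas are the 100 percent rebound trapezoidal relations from paragraph 4.6.4:

```python
import math

# Hypothetical severity schedule for the fragility build-up: increase drop
# height (hence delta-V and A_m) in pre-selected steps while T_D stays fixed.
# Step fractions and the example design height are assumed values.

G = 9.81  # m/s^2

def fragility_steps(design_h_m, t_d_s, fractions=(0.25, 0.5, 0.75, 1.0)):
    """Return (drop height m, delta-V m/s, A_m G) per severity step,
    assuming 100% rebound: dV = 2*sqrt(2*g*h), A_m = dV / (g*T_D)."""
    steps = []
    for frac in fractions:
        h = frac * design_h_m
        dv = 2.0 * math.sqrt(2.0 * G * h)
        a_m = dv / (G * t_d_s)
        steps.append((round(h, 3), round(dv, 3), round(a_m, 1)))
    return steps

for step in fragility_steps(0.76, 0.011):   # assumed 0.76 m design height
    print(step)
```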
It may be necessary to switch axes between each shock event unless critical axes are determined prior to test. In general, all axes of importance will be tested at the same level before moving to another level. The order of test activity and the calibration requirements for each test setup should be clearly established in the test plan. It is also desirable to pre-select the steps in severity based on knowledge of the materiel item or the test environment, and to document this in the test plan. Unless critical stress thresholds are analytically predicted and instrumentation is used to track stress threshold buildup, there is no rational way to estimate the potential for stress threshold exceedance at the next shock input level. The following procedures, one for a classical pulse and the other for a complex transient, are written as if the test will be conducted in one axis alone. In cases where more test axes are required, modify the procedure accordingly.

a. Classical Pulse. This part of the procedure assumes that the classical pulse approach is being used to establish the fragility level by increasing the drop height of the test item, thereby increasing deltaV directly. The fragility level is given in terms of the measurement variable (peak acceleration of the classical pulse), while holding the pulse duration, a function of the materiel modal characteristics, constant. In using this procedure, estimate the first mode mounted frequency of the materiel in order to specify the pulse duration TD.

Step 1  Mount the calibration load to the test apparatus in a configuration similar to that of the actual test item. Use a fixture similar in configuration to the interface of the shock attenuation system (if any) that will support the materiel. The fixture should be as rigid as possible to prevent distortion of the shock pulse input to the test item.
Step 2 Perform calibration shocks until two consecutive shock applications to the calibration load reproduce waveforms that are within the specified test tolerances. If response to the calibration shock is nonlinear with respect to shock input level, other test procedures may need to be applied to establish materiel fragility levels, depending upon the extent of the nonlinearity prior to reaching the "stress threshold".
Step 3 Select an initial drop height low enough to assure that no damage will occur, by selecting a fraction of the anticipated service drop height established from Transit Drop Tables 516.8-IX thru 516.8-XI. The maximum velocity change can be taken to be:

∆V = 2 √(2gh)

Where:
∆V = maximum test item velocity change, m/s (in/s) (assumes full resilient rebound of test item)
h = drop height, m (in.)
g = acceleration of gravity, 9.81 m/s² (386.09 in/s²)

Step 4 Mount the test item in the fixture. Perform an operational check and document the pre-test condition. If the test item operates satisfactorily, proceed to Step 5. If not, resolve the problems and repeat this step.
Step 5 Perform the shock test at the selected level, and examine the recorded data to assure the test is within tolerance.
Step 6 Visually examine and operationally check the test item to determine if damage has occurred. If the test item does not operate satisfactorily, follow the guidance in paragraph 4.3.2 for test item failure.
Step 7 If it is required to determine the fragility of the test item in more than one axis, proceed to test the item (Steps 4-6) in the other axes (before changing the drop height).
Step 8 If the test item integrity is preserved, select the next drop height.
Step 9 Repeat Steps 4 through 8 until the test objectives have been met.
Step 10 Perform a post shock operational test of the test item.
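The velocity change relation in Step 3 can be checked numerically. The sketch below simply encodes ∆V = 2√(2gh) under the stated full-rebound assumption; the function name is invented for illustration:

```python
import math

def fragility_delta_v(drop_height_m, g=9.81):
    """Maximum test item velocity change for a drop from height h, assuming
    full resilient rebound of the test item: delta_V = 2*sqrt(2*g*h), m/s."""
    return 2.0 * math.sqrt(2.0 * g * drop_height_m)

# Example: a 0.5 m drop with full rebound gives a velocity change near 6.26 m/s.
dv = fragility_delta_v(0.5)
```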
See paragraph 5 for analysis of results. Document the results, including plots of the measured test response waveforms, and any pre- or post-shock operational anomalies.

b. Synthesized Pulse. This part of the procedure assumes that the fragility level is some function of the peak acceleration level that correlates with a maximax acceleration SRS of a complex transient base input (because stress relates to velocity, a peak pseudo-velocity level determined from a maximax pseudo-velocity SRS of a complex transient is preferable). For a complex transient specified in the time domain, this procedure generally uses the peak acceleration of the time history to define the fragility level.

Step 1 Mount the calibration load to the test apparatus in a configuration similar to that of the actual test item. Use a fixture similar in configuration to the interface of the shock attenuation system (if any) that will support the materiel. The fixture should be as rigid as possible to prevent distortion of the shock pulse input to the test item.
Step 2 Perform calibration shocks until two consecutive shock applications to the calibration load reproduce maximax acceleration SRS or pseudo-velocity SRS that are within the specified test tolerances. If response to the calibration shock is nonlinear with respect to shock input level, other test procedures, along with simple modeling, may need to be applied to establish materiel fragility levels, depending upon the extent of the nonlinearity prior to reaching the "stress threshold".
Step 3 Select a peak maximax acceleration (or pseudo-velocity) SRS level low enough to assure no damage will occur.
Step 4 Mount the test item in the fixture. Inspect and operationally test the item to document the pre-test condition. If the test item operates satisfactorily, proceed to Step 5. If not, resolve the problems and repeat this step.
Step 5 Perform the shock test at the selected level, and examine the recorded data to assure the test maximax acceleration (or pseudo-velocity) SRS is within tolerance.
Step 6 Visually examine and operationally check the test item to determine if damage has occurred. If so, follow the guidance in paragraph 4.3.2 for test item failure.
Step 7 If it is required to determine the fragility of the test item in more than one axis, proceed to test the item in the other axes (before changing the peak maximax acceleration (or pseudo-velocity) SRS level).
Step 8 If the test item integrity is preserved, select the next predetermined peak maximax acceleration (or pseudo-velocity) SRS level.
Step 9 Repeat Steps 5 through 8 until the test objectives have been met.
Step 10 Perform a post shock operational test of the test item. See paragraph 5 for analysis of results. Document the results, including plots of the measured test response waveforms and any pre- or post-shock operational anomalies.

4.6.5 Transit Drop (Procedure IV).
The intent of this test is to determine the structural and functional integrity of the materiel to a transit drop, either outside or in its transit or combination case. In general, there is no instrumentation requirement for the test, and measurement information is minimized. However, if measurements are made, the maximax acceleration SRS and the pseudo-velocity SRS will define the results of the test, along with the measurement amplitude time history.

4.6.5.1 Test Controls - Transit Drop (Procedure IV).
Test levels for this test are based on information provided in Tables 516.8-IX thru 516.8-XI. Test the item in the same configuration that is used in transportation, handling, or a combat situation.
Toppling of the item following impact will occur in the field; therefore, toppling of the test item following its initial impact should not be restrained, as long as the test item does not leave the required drop surface. Levels for this test were set by considering how materiel in the field might commonly be dropped. Conduct all drops using a quick release hook or drop tester. Use of a standardized impact surface is recommended for test repeatability, because the surface configuration can influence test results. For most drop test requirements, steel plate on reinforced concrete is the default impact surface. The plate shall be homogeneous material with a minimum thickness of 3 inches (76 mm) and a Brinell hardness of 200 or greater. The plate shall be uniformly flat within commercial mill production standards, level within 2 degrees, and free of surface irregularities that may influence impact results. The concrete shall have a minimum compressive strength of 2500 psi (17 MPa), and be reinforced as required to prevent fracture during testing. In high velocity hazard classification drop scenarios (e.g., 40 ft), it is necessary that the concrete strength be 4000 psi, with a minimum thickness of 24 inches. The steel plate shall be bonded and/or bolted to the concrete to create a uniform rigid structure without separation. The concrete foundation plus the impact plate mass shall be a minimum of 20 times the mass of the test item. The plate surface dimensions shall be sufficiently large to provide direct and secondary rotational impacts, and if possible, rebound impacts. Guidance systems that do not reduce the impact velocity may be employed to ensure the correct impact angle; however, the guidance shall be eliminated at a sufficient height above the impact surface to allow unimpeded fall and rebound. Use of armor plate or similar composition steel plate is recommended to improve steel surface durability and prevent impact indentation and cuts.
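The default steel-on-concrete surface requirements above lend themselves to a simple pre-test acceptance check. The helper below is purely illustrative (the function and argument names are invented; the thresholds restate the paragraph's defaults and are not a substitute for the standard's full requirements):

```python
def default_drop_surface_ok(plate_thickness_mm, brinell_hardness,
                            concrete_psi, surface_mass_kg, test_item_mass_kg):
    """Check a candidate impact surface against the paragraph's defaults:
    steel plate >= 76 mm (3 in) thick with Brinell hardness >= 200,
    concrete >= 2500 psi, and (concrete + plate) mass >= 20x test item mass."""
    return (plate_thickness_mm >= 76
            and brinell_hardness >= 200
            and concrete_psi >= 2500
            and surface_mass_kg >= 20.0 * test_item_mass_kg)

# Example: a 100 kg test item requires at least 2000 kg of plate-plus-foundation mass.
ok = default_drop_surface_ok(76, 200, 2500, 2000, 100)        # meets defaults
marginal = default_drop_surface_ok(76, 200, 2500, 1900, 100)  # mass ratio too low
```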
The impact surface shall be free from standing water, ice, or other material during testing. The most severe damage potential is impact with a non-yielding mass that absorbs minimal energy. Thus, use of a single monolithic impact mass is recommended, so that impact energy is transferred into the test item rather than absorbed by the mass. The impact mass rigidity and energy transfer can be evaluated by measurement of the mass acceleration during testing. Tables 516.8-IX thru 516.8-XI provide default drop conditions for transport from the manufacturer to the end of the item's service life. Table 516.8-IX (Logistic Transit Drop Test) includes drop scenarios generally associated with non-tactical, logistical transport, based on weight and test item dimensions. Table 516.8-X (Tactical Transport Drop Test) includes drop scenarios generally associated with tactical transport beyond the theatre storage area. As a default, the criteria for the tactical transport drop tests are to meet all performance requirements. For items that are incapable of meeting performance requirements, adjustments may be made to the drop height or configuration to accommodate the item performance limitations. If the drop conditions are modified, restrictions may be placed on the deployment of the item. Ensure an adequate test is performed, and that all deviations from this procedure are properly documented. Table 516.8-XI (Severe Tactical Transport Drop Test) includes severe drop scenarios; the item is considered to have passed if it did not explode, burn, or spread propellant or explosive material as a result of dropping, dragging, or removal of the item for disposal. Other drop scenarios in the LCEP should be considered. Realistic variations to the default values provided in Tables 516.8-IX thru 516.8-XI may be permitted when justified; e.g., large/complex systems in which specific handling considerations are identified in the LCEP may supersede the default levels provided.
Figure 516.8-8 illustrates the standard drop orientations as referenced in Tables 516.8-IX thru 516.8-XI. Figure 516.8-9 illustrates typical edge and corner drop configurations for large packages, as discussed in Notes 2-4 of Table 516.8-IX.

Table 516.8-IX. Logistic Transit Drop Test.1

Weight of Test Item & Case, kg (lbs) | Largest Dimension, cm (in.) | Notes | Height of Drop, h, cm (in.) | Number of Drops
Under 45.4 (100), man-packed or man-portable | Under 91 (36) | | 122 (48) | Drop on each face, edge, and corner; total of 26 drops5
 | 91 (36) & over | | 76 (30) |
45.4 - 90.8 (100 - 200) inclusive | Under 91 (36) | | 76 (30) | Drop on each corner; total of eight drops
 | 91 (36) & over | | 61 (24) |
90.8 - 454 (200 - 1000) inclusive | Under 91 (36) | | 61 (24) |
 | 91 - 152 (36 - 60) | 2 | 61 (24) |
 | Over 152 (over 60) | 2 | 61 (24) |
Over 454 (1000) | No limit | 3, 4 | 46 (18) | Drop on each bottom edge; drop on bottom face or skids; total of five drops

Note 1: Perform drops from a quick-release hook or drop tester. Orient the test item so that, upon impact, a line from the struck corner or edge to the center of gravity of the case and contents is perpendicular to the impact surface. The default drop surface is steel backed by concrete. Concrete, or 5 cm (2 in) plywood backed by concrete, may be selected if (a) a concrete or wood surface is representative of the most severe service conditions, or (b) it can be shown that the compressive strength of the impact surface is greater than that of the test item impact point(s). Note that the shorter shock duration associated with the steel impact surface may not excite all test item resonant modes.
Note 2: With the longest dimension parallel to the floor, support the transit or combination case, with the test item within, at the corner of one end by a block 13 cm (five inches) in height, and at the other corner or edge of the same end by a block 30 cm (12 inches) in height. Raise the opposite end of the case to the specified height at the lowest unsupported corner and allow it to fall freely.
Note 3: While in the normal transit position, subject the case and contents to the edgewise drop test as follows (if the normal transit position is unknown, orient the case so the two longest dimensions are parallel to the floor): Edgewise drop test: Support one edge of the base of the case on a sill 13-15 cm (five to six inches) in height. Raise the opposite edge to the specified height and allow it to fall freely. Apply the test once to each edge of the base of the case (total of four drops).
Note 4: For shelters without shock attenuated skids, the drop height may be reduced to 15 cm (6 in) with a 10 cm (4 in) sill for edgewise drops.
Note 5: If desired, divide the 26 drops among no more than five test items (see paragraph 4.6.5.1).

Table 516.8-X. Tactical Transport Drop Test.
Scenario | Category | Impact Velocity (m/sec) | Drop Height1 (m) | Configuration | # Drops / Orientation3,6 | Impact Surface
Ship Transport | Storage and transport to theatre storage area; transport by ship | 5.4 (17.7 ft/sec) | 1.5 (5 ft) | Packaged6 | (Minimum of 3) Flat bottom and two faces4 | Steel7,8 backed by concrete
Unpackaged Handling | Infantry and man-carried equipment | 5.4 (17.7 ft/sec) | 1.5 (5 ft) | Unpackaged | 5: Flat bottom, two faces4, and two edges5 | Steel7,8 backed by concrete
Packaged Handling | Loading and offloading from side of transport vehicle; transport by truck, forklift, & helicopter | 6.4 (21 ft/sec) | 2.1 (7 ft) | Packaged6 | 5 | Steel7,8 backed by concrete
Helicopter | Underslung load, quick release onto land or ship | 6.4 (21 ft/sec) | 2.1 (7 ft) | Packaged6 | 1: Flat bottom | Steel7,8 backed by concrete
Parachute Drop2 | Low velocity drop | 8.7 (28.5 ft/sec) | 3.8 (12.6 ft) | Packaged with appropriate honeycomb or other shock absorbing system used in delivery | 1: Flat bottom | Concrete
Parachute Drop | High velocity drop | 27.3 (90 ft/sec) | 38.1 (125 ft) | | | Concrete

Note 1: The test is not intended to encompass all credible accident conditions or severe mishandling conditions. Where the drop heights quoted are exceeded by those specified elsewhere in the table, or for other phases of Service, the higher values should be substituted.
Note 2: Drop heights are provided for simulated parachute drops. This test may not fully address certain effects that can occur during parachute drops in high wind conditions. Consider different drop heights and angles of impact to address these issues. Drop from aircraft may be required for airdrop certification.
Note 3: Sufficient assets are required to test in each of the orientations specified. The five standard drop orientations are listed in Table 516.8-XII and illustrated in Figure 516.8-8. Consider other drop orientations if expected to have a greater damage potential. Expose each item to no more than 2 drops.
Note 4: For munitions, the two faces shall be the forward and aft ends of the munition.
Note 5: For munitions, the two edges shall be at 45 degrees on the forward and aft ends.
Note 6: Unpackaged if required by LCEP or Test Plan.
Note 7: The default drop surface is steel backed by concrete. Concrete, or 5 cm (2 in) plywood backed by concrete, may be selected if (a) a concrete or wood surface is representative of the most severe service conditions, or (b) it can be shown that the compressive strength of the impact surface is greater than that of the test item impact point(s). Note that the shorter shock duration associated with the steel impact surface may not excite all test item resonant modes.
Note 8: A steel impact surface shall have a Brinell hardness of at least 200. For test items less than 454 kg (1000 lbs), the steel plate shall be at least 2.5 cm (1 in) thick; otherwise, it shall be at least 7.6 cm (3 in) thick.

Table 516.8-XI. Severe Tactical Transport Drop Test.

Scenario | Category | Impact Velocity (m/sec) | Drop Height (m) | Configuration | # Drops / Orientation4,5
Helicopter | External Carriage on Helicopter | 6.4 (21 ft/sec) | 2.1 (7 ft) | Unpackaged | 5: Flat bottom, two faces2, and two edges3
Military Land Vehicles | Includes weapons loading and off loading | 7.7 (25.3 ft/sec) | 3.05 (10 ft) | Unpackaged |
Aircraft | External Carriage on Fixed Wing Aircraft | 7.7 (25.3 ft/sec) | 3.05 (10 ft) | Unpackaged |
Crane | Accidental Crane Drop | 15.5 (50.9 ft/sec) | 12.2 (40 ft) | Packaged1 |
Ship Transport | Shipboard Loading | 15.5 (50.9 ft/sec) | 12.2 (40 ft) | Packaged1 | (Minimum of 3) Flat bottom and two faces2
Ship Aircraft Carrier | Shipboard Loading and Handling | 22.1 (72.5 ft/sec) | 25 (82 ft) | Packaged1 | 5: Flat bottom, two faces2, and two edges3

Note 1: Unpackaged if required by LCEP or Test Plan.
Note 2: For munitions, the two faces shall be the forward and aft ends of the munition.
Note 3: For munitions, the two edges shall be at 45 degrees on the forward and aft ends.
Note 4: Sufficient assets are required to test in each of the orientations specified. The five standard drop orientations are listed in Table 516.8-XII and illustrated in Figure 516.8-8. Other drop orientations should be considered if expected to have a greater damage potential. Each item should be exposed to no more than 2 drops.
Note 5: The default drop surface is steel backed by concrete. Concrete, or 5 cm (2 in) plywood backed by concrete, may be selected if (a) a concrete or wood surface is representative of the most severe service conditions, or (b) it can be shown that the compressive strength of the impact surface is greater than that of the test item impact point(s). Note that the shorter shock duration associated with the steel impact surface may not excite all test item resonant modes.

Table 516.8-XII. Five standard drop test orientations.

Drop | Rectangular Packages | Cylindrical Packages
1 | Flat Bottom | Horizontal (Side 1)
2 | Face 1 (Left End) | Face 1 (Fwd End/Top)
3 | Face 2 (Right End) | Face 2 (Aft End/Bottom)
4 | Edge 1 (Bottom Right End Edge) | Edge 1 (Aft End Bottom Edge (45 Deg))
5 | Edge 2 (Top Left Edge) | Edge 2 (Fwd End Top Edge (45 Deg))

Figure 516.8-8. Standard drop orientations for rectangular and cylindrical packages.

Figure 516.8-9. Illustration of edge drop configuration (corner drop end view is also illustrated).
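As a consistency check, the paired drop heights and impact velocities tabulated in Tables 516.8-X and 516.8-XI follow the free-fall relation v = √(2gh). The sketch below simply verifies the tabulated pairs to within rounding:

```python
import math

def impact_velocity(drop_height_m, g=9.81):
    """Free-fall impact velocity from a drop height: v = sqrt(2*g*h), in m/s."""
    return math.sqrt(2.0 * g * drop_height_m)

# (drop height m, tabulated impact velocity m/s) pairs from Tables 516.8-X and XI.
pairs = [(1.5, 5.4), (2.1, 6.4), (3.8, 8.7), (38.1, 27.3),
         (3.05, 7.7), (12.2, 15.5), (25.0, 22.1)]
deviations = [abs(impact_velocity(h) - v) for h, v in pairs]
```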
4.6.5.2 Test Tolerances - Transit Drop (Procedure IV).
Ensure the test height of drop is within 2.5 percent of the height of drop specified in Tables 516.8-IX through 516.8-XI.

4.6.5.3 Test Procedure - Transit Drop (Procedure IV).
Step 1 After performing a visual inspection and operational check for baseline data, install the test item in its transit or combination case as prepared for field use (if measurement information is to be obtained, install and calibrate such instrumentation in this step). If the test item operates satisfactorily, proceed to Step 2. If not, resolve the problems and repeat this step.
Step 2 From paragraph 4.6.5.1 and Tables 516.8-IX thru 516.8-XI, determine the height of the drops to be performed, the drop orientation, the number of drops per test item, and the drop surface.
Step 3 Perform the required drops using the apparatus and requirements of paragraphs 4.6.5 and 4.6.5.1 and the notes to Tables 516.8-IX through 516.8-XI. It is recommended to visually and/or operationally check the test item periodically during the drop test to simplify any follow-on evaluation that may be required. If any degradation is noted, see paragraph 4.3.2.
Step 4 Document the impact point or surface for each drop, and any obvious damage.
Step 5 Following completion of the required drops, visually examine the test item(s), and document the results.
Step 6 Conduct an operational checkout in accordance with the approved test plan. See paragraph 5 for analysis of results.
Step 7 Document the results for comparison with data obtained in Step 1, above.

4.6.6 Crash Hazard Shock (Procedure V).
The intent of this procedure is to disclose structural failures of materiel, or of mounts for materiel, in air or ground vehicles that may present a hazard to personnel or other materiel if the materiel breaks loose from its mount during or after a vehicle crash. This test procedure is intended to verify that materiel mounting and/or restraining devices will not fail, and that sub-elements are not ejected, during crash situations. Attach the test item to its shock fixture by its in-service mounting or tie-downs. For materiel weighing less than 227 g (8 ounces), it may be permissible to omit the crash hazard test if it is determined that personnel expected to be in the vicinity of the test article are equipped with sufficient Personal Protective Equipment (PPE) (i.e., helmets with visors) such that the risk of significant bodily injury is determined to be highly unlikely. In addition to the item's mass, assess overall material properties and geometry when considering omitting Procedure V. Final decisions in such cases are left to the discretion of the responsible safety authority, based upon the case-specific hazard analysis.

4.6.6.1 Test Controls - Crash Hazard Shock (Procedure V).
Use Table 516.8-III and Figure 516.8-2 as the test spectrum and effective durations. If shock spectrum analysis capabilities are not available, a classical pulse may be used as an alternative to a complex transient waveform developed from the SRS in Figure 516.8-2. Table 516.8-XIII provides the parameters for the default terminal peak sawtooth. An aircraft crash level of 40 G's is based on the assumption that, during a survivable crash, localized G levels can approach 40 G's. Ground transportation vehicles are designed with a higher safety factor and, therefore, must sustain a much higher G level, with correspondingly higher specified test levels.

Table 516.8-XIII. Terminal peak sawtooth default test parameters for Procedure V – Crash Hazard (refer to Figure 516.8-3).
Test | Minimum Peak Value and Pulse Duration, Am (G-Pk) & TD (ms)
 | Flight Vehicle Materiel1 | Ground Materiel1
Procedure V - Crash Hazard | 40 G, 11 ms | 75 G, 6 ms

Note 1: For materiel that is shock-mounted or weighing more than 136 kg (300 lbs), an 11 ms half-sine pulse of such amplitude that it yields a velocity change equivalent to the default terminal peak sawtooth may be employed.

4.6.6.2 Test Tolerances - Crash Hazard Shock (Procedure V).
For complex waveform replication based on SRS, ensure the test tolerances are within those specified for the SRS in paragraph 4.2.2. For the classical terminal peak sawtooth and half-sine options defined in Table 516.8-XIII, ensure the waveform is within the tolerances specified in Figures 516.8-3 and 516.8-5.

4.6.6.3 Test Procedure - Crash Hazard Shock (Procedure V).
Step 1 Secure the test item mount to the shock apparatus by its in-service mounting configuration. Use a test item that is dynamically similar to the materiel, or a mechanically equivalent mockup. If a mockup is used, it will represent the same hazard potential, mass, center of mass, and mass moments about the attachment points as the materiel being simulated. (If measurement information is to be collected, mount and calibrate the instrumentation.)
Step 2 Perform two shocks in each direction (as determined in paragraph 2.3.3) along three orthogonal axes of the test item, for a maximum of 12 shocks.
Step 3 Perform a physical inspection of the test setup. Operation of the test item is not required.
Step 4 Document the results of the physical inspection, including an assessment of potential hazards created by either materiel breakage or structural deformation, or both. Process any measurement data according to the maximax acceleration SRS or the pseudo-velocity SRS.

4.6.7 Bench Handling (Procedure VI).
The intent of this test is to determine the ability of materiel to withstand the usual level of shock associated with typical bench maintenance or repair. Use this test for any materiel that may experience bench or bench-type maintenance. This test considers both the structural and functional integrity of the materiel.

4.6.7.1 Test Controls - Bench Handling (Procedure VI).
Ensure the test item is a fully functional representative of the materiel. Raise the test item at one edge 100 mm (4 in.) above a solid wooden bench top, or until the chassis forms an angle of 45° with the bench top, or until the point of balance is reached, whichever is less. (The bench top must be at least 4.25 cm (1.675 inches) thick.) Perform a series of drops in accordance with specifications. The heights used during this test were defined by examining the typical drops commonly made by bench technicians and assembly line personnel.

4.6.7.2 Test Tolerances - Bench Handling (Procedure VI).
Ensure the test height of drop is within 2.5 percent of the height of drop specified in paragraph 4.6.7.1.

4.6.7.3 Test Procedure - Bench Handling (Procedure VI).
Step 1 Following an operational and physical checkout, configure the item as it would be for servicing, e.g., with the chassis and front panel assembly removed from its enclosure. If the test item operates satisfactorily, proceed to Step 2. If not, resolve the problems and repeat this step. Position the test item as it would be for servicing. Generally, the test item will be non-operational during the test.
Step 2 Using one edge as a pivot, lift the opposite edge of the chassis until one of the following conditions occurs (whichever occurs first):
a. The lifted edge of the chassis has been raised 100 mm (4 in.) above the horizontal bench top.
b. The chassis forms an angle of 45° with the horizontal bench top.
c. The lifted edge of the chassis is just below the point of perfect balance.
Let the chassis drop back freely to the horizontal bench top. Repeat, using other practical edges of the same horizontal face as pivot points, for a total of four drops.
Step 3 Repeat Step 2 with the test item resting on other faces until it has been dropped a total of four times on each face on which the test item could practically be placed during servicing.
Step 4 Visually inspect the test item.
Step 5 Document the results.
Step 6 Operate the test item in accordance with the approved test plan. See paragraph 5 for analysis of results.
Step 7 Document the results for comparison with data obtained in Step 1, above.

4.6.8 Pendulum Impact (Procedure VII).
The test item (large shipping container) may consist of a box, case, crate, or other container constructed of wood, metal, or other material, or any combination of these, for which ordinary box tests are not considered practical or adequate. Unless otherwise specified, large containers are those that measure more than 152 cm (60 in.) on any edge or diameter, or those that, when loaded, have gross weights in excess of 70 kg (154 lbs).

4.6.8.1 Test Controls - Pendulum Impact (Procedure VII).
a. The pendulum impact tester consists of a platform suspended from a height at least 5 m (16.4 ft) above the floor by four or more ropes, chains, or cables; and a bumper comprised of a flat, rigid concrete or masonry wall, or other equally unyielding flat barrier. The bumper is at least 46 cm (18.1 in) high, wide enough to make full contact with the container end, and has sufficient mass to resist the impacts without displacement. The impact surface is oriented perpendicular to the line of swing of the platform. The platform is large enough to support the container or pack, and when hanging free, has its top surface approximately 23 cm (9.1 in) above the floor and its leading edge at least 8 cm (3.1 in) from the surface of the bumper.
The suspension chains are vertical and parallel, so that when the platform is pulled straight back, it will rise uniformly but remain at all times horizontal and parallel to the floor (see Figure 516.8-10).

Figure 516.8-10. Pendulum impact test.

b. The drop height shall be determined for the required horizontal impact velocity based on the transfer of potential to kinetic energy (h = v²/2g). Unless otherwise specified, the vertical height is a drop of 23 cm (9 in.), which results in a velocity of 2.13 m/sec (7 ft/sec) at impact.

c. Load the test item (container) with the interior packing and the actual contents for which it was designed. If use of the actual contents is not practical, a dummy load may be substituted to simulate such contents in weight, shape, and position in the container. Block and brace the contents, or dummy load, and cushion them in place as for shipment. When the pendulum impact test is performed to evaluate the protection provided for the contents, the rigidity of a dummy load should closely approximate that of the actual contents for which the pack was designed.

4.6.8.2 Test Tolerances - Pendulum Impact (Procedure VII).
Ensure the vertical drop height is within 2.5 percent of the required height.

4.6.8.3 Test Procedure - Pendulum Impact (Procedure VII).
Step 1 If required, perform a pretest operational checkout in accordance with the test plan. Install accelerometers and other sensors on the test item, as required.
Step 2 Place the test item on the platform with the surface that is to be impacted projecting beyond the front end of the platform, so that the specimen just touches the vertical surface of the bumper.
Step 3 Pull back the platform so that the center of gravity of the pack is raised to the prescribed height, and then release it to swing freely so that the surface of the container impacts against the bumper. Unless otherwise specified, the vertical height is a drop of 23 cm (9 in.), which results in a velocity of 2.13 m/sec (7 ft/sec) at impact.
Step 4 Examine the test item and record obvious damage. If the container is undamaged, rotate it 180 degrees and repeat Step 3. When the test is conducted to determine satisfactory performance of a container or pack, and unless otherwise specified, subject each test item to one impact to each side and each end that has a horizontal dimension of less than 3 m (9.8 ft).
Step 5 Record any changes or breaks in the container, such as apparent racking, nail pull, or broken parts, and their locations. Carefully examine the packing (blocks, braces, cushions, or other devices) and the contents, and record their condition. If required, perform a post-test operational checkout in accordance with the test plan. See paragraph 5 for analysis of results.

4.6.9 Catapult Launch/Arrested Landing (Procedure VIII).
The intent of this test is to verify the functionality and structural integrity of materiel mounted in or on fixed wing aircraft that are subject to catapult launches and arrested landings.

4.6.9.1 Test Controls - Catapult Launch/Arrested Landing (Procedure VIII).
a. Measured Data Not Available.
Whenever possible, derive the test conditions from measured data on applicable carrying aircraft (see Part One, paragraph 5.6, as well as the tasks at the end of Part One in Annex A, for information on the use of field/fleet data), since shock responses can be affected by local influences such as wing and fuselage bending modes, pylon interfaces, and structural damping. While the pulse amplitudes associated with this environment are generally low, the long periods of application and high frequency of occurrence have the potential to cause significant dynamic and/or low cycle fatigue damage in improperly designed materiel. A typical aircraft may fly as many as 200 sorties per year, of which more than two-thirds involve catapult launches and arrested landings. However, for laboratory test purposes, 30 simulated catapult/arrested landing events in each of two axes (longitudinal and vertical) should provide confidence that the majority of significant defects will be identified for remedial action. If acceptable field-measured data are not available, the following guidance is offered, in which a sinusoidal burst is used to simulate each catapult or arrested landing event. This time history has been simplified to a constant amplitude sine burst of 2-second duration for simulation at the selected materiel frequency (usually the first fundamental mode of the loaded aircraft wing). For testing purposes, it is permissible to reduce the maximum amplitude in the horizontal direction to 75 percent of that in the vertical direction.

(1) Wave shape: damped sine wave.
(2) Wave frequency: determined by structural analysis of the specific aircraft and the frequency of the fundamental mode.
(3) Burst amplitude: determined by structural analysis of the specific aircraft, the frequency of the fundamental mode, and the location of the materiel relative to the shape of the fundamental mode.
(4) Wave damping (quality factor): Q = 20.
(5) Axis: vertical, horizontal, longitudinal.
(6) Number of bursts: determined by the specific application (for example, 30 bursts, each followed by a 10-second rest period).
b. Measured Data Available. If acceptable field-measured data are available, the following guidance is offered, in which the catapult event is simulated by two shocks separated by a transient vibration, and the arrested landing event by one shock followed by a transient vibration. The catapult launch/arrested landing shock environment differs from other typical shock events in that it is a transient periodic vibration (roughly sinusoidal) at a relatively low frequency determined by aircraft mass and landing gear damping characteristics. Typical catapult launch shock time histories are shown in Figure 516.8-11. These data represent measured acceleration response in the vertical, horizontal, and longitudinal directions of a store component mounted on the pylon of a platform. The data are DC coupled and low pass filtered at 70 Hz. All three time histories demonstrate an initial transient, followed by a transient vibration (nearly two seconds long), and concluded by a final transient. The longitudinal axis provides a profile of the DC catapult acceleration that, in general, will not be important for testing purposes, and can be removed by high pass filtering the time history at a frequency less than 10 percent of the lowest significant frequency in the maximax acceleration SRS. Procedures for accomplishing this filtering may necessarily be iterative (unless Fourier transform information is used), with high pass filtering beginning at a comparatively high frequency and decreasing until the most significant SRS low frequency is identified.
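As a rough illustration of the burst parameters in paragraph 4.6.9.1a, the sketch below generates one damped sine burst with Q = 20, taking the envelope decay as exp(-πft/Q) (i.e., ζ = 1/(2Q)). The 12 Hz frequency, 1.5 g amplitude, and sample rate are hypothetical placeholders, not values prescribed by this Method:

```python
import math

def sine_burst(freq_hz, amplitude_g, duration_s=2.0, q_factor=20.0,
               sample_rate=4096.0):
    """Illustrative damped sine burst for one simulated catapult or
    arrested landing event.

    freq_hz      -- fundamental mode frequency (from structural analysis)
    amplitude_g  -- burst amplitude at the materiel location
    q_factor     -- wave damping quality factor (Q = 20 per paragraph a)

    The envelope decays as exp(-pi*f*t/Q), corresponding to zeta = 1/(2Q).
    """
    n = int(duration_s * sample_rate)
    dt = 1.0 / sample_rate
    return [amplitude_g * math.exp(-math.pi * freq_hz * i * dt / q_factor)
            * math.sin(2.0 * math.pi * freq_hz * i * dt) for i in range(n)]

# Hypothetical example: 12 Hz fundamental mode, 1.5 g vertical amplitude;
# horizontal amplitude may be reduced to 75 percent of vertical.
vertical = sine_burst(12.0, 1.5)
horizontal = sine_burst(12.0, 0.75 * 1.5)
```

In a laboratory sequence, such a burst would be repeated the specified number of times (e.g., 30), each followed by a rest period.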
In general, catapult acceleration response will display two shock events, corresponding to initial catapult load application to the aircraft and catapult release from the aircraft, separated by an oscillatory acceleration. Both the initial and the final shock events have a distinct oscillatory nature. It is essential that this test be run as a series of two shock transients separated by a two-second period of time in which transient vibration may be input. Typical arrested landing shock time histories are shown in Figure 516.8-12. These data represent measured acceleration response in the vertical, horizontal, and longitudinal directions of a store component mounted on the pylon of a platform. The data are DC coupled and low pass filtered at 70 Hz. All three time histories demonstrate an initial transient, followed by a transient vibration (nearly three seconds long). It is clear that the longitudinal time history has a comparatively large DC component that may be filtered out for test specification development. The term “transient vibration” is introduced here because the duration of the event is not typical of a shock event.
NOTE: Transient Vibrations. For precise laboratory simulation, Procedure VIII may require consideration of the concept of a transient vibration in processing and replication of the form of time history from measured data. For long duration transient environments (durations on the order of one second or more), it may be useful to process the response time history by estimating the envelope function, a(t), and proceeding to compute a maximax Autospectral Density (ASD) estimate, assuming short portions of the response time history behave in the same manner as stationary random data. Estimation of this form falls under the category of nonstationary time history processing and will not be considered further in this Method. For a precise definition of transient vibration, see Part One, Annex D.
The importance of the transient vibration phenomenon is that (1) it has the form of a shock (short duration and substantial time-varying amplitude), (2) it can be mathematically modeled in a precise way, and (3) it can be used in stochastic simulation of certain shock environments. In general, shocks have their significant energy in a shorter time frame than transient vibrations, while transient vibrations allow for time history enveloping functions other than the exponential envelope form oftentimes displayed in shocks as a result of resonant response decay to an impact.
Figure 516.8-11. Sample measured store three axis catapult launch component response acceleration time histories. (Panels: bulkhead vertical, horizontal, and longitudinal acceleration in g versus time in seconds.)
Figure 516.8-12. Sample measured store three axis arrested landing component response acceleration time histories.
4.6.9.2 Test Tolerances - Catapult Launch/Arrested Landing (Procedure VIII).
For cases in which measured data are not available and waveforms are generated from dynamic analysis of the configuration, ensure the waveform tolerances are within the time history test tolerances specified for waveforms in paragraph 4.2.2. For cases in which measured data are available, ensure the SRS for the test response is within the SRS tolerances specified in paragraph 4.2.2. For transient vibration, ensure the waveform peaks and valleys are within the tolerances given for waveforms in paragraph 4.2.2 or as provided in the test specification.
4.6.9.3 Test Procedure - Catapult Launch/Arrested Landing (Procedure VIII).
Step 1 Mount the test item to its shock/vibration fixture on the shock device for the first test axis.
Step 2 Attach instrumentation as required in the approved test plan.
Step 3 Conduct an operational checkout and visual examination in accordance with the approved test plan. If the test item operates satisfactorily, proceed to Step 4. If not, resolve the problems and repeat this step.
Step 4a If no measured field data are available, apply short transient sine waves of several cycles to the test item in the first test axis. (Each short transient sine wave of several cycles represents a single catapult or arrested landing event.) Follow each burst by a rest period to prevent unrepresentative effects. Operate the test item in its appropriate operational mode while bursts are applied. If the test item fails to operate as intended, follow the guidance in paragraph 4.3.2 for test item failure.
Step 4b If measured field data are available, either apply the measured response data under exciter system time waveform control (see Method 525.2), or process the catapult as two shocks separated by a transient vibration, and the arrested landing as a shock followed by a transient vibration. Operate the test item in its appropriate operational mode while bursts are applied. If the test item fails to operate as intended, follow the guidance in paragraph 4.3.2 for test item failure.
Step 5 If the test item has not malfunctioned during testing, conduct an operational checkout and visual examination in accordance with the approved test plan. If a failure has occurred, it may be desirable to perform a thorough visual examination before proceeding with the operational checkout to avoid initiating additional hardware damage. When a failure occurs, consider the nature of the failure and corrective action, along with the purpose of the test (engineering information or contractual
compliance) in determining whether to restart the test or to continue from the point of interruption. If the test item does not operate satisfactorily, follow the guidance in paragraph 4.3.2 for test item failure.
Step 6 Repeat Steps 1 through 5 for the second test axis.
Step 7 Document the test results, including amplitude time history plots and notes of any test item operational or structural degradation. See paragraph 5 for analysis of results.
5. ANALYSIS OF RESULTS.
In addition to the specific guidance provided in the test plan and the general guidance provided in Part One, paragraphs 5.14 and 5.17, and Part One, Annex A, Task 406, refer to the paragraphs below for supplemental test analysis information. Analyze any failure of a test item to meet the requirements of the materiel specifications.
a. Procedure I (Functional Shock) - Consider any interruption of the materiel operation during or after the shock in relationship to the materiel's operational test requirements. (See paragraph 4.3.2.)
b. Procedure II (Transportation Shock) - Consider any damage to the shock mounts or the internal structural configuration of the test item that may provide a cause for the development of a failure analysis course of action to consider retrofit or redesign.
c. Procedure III (Fragility) - The outcome of a successful fragility test is one specified measurement level of test item failure for each test axis, along with the duration of the shock. Consider that if the test item fails either operationally or structurally at the lowest level of testing, and there is no provision for testing at lower levels, the test item's fragility level is indeterminate.
d. Procedure IV (Transit Drop) - In general, analysis of results will consist of visual and operational comparisons before and after the test.
Measurement instrumentation and subsequent processing of acceleration time history information can provide valuable information related to response characteristics of the test item and statistical variation in the shock environment.
e. Procedure V (Crash Hazard Shock) - If measurement information was obtained, process it in accordance with paragraph 4.6.6.3, Step 4.
f. Procedure VI (Bench Handling) - In general, any operational or physical (mechanical or structural) change of configuration from Step 1 in paragraph 4.6.7.3 must be recorded and analyzed.
g. Procedure VII (Pendulum Impact) - In general, analysis of the results will consist of visual inspections and any operational comparisons before and after the test. Check for operability, and inspect for physical damage of the contents (except when using a dummy load). Damage to the exterior shipping container that is the result of improper interior packaging, blocking, or bracing is cause for rejection. Structural damage to the exterior shipping container that results in either spilling of the contents or failure of the container in subsequent handling is cause for rejection. Assess whether a substantial amount of shifting of the contents within the shipping container created conditions likely to cause damage during shipment, storage, and reshipment of the container. Minor container damage, such as chipping of wood members, dents, or paint chipping, is not cause for rejection. If recorded, acceleration time histories or other sensor data can provide valuable information related to the response characteristics of the test item.
h. Procedure VIII (Catapult Launch/Arrested Landing) - Consider any failure of the structural configuration of the test item, mount, or launcher that may not directly impact failure of the operation of the materiel, but that would lead to failure under in-service conditions.
6. REFERENCE/RELATED DOCUMENTS.
6.1 Referenced Documents.
a.
Handbook for Dynamic Data Acquisition and Analysis, IES-RD-DTE012.2, Institute of Environmental Sciences and Technology, Arlington Place One, 2340 S. Arlington Heights Road, Suite 100, Arlington Heights, IL 60005-4516.
b. Piersol, Allan G., Determination of Maximum Structural Responses From Predictions or Measurements at Selected Points, Proceedings of the 65th Shock and Vibration Symposium, Volume I, SAVIAC, 1994. Shock & Vibration Exchange (SAVE), 1104 Arvon Road, Arvonia, VA 23004.
c. MIL-DTL-901, “Detail Specification - Shock Tests, H.I. (High Impact), Shipboard Machinery, Equipment and Systems, Requirements for”, 20 June 2017.
d. MIL-STD-331, “Fuzes, Ignition Safety Devices and Other Related Components, Environmental and Performance Test for”, May 2017.
e. Gaberson, H. A., and Chalmers, R. H., Modal Velocity as a Criterion of Shock Severity, Shock and Vibration Bulletin 40, Pt. 2, 1969, pp. 31-49.
f. Piersol, Allan G., and T. L. Paez, eds., Harris' Shock and Vibration Handbook, 6th Edition, NY, McGraw-Hill, 2010.
g. AR 70-44, DoD Engineering for Transportability; Information Handling Services.
h. DEF-STAN-00-35, Part 3, Test M3, Issue 4, 10 July 2006.
i. Smallwood, David O., “Generating Ramp Invariant Filters for Various Forms of the Shock Response Spectrum”, 76th Shock and Vibration Symposium, 2005.
j. Bendat, J. S., and Piersol, A. G., Random Data: Analysis and Measurement Procedures, Fourth Edition, John Wiley & Sons Inc., New York, 2010.
k. Smallwood, D. O., “Characterization and Simulation of Transient Vibrations Using Band Limited Temporal Moments”, Shock and Vibration, Vol. 1, No. 6, pp. 507-527, John Wiley and Sons, 1994.
l.
Edwards, Timothy, “Power Delivered to Mechanical Systems by Random Vibrations”, Proceedings of the 79th Shock and Vibration Symposium, Orlando, Florida, October 2008.
m. Chu, A., “Zeroshift of Piezoelectric Accelerometers in Pyroshock Measurements”, Proceedings of the 58th Shock & Vibration Symposium, Huntsville, AL, October 1987.
n. Plumlee, R. H., “Zero-Shift in Piezoelectric Accelerometers”, Sandia National Laboratories Research Report, SC-RR-70-755, March 1971.
o. Bateman, V. I., “Accelerometer Isolation for Mechanical Shock and Pyroshock”, Proceedings of the 82nd Shock and Vibration Symposium, Baltimore, MD, November 2011 (paper), and ESTECH2012, Orlando, FL, May 2012.
p. Riley, Michael R., Murphy, H. P., Coats, Dr. Timothy W., and Petersen, Scott M., “Standardized Laboratory Test Requirements for Hardening Equipment to Withstand Wave Impact Shock in Small High-Speed Craft”, Naval Surface Warfare Center Carderock Division Report NSWCCD-80-TR-2017/002, February 2017.
q. Riley, Michael R., and Petersen, S. M., “The Use of Shock Isolation Mounts in Small High-Speed Craft to Protect Equipment from Wave Slam Effects”, Naval Surface Warfare Center Carderock Division Report NSWCCD-80-TR-2017/022, July 2017.
r. Riley, Michael R., Ganey, Dr. H. Neil, Haupt, Kelly, and Coats, Dr. Timothy W., “Laboratory Test Requirements for Marine Shock Isolation Seats”, Naval Surface Warfare Center Carderock Division Report NSWCCD-80-TR-2015/010, May 2015.
s. Bateman, V. I., Himelblau, H., and Merritt, R. G., “Validation of Pyroshock Data”, Journal of the IEST, October 2012.
6.2 Related Documents.
a. Conover, W. J., Practical Nonparametric Statistics, New York: Wiley, 1971, Chapter 3.
b. Piersol, A. G., Analysis of Harpoon Missile Structural Response to Aircraft Launches, Landings and Captive Flight and Gunfire, Naval Weapons Center Report #NWC TP 58890, January 1977.
c. Schock, R. W., and Paulson, W.
E., TRANSPORTATION - A Survey of Shock and Vibration Environments in the Four Major Modes of Transportation, Shock and Vibration Bulletin #35, Part 5, February 1966.
d. Ostrem, F. E., TRANSPORTATION AND PACKAGING - A Survey of the Transportation Shock and Vibration Input to Cargo, Shock and Vibration Bulletin #42, Part 1, January 1972. Shock & Vibration Exchange (SAVE), 1104 Arvon Road, Arvonia, VA 23004.
e. Allied Environmental Conditions and Test Procedure (AECTP) 400, Mechanical Environmental Tests (under STANAG 4370), Methods 403, 416, and 417.
f. MIL-STD-209K, Lifting and Tiedown Provisions.
g. DOD Directive 4510.11, DOD Transportation Engineering.
h. Egbert, Herbert W., “The History and Rationale of MIL-STD-810 (Edition 2)”, January 2010, Institute of Environmental Sciences and Technology, Arlington Place One, 2340 S. Arlington Heights Road, Suite 100, Arlington Heights, IL 60005-4516.
i. ANSI/ASTM D3332, Standard Test Methods for Mechanical-Shock Fragility of Products, Using Shock Machines; Information Handling Services.
j. Fackler, Warren C., “Equivalence Techniques for Vibration Testing”, SVM-9, The Shock and Vibration Information Center, Naval Research Laboratory, Washington, D.C., 1972.
k. Miles, J., “On Structural Fatigue Under Random Loading”, J. Aeronaut. Sci. 21, 753-762, November 1954.
(Copies of Department of Defense Specifications, Standards, and Handbooks, and International Standardization Agreements are available online at Requests for other defense-related technical publications may be directed to the Defense Technical Information Center (DTIC), ATTN: DTIC-BR, Suite 0944, 8725 John J.
Kingman Road, Fort Belvoir, VA 22060-6218, 1-800-225-3842 (Assistance--selection 3, option 2), and the National Technical Information Service (NTIS), Springfield, VA 22161, 1-800-553-NTIS (6847).
METHOD 516.8, ANNEX A
MEASUREMENT SYSTEM CHARACTERIZATION AND BASIC PROCESSING
1. SINGLE SHOCK EVENT MEASUREMENT SYSTEM CHARACTERIZATION AND BASIC PROCESSING
The following paragraphs discuss basic measurement system acquisition characteristics, followed by a discussion on the correct identification of the parts of a measured shock (in particular the duration of a shock). Information in Annex A is essential for the processing of measured data for a laboratory test specification.
1.1 Measurement System and Signal Conditioning Parameters
The data recording instrumentation shall have flat frequency response to the maximum frequency of interest (f_Max). If f_Max is not specified, a default value of 10 kHz is recommended for acquisition at each measurement location. Defining f_AA as the 3 dB half-power point cut-off frequency of the low-pass analog anti-alias filter, f_Max < f_AA is implied to maintain flat frequency response. The digitizing rate must be at least 2.5 times the filtering frequency f_Max. Note that when measurements of peak amplitude are used to qualify the shock level, a sample rate of at least 10 times the filtering frequency (100 thousand samples per second for the default case) is required. For SRS considerations, a measured shock should be acquired at 10 times the filtering frequency or resampled to 10 times the filtering frequency. It is imperative that a responsibly designed signal conditioning system be employed to reject the possibility of any aliasing. Analog anti-alias filters must be in place before the digitizing portion of the signal conditioning system.
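The numeric rules of paragraph 1.1 can be collected in a small helper (an illustrative sketch, not part of this Method; the function names are hypothetical):

```python
def required_sample_rate(f_max_hz, peak_or_srs=False):
    """Minimum digitizing rate: at least 2.5x the filtering frequency f_Max
    in general, and at least 10x f_Max when peak amplitudes or SRS
    computations are used to qualify the shock level."""
    factor = 10.0 if peak_or_srs else 2.5
    return factor * f_max_hz

def anti_alias_ok(f_max_hz, f_aa_hz):
    """Flat response requires the anti-alias cut-off f_AA above f_Max."""
    return f_max_hz < f_aa_hz

# Default case, f_Max = 10 kHz:
general_rate = required_sample_rate(10_000)            # 25 ksps
peak_rate = required_sample_rate(10_000, True)         # 100 ksps
```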
The selected anti-alias filtering must have an amplitude attenuation of 50 dB or greater, and a pass band flatness within one dB, across the frequency bandwidth of interest for the measurement (see Figure 516.8A-1a). Subsequent re-sampling for either up-sampling (interpolation) or down-sampling (decimation) must be in accordance with standard practices and consistent with the analog anti-alias configuration.
Figure 516.8A-1a. Filter attenuation (conceptual, not filter specific).
The end-to-end alias rejection of the final digitized output must be shown to meet the requirements in Figure 516.8A-1a. The anti-alias characteristics must provide an attenuation of 50 dB or greater for frequencies that will fold back into the bandwidth of interest (passband). Generally, for validly acquired digital shock time history data, spectral data including SRS plots are only presented for frequencies within the passband (up to f_Max). However, this restriction is not to constrain digital data validation procedures that require assessment of digitally acquired data to the Nyquist frequency (either for the initial Analog to Digital Conversion (ADC) or subsequent re-sampled sequences). It should be noted that it is possible that certain sensor/signal conditioning systems may display substantial “out-of-band” frequency content, i.e., greater than f_Max but less than the Nyquist frequency, in digital processing. For example, a Fourier spectra estimate over the duration of the shock may display “general signal” to “noise” that seemingly contradicts the filter attenuation criterion displayed in Figure 516.8A-1a. In this case, the signal conditioning system must be subject to the “verification of alias rejection” described in the paragraph to follow.
If the signal conditioning system is verified as non-aliasing, then the substantial frequency content between f_Max and the Nyquist frequency can be digitally filtered out if desired. Verification of alias rejection should start by establishing the dynamic range within the pass band in terms of the signal-to-noise ratio (SNR). The voltage-based SNR = 20 log10(V_FullScale / V_NoiseFloor) must be ≥ 60 dB. Once sufficient SNR is verified, the alias rejection characteristics may be determined using an input sine wave with a magnitude of 0.5 full scale range and at the lowest frequency range that can impinge, i.e., be aliased into f_Max, and then confirming (using the IEEE 1057 sine wave test procedure or through inspection of the time domain data) that the alias rejection is sufficient at this frequency for the signal conditioning system. For a conventional multi-bit ADC, such as a flash or successive approximation design, if a 100 thousand sample/second digitizing rate is used, for example, then f_Nyquist = 50 kHz. Theory says that if a signal above the Nyquist frequency is present, it will “fold over” into a frequency below the Nyquist frequency. The equation is:
Fa = |(Fs × n) - F|, where
Fa = frequency of “alias”
F = frequency of input signal
Fs = sample rate
n = integer multiple of sample rate (Fs) closest to input signal frequency (F)
Hence, the lowest frequency range that can fold back into the 10 kHz passband is from 90 kHz to 110 kHz. It should be noted that Sigma Delta (SD) digitizers “oversample” internally at a rate several times faster than the output data rate, and that analog anti-alias filtering is still required. For illustrative purposes, consider an example for a SD digitizer with a bandwidth of interest up to 10 kHz that samples internally at f_s = 800 thousand samples/second.
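The fold-over equation Fa = |(Fs × n) - F| can be sketched directly (an illustration; `round` picks the integer multiple n of Fs closest to F):

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Fa = |n*Fs - F|, with n the integer multiple of the sample rate Fs
    closest to the input signal frequency F."""
    n = round(f_signal_hz / f_sample_hz)
    return abs(n * f_sample_hz - f_signal_hz)

# 100 ksps example from the text: content between 90 and 110 kHz
# folds into the 0-10 kHz passband.
fa_low = alias_frequency(95_000, 100_000)    # 5 kHz alias
fa_high = alias_frequency(108_000, 100_000)  # 8 kHz alias
```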
The internal analog-based Nyquist frequency by definition is 400 kHz; hence the analog anti-alias filter should attenuate by 50 dB or more content that can fold back into the 10 kHz pass band (790 kHz to 810 kHz and similar bands that are higher in frequency). Figure 516.8A-1b illustrates sampling frequencies, Nyquist frequencies, and frequency bands that can fold back into the bandwidth of interest for both conventional and oversampling digitizers, such as the Sigma Delta. Observe that for the example SD design, there is significant bandwidth above the 10 kHz desired f_Max and below the Nyquist rate that is not useable, due primarily to quantization error, an artifact of the single-bit SD design. The output of a SD ADC will be digitally filtered and resampled, yielding a new effective sampling rate f_DR, which in turn yields a new Nyquist rate for the decimated signal of f_DR/2. Through careful selection of the digital filter cutoff frequency, the majority of noise between f_DR/2 and f_s is removed while maintaining a nearly flat frequency response through f_Max. The SD oversampling rate OSR = f_s / f_DR, which is directly correlated to dynamic range, is one of several design parameters for a SD ADC. Most reputable vendors will provide a detailed specification sheet associated with their products; however, it is strongly recommended that one verify aliasing rejection and noise floor characteristics as recommended above prior to employing any signal conditioning/digitizing system in the acquisition of critical field data.
Figure 516.8A-1b. Illustration of sampling rates and out-of-band “fold over” frequencies for conventional and oversampling (Sigma-Delta) based data acquisition systems.
1.2 Measurement Shock Identification
A “simple shock” is being addressed in this Method (excluding Procedure VIII and the example of a complex shock provided in Annex B), i.e., the impulse force input defines a single “event” arising from a characteristic phenomenon. A “simple shock” is defined by a measurement, e.g., acceleration, with three characteristic regions:
a. An initial low amplitude stationary random measurement, termed the measurement system noise floor.
b. A series of erratic high amplitude decaying measurement amplitudes, termed the shock.
c. A comparatively low level stationary measurement at or just above the instrumentation noise floor, termed the post-shock noise floor.
NOTE: If periodic components or non-Gaussian behavior are present in the measurement system noise floor, the signal conditioning system needs to be examined. If periodic components are present in the post-shock noise floor but the general amplitude is relatively stationary, it is indicative of mounting/materiel resonance response. A trained analyst needs to decide the importance of such resonance information in a laboratory test specification. This decision should be based upon the lowest mounted fundamental frequency of the materiel. In general, shock information should not be unduly extended in order to accommodate the full extent of the resonant “ringing” behavior. It is always imperative that the data be carefully analyzed to ensure the measurement is free of corruption, and that the nature of the event is physically well grounded. This subject is discussed in greater detail in Annex B.
The example that follows will illustrate initial time domain assessment of a typical transient acceleration time history. Annex B will provide frequency domain and more advanced assessment. Figure 516.8A-2 displays the measurement shock that will be considered for proper processing in both the time and frequency domains. The phenomenon producing the shock has an initial high frequency/high energy input, followed by a form of ringing or resonance decay. The measurement shock exists between 617 milliseconds and 1560 milliseconds.
Figure 516.8A-2. Example acceleration time history. (Mechanical shock, 6000 Hz bandwidth; acceleration in G versus time in seconds.)
1.3 Effective Pulse Duration for Non-Classical Shocks
When considering the two non-classical shock alternatives discussed in paragraph 1.2, the analyst (and ultimately the test operator) will need to consider the effective durations (including the overall shock duration T_e and the concentration of energy duration T_E) for the pulse to be replicated. In the case in which TWR is selected as the implementation method, the duration of the transient event is straightforward. The test operator should simply identify the pre-pulse and post-pulse noise floor levels that will indicate reasonable start and end times for the TWR based event. In the case in which a reference transient is to be synthesized based upon an SRS reference, the SRS reference must come with recommended effective durations established by the analyst's review of the data ensemble used to develop the SRS reference. The analyst may view the effective durations of a transient event from a number of perspectives.
However, the final guidance on effective durations provided to the test operator with the reference SRS should be simplified to manageable parameters that the test operator will be able to implement efficiently. Providing the test operator both the shock duration T_e and the concentration of energy duration T_E is recommended for any SRS based laboratory shock test. With the SRS magnitude controlling the synthesized pulse magnitude, and both T_e and T_E defining energy distribution, the synthesized pulse should resemble a measured pulse having the same SRS. The concept of effective durations is discussed further in the following paragraphs. Annex B contains more information on determining T_e and T_E based upon easily computed “instantaneous root-mean-square” computations.
As mentioned in paragraph 1.2, a “simple shock” (refer to Figure 516.8A-3) is defined in terms of three time intervals:
a. The first time interval, T_pre, is usually well defined and occurs prior to the shock, where the measurement represents the measurement system noise floor.
b. The second interval, T_e, is termed the shock duration and is defined as the duration from the zero crossing for the first measurement acceleration “above the instrumentation noise floor” until the perceived “termination” of the shock. This interval contains the interval with the highest concentration of energy, T_E, defined as the minimum length of time that contains any time history magnitudes exceeding in absolute value A_Pk/CF (see detailed discussion below).
c. The third time interval, T_post, is the time from the “termination” of the shock until the measurement signal approaches or reaches levels of the measurement system noise floor.
(In general, shocks over reasonable characterization/identification times seldom decay to the levels of the pre-shock noise floor.) This third time interval can be termed the post-shock noise floor, which is above, but includes, the measurement system noise floor.
Figure 516.8A-3. Example simple shock time history with segment identification. (Intervals T_pre, T_e, T_E, and T_post marked; mechanical shock, 6000 Hz bandwidth, acceleration in G versus time in seconds.)
In general, for further processing it is convenient, if possible, to select the interval T_pre of duration equal to T_post, and these intervals should be reasonably comparable or equal in length to T_e. The same amount of “time/amplitude” information is then available in all three intervals.
1.3.1 Calculation of T_e.
There is historical precedence in which the shock duration T_e was defined as “the minimum length of continuous time that contains the root-mean-square (RMS) time history amplitudes exceeding in value ten percent of the peak RMS amplitude associated with the shock event. The short-time averaging time for the unweighted RMS computation is assumed to be between ten and twenty percent of T_e.” The previous definitions also included discussion relative to the relationship between T_e and T_E, at which point it was recognized that this relationship is dependent upon the “shape” of the true RMS of the time history. Although the previous definition of T_e is a useful analysis tool, T_e is now defined from the zero crossing for the first measurement acceleration “above the instrumentation noise floor” until the perceived “termination” of the shock, as discussed above. This parameter provides a reasonable bound on the interval in which the reference time history contains measurable energy levels above the noise floor.
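The historical running-RMS definition of T_e quoted above can be sketched as follows (an illustration only; the window length, expressed here in samples, and the ten percent threshold remain analyst choices):

```python
import math

def short_time_rms(x, window):
    """Unweighted running RMS over a sliding window of `window` samples."""
    out = []
    for i in range(len(x) - window + 1):
        seg = x[i:i + window]
        out.append(math.sqrt(sum(v * v for v in seg) / window))
    return out

def shock_duration_te(x, window, threshold_ratio=0.10):
    """Historical T_e estimate in samples: the span over which the running
    RMS exceeds ten percent of the peak running RMS of the record."""
    rms = short_time_rms(x, window)
    peak = max(rms)
    above = [i for i, v in enumerate(rms) if v > threshold_ratio * peak]
    return (above[-1] - above[0] + window) if above else 0
```

Per the quoted definition, the averaging window would be chosen at roughly ten to twenty percent of the resulting T_e, which may require an iteration or two.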
In synthesizing the reference pulse for an SRS based laboratory test, the user should set the window length (time-domain block size) containing the reference signal to T_e, or the nearest programmable interval greater than T_e. Observe that, unlike the field measurements, the noise floor of the synthesized signal will actually be zero. Zero padding outside of the interval T_e will have no effect on the SRS computation. In the event T_e (the shock duration) is not provided, define T_e = 2.5/f_min, where f_min is the lowest frequency in the reference SRS (this will allow a minimum duration sufficient for up to 5 half-cycles of the lowest frequency component in the reference time history). T_e includes both the primary "concentration of energy" and an "extension of energy" duration.

1.3.2 Calculation of T_E.
T_E represents a "concentration of energy" duration. There is historical precedence in which T_E was defined to be the minimum length of time that contains any time history magnitudes exceeding in absolute value one-third of the shock peak magnitude absolute value, i.e., A_pk/3, associated with the reference time history. This assumes the shock peak amplitude, A_pk, has been validated, e.g., it is not an "instrumentation noise spike." A revised definition of T_E considers the crest factor, CF = A_pk/RMS, associated with the single shock or shock data ensemble from the reference SRS. The crest factor is computed in small intervals over the duration T_e (e.g., T_e/10), and the "maximum crest factor" computed over the individual intervals is defined as CF. This yields a revised definition of T_E based on the minimum length of time that contains any time history magnitudes exceeding in absolute value A_pk/CF.
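The maximum crest factor and the resulting T_E can be sketched numerically as below. This is a minimal illustration of the definitions in the text, assuming a sampled acceleration array; the function names and the choice of ten subdivisions are illustrative, not prescribed.

```python
import numpy as np

def max_crest_factor(a, n_intervals=10):
    """Maximum crest factor (peak/rms) over n_intervals equal
    subdivisions of the shock duration (here, the whole array)."""
    segments = np.array_split(np.asarray(a, dtype=float), n_intervals)
    cfs = [np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))
           for s in segments if np.any(s)]  # skip empty/all-zero segments
    return max(cfs)

def energy_duration(a, fs, cf):
    """T_E: minimum length of time containing all samples whose
    absolute value meets or exceeds A_pk / CF."""
    a = np.asarray(a, dtype=float)
    thr = np.max(np.abs(a)) / cf
    idx = np.nonzero(np.abs(a) >= thr)[0]
    return (idx[-1] - idx[0]) / fs
```

Varying `cf` over several candidate values and tabulating the resulting `energy_duration` outputs reproduces the sensitivity study recommended in the next paragraph.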
Even though the crest factor is a concept from stationary random vibration, applied when Gaussian or particularly non-Gaussian time histories are considered, its use for a shock can be justified if it is realized that peak amplitudes are of a random nature and come at random times. The last amplitude exceeding A_pk/CF defines a boundary in time between greater and lesser energy concentration that can be quite robust. The analyst must, however, avoid selecting a random amplitude spike at a time far from the major energy concentration, i.e., too strict an application of the concept for determining T_E. Generally, the larger the CF, the greater T_E, so selecting several CF's and comparing the resulting T_E's is recommended. For several shocks, i.e., an ensemble, varying CF and assembling a table of T_E's should provide the analyst a robust method for establishing the duration T_E for synthesis. Plots of CF versus T_E would indicate the sensitivity between the two variables. In the event T_E is not provided, the test operator should assume the CF to be 3, and synthesize a pulse such that T_E for the synthesized reference time history is characterized by the minimum length of time that contains any time history magnitudes exceeding in absolute value A_pk/3. Having established a nominal value for T_E, the synthesis of a representative pulse shall hold T_E within a tolerance of 0.8·T_E ≤ T_E(synthesized) ≤ 1.2·T_E.

1.3.3 Implementation Considerations.
In summary, it is desired that the reference transient synthesized based upon an SRS reference has reasonably similar temporal characteristics to that of the field data from which the SRS reference was derived.
The analyst developing SRS based test criteria should carefully investigate the effective duration of the ensemble of transient events from which the final test criteria were derived, and document the results along with the SRS. The laboratory technician synthesizing the reference pulse should then be able to consider the variables CF, T_e, and T_E associated with effective duration in the synthesis process. As an example, the above durations and associated time intervals are displayed for the typical simple shock in Figure 516.8A-3, where the pre-shock noise floor is T_pre ≜ 0 to 0.617 sec and the post-shock noise floor is defined as T_post ≜ (T_pre + T_e) to (T_pre + T_e + T_pre). T_pre and T_post were taken to be of the same duration for processing comparison convenience. T_e = 0.943 sec is identified by the dashed lines between 0.617 and 1.56 seconds. The maximum crest factor, computed in intervals of T_e/10, was CF ≅ 5. |A_pk|/CF is identified by the horizontal lines based on CF ≅ 5 and A_pk = 98.17 G (which occurred at time T_pk = 0.735 sec). T_E ≅ 0.230 sec is identified by the interval between the first occurrence of |A_pk|/CF at approximately 0.625 seconds and the last occurrence of |A_pk|/CF at approximately 0.860 seconds.

1.4 Shock Response Spectrum
The SRS, either the acceleration maximax SRS or the pseudo-velocity maximax SRS, is the primary "frequency domain" descriptor that links time history shock amplitudes to some physical model, i.e., the shock model. The paragraphs below provide a description of the SRS options, in addition to SRS estimates that may be used to imply the validity of the measured shock information.
1.4.1 Processing Guidelines
The maximax SRS value at a given undamped natural oscillator frequency, f_n, describes the maximum response (positive, negative, primary, and residual) of the mass of a damped single degree of freedom (SDOF) system at this frequency to a shock base input time history, e.g., acceleration, of duration T_e (see Figure 516.8-1 for the appropriate model). Damping of the SDOF is typically expressed in terms of a "Q" (quality factor). Common selections for Q are Q = 50, representing 1 percent critical damping; Q = 10, representing 5 percent critical damping; and Q = 5, representing 10 percent critical damping of the SDOF. For processing of shock response data, the absolute acceleration maximax SRS has become the primary analysis descriptor. In this description of the shock, the maximax acceleration values are plotted on the ordinate, with the undamped natural frequency of the SDOF system plotted along the abscissa. The frequency range over which the SRS is computed (i.e., the natural frequencies of the SDOF system filters), as a minimum, includes the data signal conditioning bandwidth, but should also extend below and above this bandwidth. In general, the "SRS Natural Frequency Bandwidth" extends from an octave below the lowest frequency of interest up to a frequency at which the "flat" portion of the SRS spectrum has been reached (which may require going an octave or more above the upper signal conditioning bandwidth). This latter SRS upper frequency requirement, f_SRSmax, helps ensure no high frequency content in the spectrum is neglected, and is independent of the data bandwidth upper frequency, f_max. As a minimum, this SRS upper frequency should exceed f_max by at least ten percent, i.e., 1.1·f_max. The lowest frequency of interest is determined by the frequency response characteristics of the mounted materiel under test.
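The maximax SRS computation described above can be sketched numerically. The standard's processing example uses a ramp-invariant digital filter set; as a simpler illustration only, the sketch below discretizes the absolute-acceleration SDOF transfer function with a bilinear transform (an assumption, not the standard's prescribed filter), so results near the upper natural frequencies will differ slightly from a ramp-invariant implementation.

```python
import numpy as np
from scipy import signal

def maximax_srs(accel, fs, freqs, Q=10.0):
    """Maximax absolute-acceleration SRS of base input `accel`
    sampled at `fs`, over SDOF natural frequencies `freqs` (Hz)."""
    zeta = 1.0 / (2.0 * Q)            # Q = 10 -> 5 percent critical damping
    srs = np.empty(len(freqs))
    for i, fn in enumerate(freqs):
        wn = 2.0 * np.pi * fn
        # absolute-acceleration SDOF transfer function, discretized
        b, a = signal.bilinear([2 * zeta * wn, wn ** 2],
                               [1.0, 2 * zeta * wn, wn ** 2], fs)
        y = signal.lfilter(b, a, accel)
        srs[i] = np.max(np.abs(y))    # maximax over primary and residual
    return srs
```

Note that the sample rate must satisfy the paragraph's 10× rule (fs greater than ten times the highest SDOF natural frequency) for the discretization to remain accurate.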
Define f_1 as the first mounted natural frequency of the materiel (by definition, f_1 will be less than or equal to the first natural frequency of a materiel component such as a circuit board) and, for laboratory testing purposes, define the lowest frequency of interest as f_min < f_1/2 (i.e., f_min is at least one octave below f_1). f_SRSmin can then be taken as f_min. The maximax SRS is to be computed over the time range T_e and over the frequency range from f_min to f_SRSmax > 1.1·f_max. From paragraph 1.1 above, the relationship of f_max to f_AA is defined; however, for SRS computation, if the sample rate F_s < 10·f_SRSmax, the time history must be re-sampled to a rate F_s,r > 10·f_SRSmax. The SRS frequency spacing in [f_min, 1.1·f_max] is left to the discretion of the analyst, but should not be coarser than one-twelfth octave and, in general, of a proportional band spacing as opposed to a fixed band spacing (proportional band spacing is more in tune with the materiel modal frequency spacing, and results in fewer natural frequencies for processing). A more complete description of the shock (potentially more useful for shock damage assessment) can be obtained by determining the maximax pseudo-velocity response spectrum. The maximax pseudo-velocity may be plotted on log-log paper with the abscissa as SDOF natural frequency, and the ordinate as pseudo-velocity in units of velocity.
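A proportional (fractional-octave) set of SDOF natural frequencies, such as the one-twelfth-octave spacing recommended above, can be generated as below; the function name and arguments are illustrative.

```python
import numpy as np

def proportional_band_frequencies(fmin, fmax, fraction=12):
    """Natural frequencies from fmin up to (at most) fmax at a
    1/fraction-octave proportional spacing."""
    n = int(np.floor(fraction * np.log2(fmax / fmin))) + 1
    return fmin * 2.0 ** (np.arange(n) / fraction)
```

For example, spanning 10 Hz to 1.1·f_max at one-twelfth-octave spacing yields roughly 12 natural frequencies per octave, far fewer points than a fixed-band grid of comparable low-frequency resolution.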
Alternatively, the maximax pseudo-velocity response spectrum can be plotted on four-coordinate paper where, in pairs of orthogonal axes, the maximax pseudo-velocity is the ordinate and the undamped natural frequency is the abscissa, with the maximax absolute acceleration and the maximax pseudo-displacement plotted on a pair of orthogonal axes, all plots sharing the same abscissa (SDOF natural frequency). This form of pseudo-velocity SRS plot, as seen in Figure 516.8A-4, is widely accepted in Civil Engineering earthquake ground motion specifications, but historically has not been as common for mechanical shock display or specification.

Figure 516.8A-4. Maximax pseudo-velocity SRS estimates for shock and noise floor segments.

The maximax pseudo-velocity at a particular SDOF undamped natural frequency is thought to be more representative of the damage potential for a shock since it correlates with stress and strain in the elements of a single degree of freedom system (paragraph 7.1, references e and f). In laboratory testing to meet a given specification with undesignated Q, use a Q value of 10 and a second Q value of 50 for comparison in the processing (see Figure 516.8A-4). Using two Q values, a damped value and a value corresponding to light damping, provides an analyst with information on the potential spread of maximum materiel response.
Recommend the maximax absolute acceleration SRS be the primary method of display for the shock, with the maximax pseudo-velocity SRS the secondary method of display. This is useful in cases in which it is desirable to be able to correlate damage of simple systems with the shock. Two additional recommendations related to the validity of the measurement are as follows:
a. A pre-shock SRS of the measurement system noise floor over interval T_pre should be computed, along with an SRS over the return-to-noise-floor interval T_post, i.e., the post-shock noise floor, and displayed on the same plot. These noise SRSs help to confirm the overall validity of the measurement if the "Pre" and "Post" times allow adequate accuracy for the SRS estimates, i.e., SRS estimates over very short time segments may not provide representative maximax SRS amplitudes at low natural frequencies. These SRS estimates should be computed at the Q = 50 damping value (see Figure 516.8A-4). Refer to Annex B, paragraph 3 for additional guidance on establishing criteria for defining the noise floor.
b. For the shock segment, both the maximum positive and maximum negative acceleration and pseudo-velocity SRS estimates should be plotted for a minimum Q value of 10 over the frequency range for which the shock SRS values are displayed (see Figure 516.8A-5). The positive and negative SRS estimates should be very similar in nature, as discussed in paragraph 1.4.2 and illustrated through example in Figures 516.8A-5 and 516.8A-6.
The low Q value should be able to detect acceleration time history anomalies in a manner similar to time history integration. If the positive and negative SRS maximax values are disparate, this could be an indicator of potential measurement system signal conditioning problems.

Figure 516.8A-5. Shock maximum and minimum pseudo-velocity SRS estimates.

Figure 516.8A-6. Shock maximum and minimum acceleration SRS estimates.

1.4.2 Processing Example
For the shock time history displayed in Figure 516.8A-3, the sample rate was 51200 samples per second. The bandwidth of the data was from DC to 6000 Hz. The bandwidth of interest was from 10 Hz to 6000 Hz. The time history was re-sampled to 102,400 Hz to ensure a reasonable SRS computation through 10 kHz, as discussed in paragraph 1.4.1. The SRS estimates are actually plotted to 50 kHz to illustrate convergence at the low and high frequency extremes. Since even the slightest bias error influences velocity estimates computed from acceleration data, it is recommended that minor DC bias be corrected as required prior to performing pseudo-velocity calculations (a severe bias error in the acceleration time history may indicate more serious issues such as amplifier and/or transducer saturation, leading to data validity concerns). Quality factors of 10 and 50 were used for computation of the acceleration and pseudo-velocity maximax SRS estimates except where noted. Except where noted, the computations were made with the standard ramp-invariant filter set.
The abscissa of the plots is the undamped natural frequency of the SDOF system at a one-twelfth-octave band spacing. Figure 516.8A-7 contrasts the shock maximax acceleration SRS for Q values of 10 and 50, and for both the measurement system noise floor and the post-shock noise floor at a Q of 50. Figure 516.8A-4 provides the related information for the maximax pseudo-velocity SRS estimates. As expected, the shock is substantially greater than either noise floor SRS estimate. Ideally, the noise floor SRS should be 12 dB or more below the acceleration SRS of the shock event across the frequency range of interest.

Figure 516.8A-7. Maximax acceleration SRS estimates for shock and noise floor segments.

As a time history validity check, Figure 516.8A-5 and Figure 516.8A-6 provide the positive and negative SRS estimates. In these two figures neither the positive nor the negative SRS values dominate, which implies the time history information is valid.

1.5 Frequency Domain Identification - Energy Spectral Density (ESD)
The ESD estimate is a properly scaled squared magnitude of the Fourier Transform of the total shock. Its counterpart, the Fourier Spectrum (FS), is, in effect, the square root of the ESD, and may be useful for display but will not be discussed here. The importance of the ESD estimate is its properties relative to input/output system computations. That is, for two acceleration measurements related as input and output, either (1) an estimate of the transfer function (magnitude/phase) between the input and output is possible, or (2) a transmissibility estimate (magnitude alone) can be determined by ratioing the output ESD over the input ESD.
Further details and illustration of ESD estimates are provided in Annex B.

1.6 Single Event / Multiple Channel Measurement Processing Guidelines
When multiple measurements are made for a single configuration, pre-processing should generally proceed as if multiple channel analysis is to be performed. In particular, the pre-shock noise floor, the shock event, and the post-shock noise floor should be of the same duration, and the duration for the shock event should be determined based upon the "longest" duration measurement. Since SRS and ESD processing are generally insensitive to differences in the duration of significant energy content, such selection will allow multi-channel processing. It is imperative that for cross-energy spectral density estimates and energy transfer function estimates, the pre-processing, e.g., event selection durations, filtering, etc., on all measurement channels be the same. Pre-processing across multiple measurement channels involving integration of acceleration to determine velocity needs to correspond to the physics of the configuration. For high signal-to-noise ratios, useful information can be obtained from cross-spectral and transfer function estimates even though random error is high.

1.7 Measurement Probabilistic / Statistical Summary
Recommend that, whenever possible, two or more equivalently processed response measurements or test estimates be combined in some statistical manner for summary. This summary then can be used for test specification purposes to provide a level of confidence that the important information in the measurement or test has been captured. Paragraph 7.1, reference b, discusses some options in statistically summarizing processed results from a series of measurements
or tests. The best summary option is generally dependent on the size of the sample. Processed results from the SRS or ESD are typically logarithmically transformed to provide estimates that tend to be more normally distributed, e.g., estimates in dB. This transformation is important since often very few estimates are available from a test series, and the probability distribution of the untransformed estimates cannot be assumed to be normally distributed. In virtually all cases, combination of processed results will fall under the category of small sample statistics, and needs to be considered with care against other parametric or less powerful nonparametric methods of statistical analysis. Annex C addresses the appropriate techniques for the statistical combination of processed test results as a function of the size of the sample, and provides an example.

1.8 Other Processing
Other descriptive processes that tend to decompose the shock into component parts, e.g., product model, time domain moments (TDM), wavelets, SRS modal and power energy methods (PEM), etc., may be useful, but details of such descriptive processes are beyond the scope of this document, and generally fall in the area of analytical modeling. TDM and PEM show promise of being able to characterize and compare individual shocks among sets of similar shock time traces and perhaps provide insight into the cause of materiel failure from shock. TDM (paragraph 7.1, reference k) assessment provides for characterization of the "form" of measured response with respect to both time and frequency. PEM (paragraph 7.1, reference l) attempts to estimate the energy absorbed within a simple modal structure of the materiel when the materiel's base attachment is the source of the shock input (or power input) to the materiel.
PEM seems most useful for power comparison among similar measurements for shock, and has units (force·velocity) that relate to damage potential when applied to base motion relative to mass motion.

MIL-STD-810H METHOD 516.8 ANNEX B 516.8B-1
METHOD 516.8, ANNEX B
GUIDELINES FOR ADDITIONAL SHOCK TIME HISTORY VALIDATION AND PROCESSING

1. INTRODUCTION.
This Annex provides additional guidelines for shock time history assessment, including validation, i.e., to detect any measurement system anomalies that would invalidate the measurement. For massive field shock measurement programs where time and budget constraints do not allow validation of individual shocks, at least one shock time history from each measurement channel needs to be individually validated, and the time history for each subsequent shock from that measurement channel examined for gross anomalies. Consistency relative to the test specification for processed information is acceptable as long as any inconsistency is investigated under shock time history validation. For example, the Normal Tolerance Limit (Annex C), when properly applied, should be used only for collections of SRS estimates that have a similar shape; otherwise the variance is inflated beyond what might exist for field measured data under repeated experimental measurements.

2. COMPLEX SHOCKS.
This Method and this Annex are focused upon simple shocks such as in Figure 516.8A-1 (repeated below as Figure 516.8B-1). Many shocks are not simple in nature. Figure 516.8B-2 displays a complex shock. The phenomenon producing this shock would appear to have three "rebounds." If it can be traced to a distinct phenomenon, the last of the four shocks might be separated out as a simple shock from the other three.
A trained analyst and a clear understanding of the shock producing phenomenon are needed to justify any such decomposition of this complex shock. It probably would not be possible to use SRS synthesis for laboratory test, leaving TWR as the only option for laboratory testing. Cases in which it would appear that several "simple shocks" are in series should rely upon a trained analyst to identify individual "simple shocks" in concert with the goals of the characterization, analysis, and specification. Any decomposition of a series of shocks should be related to the phenomenon producing the shock. For example, a catapult shock represents a non-simple shock that could be specified as two independent simple shocks, separated in time by approximately three seconds with an intervening transient vibration. See Figure 516.8-11. Gunfire Shock, Method 519.8, presents information on a repeated shock, the repetition rate being the gun-firing rate. The direct replication method is preferred over the synthesis method when non-simple shocks are being considered. Generally, this Method has no recommendations beyond the use of TWR for laboratory test specification and laboratory testing for such complex shocks. It is important to maintain the integrity of the complex shock to the extent possible.

Figure 516.8B-1. Shock time history with segment identification and T_e and T_E time intervals illustrated.

Figure 516.8B-2. A complex shock.

3. ADDITIONAL SIMPLE SHOCK PROCESSING AND VALIDATION.
3.1 Introduction.
In Annex A, paragraph 1.3 of this Method, the simple shock time segments for the instrumentation noise floor, the shock, and the post-shock noise floor are identified. In addition, T_e and T_E are specified. Since the SRS is the primary analysis descriptor, both maximax acceleration and maximax pseudo-velocity estimates of the segments are displayed and interpreted. For verification purposes, the shock maximax positive and negative SRS estimates are displayed. Comparability of these estimates showed no signs of the shock being invalid. In this paragraph the following analysis will be undertaken, providing (1) additional analysis of the shock, and (2) additional information regarding the validity of the shock. In particular:
a. The time history instantaneous root-mean-square is displayed.
b. The shock velocity and displacement are displayed.
c. The time history ESD estimate is displayed.
Annex A, paragraphs 1.7-1.8 of this Method reference more advanced processing that is applicable to a single simple shock or useful in summarizing the information in an ensemble of shocks. No such advanced processing is provided in this Method.

3.2 Instantaneous Root-Mean-Square (RMS).
The "instantaneous rms" provides useful information that may not be apparent from examining the amplitude time history. In order to establish shock time intervals for processing, it is useful to consider the "instantaneous rms" of a measurement level. For the measurement a(t), 0 ≤ t ≤ T, the instantaneous rms level is defined over the same interval as follows:

a_irms(t) = √(a²(t)) ≥ 0, for 0 ≤ t ≤ T,

where "irms" stands for "instantaneous root-mean-square level". It is assumed that any DC offset in a digitized measurement signal, a(t), has been removed prior to computing a_irms. Figure 516.8B-3 displays the irms in absolute terms and in dB. In the dB display, no negative values are displayed. Observe that a_irms is computed point by point.
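The point-by-point irms and its dB display can be sketched as below. The 1 g rms reference level and the clipping of negative dB values mirror the Figure 516.8B-3 description; the function names are illustrative.

```python
import numpy as np

def irms(a):
    """Instantaneous rms: sqrt(a(t)^2), computed point by point."""
    return np.abs(np.asarray(a, dtype=float))

def irms_db(a, ref=1.0, floor_db=0.0):
    """irms in dB relative to `ref`; values below floor_db are
    clipped, mirroring 'no negative values displayed'."""
    with np.errstate(divide="ignore"):
        db = 20.0 * np.log10(irms(a) / ref)
    return np.maximum(db, floor_db)
```

The maximum of `irms(a)` over the record is then the validated peak A_pk referenced in Annex A, paragraph 1.3.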
Therefore, A_pk, as referenced in paragraph 1.3 in Annex A of this Method, will be the maximum computed a_irms. From the example of Figure 516.8B-3, it is clear that the "signal" approaches 40 dB, while the "noise floor" is on the order of 3 dB, roughly a signal-to-noise ratio of 37 dB. Identifying the time of the beginning of the post-shock noise floor, T_post, is a matter for an experienced analyst in concert with the objectives of the shock assessment. Almost assuredly, the post-shock instantaneous rms is greater than the pre-shock instantaneous rms, i.e., a_irms(T_post) > a_irms(t) for t ≤ T_pre, since the measurement seldom returns to the measurement system noise floor levels because of the change of boundary conditions as a result of the shock. If there is indication of periodic behavior in the time trace for t > T_pk, the analyst must decide if analysis over this periodic "ringing" behavior is important for the shock specification. For SRS shock synthesis, it will be difficult to capture such periodic behavior and duplicate it in testing. For waveform replication, this periodic "ringing" behavior should be retained over a minimum of ten cycles if possible. For establishing the end of T_e for simple, "well-behaved," i.e., sharply decaying shocks, it is recommended that the analyst examine times t > T_pk at which a_irms(t) is at least 20 dB (preferably 40 dB) below a_irms(T_pk) and, based upon judgment, select the zero-crossing defining the end of T_e (or beginning of T_post). Generally, criteria for defining and automatically determining T_post are left to the discretion of the analyst, and the selection of T_post is far less consequential in analysis than the selection of T_pre. An estimate of the measurement system noise floor level will be useful in establishing T_post.
If arbitrary specification of a_irms(t) levels is not feasible, then a relatively robust way of specifying the end of a shock and the beginning of the post-shock noise floor is to begin at the end of the measured data, T, and compute the mean rms signal level until a noticeable change in level is apparent. This can be accomplished by selecting an averaging time, e.g., ~5 percent of the estimated duration of the shock, and computing a moving average of time history values in the measurement system noise floor and post-shock noise floor, where the average is shifted at least ten times within an averaging time window, ideally computing the average at each time point. Usually, plotting these rms levels leads to simple identification of T_post. Specifying the normalized random error for the rms estimate can enhance this procedure.

Figure 516.8B-3. Shock time history instantaneous root-mean-square.

This error is given by ε_r = 1/(2√(BT)) for B the bandwidth and T the averaging time. A 95 percent confidence interval is defined by σ̂_x(1 − 2ε_r) ≤ σ_x ≤ σ̂_x(1 + 2ε_r). For ε_r ≈ 0.025, then 0.95·σ̂_x ≤ σ_x ≤ 1.05·σ̂_x. Estimating both the measurement system noise floor and post-shock noise floor levels (standard deviations) for a specified normalized random error, e.g., 0.025, computing the 95 percent confidence intervals, and determining the degree of overlap of the measurement system noise floor and post-shock noise floor confidence intervals can provide an analytical criterion for specifying the end of a shock. Excessive noise that may not be Gaussian in form in the post-shock noise floor may be an indication of a degraded instrumentation signal conditioning system as a result of the shock, e.g., broken accelerometer sensing element, amplifier slew rate exceeded, etc.
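The normalized random error and the resulting confidence interval on an rms (standard deviation) estimate can be computed directly; a minimal sketch:

```python
import math

def normalized_random_error(B, T):
    """Normalized random error of an rms estimate:
    eps_r = 1 / (2 * sqrt(B * T)), with B the bandwidth (Hz)
    and T the averaging time (sec)."""
    return 1.0 / (2.0 * math.sqrt(B * T))

def rms_confidence_interval(sigma_hat, eps_r):
    """Approximate 95 percent confidence interval on the true rms,
    sigma_hat * (1 - 2*eps_r) <= sigma <= sigma_hat * (1 + 2*eps_r)."""
    return sigma_hat * (1.0 - 2.0 * eps_r), sigma_hat * (1.0 + 2.0 * eps_r)
```

Computing such intervals for both the pre- and post-shock segments and checking their overlap gives the analytical end-of-shock criterion described above.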
In this case, the post-shock integrity of the measurement system needs to be validated (see paragraph 4 below). If such computation and subsequent displays are not available, the assessment of the end of the shock and the beginning of the post-shock noise floor can be determined based on examination of a representative sample of the positive and negative peaks in the time history (usually starting from the end of the measurement and avoiding single spurious "noise spikes") without regard to sign. In this case, the maximum peak (positive or negative) can be estimated in absolute units, and then a -20 dB, -30 dB, and -40 dB level down from the validated peak A_pk estimated by A = A_pk·10^(−y/20), for y the desired dB decrement and A representing either a positive or negative peak. Because of the need to balance the normalized random error with the normalized bias error to determine optimum averaging times, it is not recommended that the instantaneous rms values be smoothed through short-time-averaging.

3.3 Shock Velocity/Displacement Validation Criteria.
Two steps are necessary for examining an unprocessed acceleration time history for purposes of validation.
a. The first step is to clearly define the bandwidth of the measurement time history. The signal conditioning configuration and the ESD estimate to be discussed in paragraph 3.4 (below) will be helpful. The time history bandwidth will determine if TWR is a laboratory test option.
b. The second step relates to integration of the time history to see if the velocity and displacement make physical sense.
Velocity can usually be determined from direct integration of the shock acceleration after the shock has had its mean removed (velocity begins at zero and ends at zero), or has been high-pass filtered to remove any DC component and other very low frequency information. Subsequent removal of the velocity mean or DC information in the velocity allows integration of the velocity to get displacement. As a minimum requirement, shock acceleration time traces should be integrated to provide velocity, and the velocity should have a clear physical interpretation, e.g., oscillatory behavior and near zero velocity at the "beginning" and the "end" of the shock. Velocity tends to be quite sensitive to sensor or signal conditioning anomalies that invalidate measurements. Integration of the velocity to obtain displacement should be considered an extended requirement, and reasonable values for displacement should be apparent. The form of the velocity (or displacement) with respect to oscillatory behavior needs to be examined for reasonableness. That is, a form of velocity that displays little oscillatory behavior should be suspect. Figure 516.8B-4 displays velocity computed via mean removal alone. Figure 516.8B-5 displays the results of integrating velocity to arrive at displacement. For displacement, "DC" removal was performed on the velocity time history. Examination of both these plots, knowing the physical nature of the test, shows (1) reasonableness of peak amplitudes, and range from positive to negative values, (2) distinct and substantial oscillatory behavior during the "shock," and (3) characteristic pre- and post-shock noise floor behavior. It would appear that the bandlimited measurement does not have readily identifiable anomalies, and the acceleration time trace can be considered valid for further processing that is designed to either support or refute this validation.

Figure 516.8B-4. Measurement velocity via integration of mean (DC) removed acceleration.
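The mean-removal and integration steps can be sketched as below, using simple rectangular-rule integration (the standard does not prescribe a particular integration rule, so this choice is an assumption).

```python
import numpy as np

def velocity_displacement(accel, fs):
    """Integrate acceleration to velocity and displacement, removing
    the mean (DC) at each stage per the validation procedure."""
    a = np.asarray(accel, dtype=float)
    a = a - a.mean()           # mean (DC) removal before integration
    v = np.cumsum(a) / fs      # velocity, rectangular-rule integration
    v = v - v.mean()           # remove residual DC in the velocity
    d = np.cumsum(v) / fs      # displacement
    return v, d
```

A velocity that shows distinct oscillatory behavior and returns near zero outside the shock interval passes the minimum validation check; a drifting or one-sided velocity is suspect.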
Figure 516.8B-5. Measurement displacement via integration of velocity after mean (DC) removal.

At this point in the analysis, if the velocity and displacement validation checks, particularly the velocity validation check, do not seem to correspond with the physics of the test, a detailed investigation of the reason for the discrepancy must be undertaken. For example, velocities that are not physically realizable call for such an investigation. For one-of-a-kind and expensive tests, it may be possible to recover meaningful data based upon advanced processing techniques.

3.4 ESD Estimate.

The ESD is a single-block periodogram sampled at a uniform set of frequencies distributed over the bandwidth of interest, and displayed as a two-dimensional plot of amplitude units (units²-sec/Hz) versus frequency in Hz. In determining the estimate, the Fast Fourier Transform block size must include the entire shock above the measurement system noise floor, i.e., the effective duration $T_e$; otherwise the low frequency components will be biased. Selection of an analysis filter bandwidth may require padding with zeros beyond the effective duration $T_e$. Zero padding results in a frequency interpolation of the ESD estimate. Generally, a rectangular window will be assumed in the time domain; however, other windows are permissible, e.g., Kaiser, as long as the analyst understands the effects of the window shape in the frequency domain, since time domain multiplication results in frequency domain convolution. The ESD description is useful for comparing the distribution of energy within selected frequency bands among several shocks, provided the analysis frequency bandwidth is the same, and it is realized that the estimates have approximately 100 percent normalized random error.
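A single-block periodogram ESD of this kind can be sketched as follows (a minimal sketch assuming a rectangular window; the function name and the discrete approximation $X(f) \approx \Delta t \cdot \mathrm{FFT}(x)$ are choices made here, not prescribed by the standard):

```python
import numpy as np

def esd_estimate(x, fs, nfft=None):
    """Single-block periodogram ESD (units^2-sec/Hz) of a transient x
    sampled at fs Hz, optionally zero-padded to nfft points for
    frequency interpolation.  Rectangular window assumed."""
    n = len(x)
    nfft = nfft or n
    dt = 1.0 / fs
    X = np.fft.rfft(x, nfft) * dt   # approximate continuous Fourier transform
    esd = np.abs(X) ** 2
    esd[1:-1] *= 2.0                # fold negative frequencies (one-sided)
    f = np.fft.rfftfreq(nfft, dt)
    return f, esd
```

With this scaling, summing the one-sided ESD times the frequency spacing recovers the total energy of the transient (Parseval's relation), which is a convenient sanity check on the normalization; zero-padding via `nfft` interpolates the estimate in frequency without changing that energy.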
Figure 516.8B-6 displays the ESD estimate for the shock time history in Figure 516.8B-1. By either (1) averaging n adjacent ESD ordinates (keeping estimate bias a minimum), or (2) averaging n independent but statistically equivalent ESD estimates, the normalized random error can be decreased by a factor of $1/\sqrt{n}$. Frequency averaging for periodogram estimates is well defined in reference 6.1j. ESD estimates for noise floor segments tend not to be particularly useful for examining the validity of the measurement system because of the nondescript behavior of the noise floor.

For validation purposes, the ESD estimate should display proper frequency domain characteristics. In particular, the DC region should be rolled off if the DC time history component has been removed, and the maximum bandwidth levels should be rolled off if aliasing is not present. If the maximum bandwidth levels show an increase, it is quite possible that aliasing is present, provided the time history has not been previously filtered. For this check, the ESD estimate needs to be computed on a high-passed time history that has not been bandlimited by digital filtering in any way.

Figure 516.8B-6. Shock ESD estimate.

4. SHOCK IDENTIFICATION AND ANOMALOUS MEASUREMENT BEHAVIOR.

In the course of examination of some 216 mechanical shocks from a single test series (reference paragraph 6.1.c), the variation in time history form is substantial, and requires the judgment of an analyst for development of a specification for which shock synthesis for an electrodynamic exciter might be appropriate. Figures 516.8B-7 through 516.8B-9 display typical anomalous time histories related to signal conditioning or transducer problems. The identification of the problem is assumed, and generally based upon a visual examination of the time history.
Figure 516.8B-7. Measurement input overdriving the signal conditioning with clipping.

Figure 516.8B-8. Noisy or missing measurement signals.

Figure 516.8B-9. Combination amplifier overdriving and noise.

Based on similar displays, all of these time histories must be rejected and the source of the problem identified before continuing to make measurements. Figure 516.8B-8 illustrates noise in the system that could be from a loose connector or even a missing sensor. Once again, measurement time histories of this form need to be rejected. Measurement time histories with a few clearly identified noise "spikes" may often be "corrected" by a trained analyst and used. Finally, Figure 516.8B-9 illustrates a combination of amplifier overdriving and noise corruption. Once again, this measurement must be rejected.

METHOD 516.8, ANNEX C

STATISTICAL AND PROBABILISTIC CONSIDERATIONS FOR DEVELOPING LIMITS ON PREDICTED AND PROCESSED DATA ESTIMATES

1. SCOPE.

1.1 Purpose.

This Annex provides information relative to the statistical and probabilistic characterization of a set of data for the purpose of defining an "upper limit" on the data set.
Such an upper limit may subsequently be used for an enveloping procedure for specification development (this Annex provides no guidance on "enveloping procedures," where an "enveloping procedure" is defined as a procedure providing polynomial interpolation of spectral information for break point definition used directly in exciter control). Although the limit estimates defined below may be applicable over a range of different independent variables, it will be assumed for convenience that the independent variable is labeled "frequency." (For other independent variables, e.g., time, serial correlation in the estimates may need to be accounted for in establishing limits.) It is assumed that the input is empirical and representative of one or more random processes with unknown probabilistic specification (i.e., if the probabilistic structure of the random processes were known, the statistical considerations contained herein would not be pertinent).

1.2 Application.

Information in this Annex is generally applicable to two or more frequency domain estimates that are either predicted based on given information, or based on time domain measurements processed in the frequency domain according to an appropriate technique; e.g., for stationary random vibration, the processing would be an ASD; for a very short transient, the processing could be an SRS, ESD, or FS. Given estimates in the frequency domain, information in this Annex will allow the establishment of upper limits on a data set in a statistically correct way with potential for probabilistic interpretation. Statistically based lower limits may be established on a data set of positive amplitude, e.g., ASD or SRS estimates, by inverting the amplitudes and proceeding as in the case of establishment of upper limits, subsequently inverting the resulting "upper limit" for the desired statistically based lower limit.
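The inversion procedure for a lower limit can be sketched as follows (the helper name and the pluggable `upper_limit_fn` are hypothetical; any of the upper-limit estimators discussed in this Annex could be substituted):

```python
import numpy as np

def lower_limit_via_inversion(estimates, upper_limit_fn):
    """Statistically based lower limit for positive-amplitude estimates
    (e.g., ASD or SRS): invert the amplitudes, apply an upper-limit
    procedure, and invert the result back."""
    inverted = 1.0 / np.asarray(estimates, dtype=float)
    return 1.0 / upper_limit_fn(inverted)

# With the simple envelope as the upper-limit procedure, the lower
# limit reduces to the minimum of the set:
lo = lower_limit_via_inversion([2.0, 4.0, 8.0], np.max)
```

Using the envelope as `upper_limit_fn` makes the behavior easy to verify by hand: the largest inverted amplitude corresponds to the smallest original amplitude.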
When using a dB representation of amplitude, inversion represents a change of sign for the amplitude, and application of the "upper limit" procedure followed by a sign reversal yields the desired statistically based lower limit.

2. DEVELOPMENT.

2.1 Limit Estimate Set Selection.

It is assumed that the analyst has clearly defined the objective of the prediction and/or measurement assessment, i.e., to provide a statistically viable limit estimate. Prediction estimates, measurement estimates, or a combination of prediction and measurement estimates may be considered in the same manner. It is assumed that uncertainty in individual measurements (processing error) does not affect the limit considerations. For measured field data digitally processed such that estimates of the ASD, SRS, ESD, or FS are obtained for single sample records, it is imperative to summarize the overall statistics of "similar" estimates selected in a way so as not to bias the limits. Since excessive estimate variance at any independent variable value may lead to overly conservative or meaningless limits depending upon the procedure selected, this choice of "similar estimates" is a way of controlling the variance in the final limit estimates. To ensure that similar estimates are not physically biased, the measurement locations might be chosen randomly, consistent with the measurement objectives. Likewise, similar estimates may be defined as (1) estimates at a single location on materiel that have been obtained from repeated testing under essentially identical experimental conditions; (2) estimates on materiel that have been obtained from one test, where the estimates are taken (a) at several neighboring locations displaying a degree of response homogeneity, or (b) in "materiel zones," i.e., points of similar response at varying locations; or (3) some combination of (1) and (2).
In any case, similar estimates assume that there is a certain degree of homogeneity among the estimates across the frequency band of interest.

2.2 Estimate Processing Considerations.

Once the set of "similar estimates" has been identified, the following list of assumptions can be used to ensure limit determination is meaningful.

a. All estimates are defined over the same bandwidth and at the same independent variable values (this is referred to as a "fixed design").

NOTE: A "random design" allows the independent variable to vary among estimates and requires principles of distribution-free non-parametric regression techniques to assess the relationship among the estimates.

b. The uncertainty or error in individual estimate processing (random or bias processing error) does not significantly affect limit considerations.

NOTE: For Fourier-based estimates such as ASD, ESD, or FS, the estimate accuracy will be defined in terms of statistical degrees of freedom. For example, a basic periodogram estimate has two statistical degrees of freedom, but through block averaging (in time) using the Welch procedure or averaging of adjacent frequencies (in frequency), the statistical degrees of freedom in the estimate can be increased with a subsequent decrease in estimate random error, but a potential increase in corresponding estimate bias error. It is important in making estimates that the processing error be minimized (or optimized) in some sense through either extending (if possible) the stationary random time history processing length, or by increasing the estimate bandwidth through frequency averaging. In the case of non-Fourier-based estimates such as the SRS, there is little guidance on processing bandwidth selection, except that based upon physical considerations for single-degree-of-freedom systems.
In these cases, it is recommended to select different damping factors along with bandwidths, and to compare the resulting limits.

c. Individual estimates from a given measurement are uncorrelated with one another, i.e., there is no serial correlation with respect to the independent variable.

NOTE: For Fourier-based estimates, this assumption is usually fulfilled because of the "orthogonality" of the Fourier transform. For non-Fourier-based estimates, e.g., SRS, some serial correlation in estimates is unavoidable.

d. Transformed estimates often are more in line with the assumptions behind the limit determination procedures. For example, using a logarithm transform to yield the estimates in dB will generally leave the estimate set at a given frequency closer to being normally distributed.

e. Near-"optimal" limit estimates may potentially be determined by reprocessing available time trace information through a change in the spacing of the independent variable, i.e., the analysis bandwidth. For the case of prediction, this would mean interpolation of the given prediction estimates.

f. Parametric and non-parametric based limit estimates are available. The analyst should select one or more limit estimates that best align with (a) the desired interpretation of the limit assessment, and (b) the character of the set of "similar estimates."

2.3 Parametric Upper Limit Statistical Estimate Assumptions.

In all the formulas for the estimate of the statistical upper limit of a set of N predictions or processed estimates at a single frequency within the overall estimate bandwidth, $\{x_1, x_2, \ldots, x_N\}$, it is assumed that (1) the estimates will be logarithm transformed to bring the overall set of measurements closer to samples from a normal distribution, and (2) the measurement selection bias error is negligible. Since the normal and "t" distributions are symmetric, the formulas below apply for the lower bound by changing the sign between the mean and the standard deviation quantity to minus.
It is assumed here that all estimates are at a single frequency or for a single bandwidth, and that estimates among bandwidths are independent, so that each bandwidth under consideration may be processed individually, and the results summarized on one plot over the entire bandwidth as a function of frequency.

For $y_i = \log_{10}(x_i)$, $i = 1, 2, \ldots, N$, the mean estimate for the true mean $\mu_y$ is given by

$$m_y = \frac{1}{N}\sum_{i=1}^{N} y_i$$

and the unbiased estimate of the standard deviation for the true standard deviation $\sigma_y$ is given by

$$s_y = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(y_i - m_y\right)^2}$$

2.3.1 NTL - Upper Normal One-Sided Tolerance Limit.

The upper normal one-sided tolerance limit on the proportion β of population values that will be exceeded with a confidence coefficient γ is given by NTL(N, β, γ), where

$$NTL(N, \beta, \gamma) = 10^{\,m_y + k_{N,\beta,\gamma}\, s_y}$$

and $k_{N,\beta,\gamma}$ is the one-sided normal tolerance factor given in Table 516.8C-I for selected values of N, β, and γ. NTL is termed the upper one-sided normal tolerance limit (of the original set of estimates) for which 100β percent of the values will lie below the limit with 100γ percent confidence. For β = 0.95 and γ = 0.50, this is referred to as the 95/50 limit.

Table 516.8C-I. Normal tolerance factors for upper tolerance limit.
Values of $k_{N,\beta,\gamma}$ are listed below by column, one column per (γ, β) pair, with entries in order of N = 3, 4, ..., 30, 32, 34, ..., 50, 55, 60, ..., 100, 500, 1000, ∞:

γ = 0.50, β = 0.90: 1.50 1.42 1.38 1.36 1.35 1.34 1.33 1.32 1.32 1.32 1.31 1.31 1.31 1.31 1.31 1.30 1.30 1.30 1.30 1.30 1.30 1.30 1.30 1.30 1.30 1.30 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.29 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28

γ = 0.50, β = 0.95: 1.94 1.83 1.78 1.75 1.73 1.72 1.71 1.70 1.70 1.69 1.69 1.68 1.68 1.68 1.68 1.67 1.67 1.67 1.67 1.67 1.67 1.67 1.67 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.66 1.65 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64 1.64

γ = 0.50, β = 0.99: 2.76 2.60 2.53 2.48 2.46 2.44 2.42 2.41 2.40 2.39 2.39 2.38 2.38 2.38 2.37 2.37 2.37 2.37 2.36 2.36 2.36 2.36 2.36 2.36 2.35 2.35 2.35 2.35 2.35 2.35 2.35 2.35 2.35 2.34 2.34 2.34 2.34 2.34 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33 2.33

γ = 0.90, β = 0.90: 4.26 3.19 2.74 2.49 2.33 2.22 2.13 2.07 2.01 1.97 1.93 1.90 1.87 1.84 1.82 1.80 1.78 1.77 1.75 1.74 1.72 1.71 1.70 1.69 1.68 1.67 1.66 1.66 1.64 1.63 1.62 1.61 1.60 1.59 1.58 1.57 1.57 1.56 1.54 1.53 1.52 1.51 1.50 1.49 1.48 1.48 1.47 1.47 1.36 1.34 1.34

γ = 0.90, β = 0.95: 5.31 3.96 3.40 3.09 2.89 2.75 2.65 2.57 2.50 2.45 2.40 2.36 2.33 2.30 2.27 2.25 2.23 2.21 2.19 2.17 2.16 2.15 2.13 2.12 2.11 2.10 2.09 2.08 2.06 2.05 2.03 2.02 2.01 2.00 1.99 1.98 1.97 1.97 1.94 1.93 1.91 1.90 1.89 1.88 1.88 1.87 1.86 1.86 1.74 1.71 1.71

γ = 0.90, β = 0.99: 7.34 5.44 4.67 4.24 3.97 3.78 3.64 3.53 3.44 3.37 3.31 3.26 3.21 3.17 3.14 3.11 3.08 3.05 3.03 3.01 2.99 2.97 2.95 2.94 2.92 2.91 2.90 2.88 2.86 2.84 2.82 2.81 2.79 2.78 2.77 2.76 2.74 2.73 2.70 2.68 2.67 2.65 2.64 2.63 2.62 2.61 2.60 2.60 2.44 2.41 2.41

γ = 0.95, β = 0.90: 6.16 4.16 3.41 3.01 2.76 2.58 2.45 2.35 2.28 2.21 2.16 2.11 2.07 2.03 2.00 1.97 1.95 1.93 1.91 1.89 1.87 1.85 1.84 1.82 1.81 1.80 1.79 1.78 1.76 1.74 1.72 1.71 1.70 1.69 1.67 1.66 1.65 1.65 1.62 1.60 1.59 1.58 1.57 1.56 1.55 1.54 1.53 1.52 1.38 1.35 1.35

γ = 0.95, β = 0.95: 7.66 5.14 4.20 3.71 3.40 3.19 3.03 2.91 2.82 2.74 2.67 2.61 2.57 2.52 2.49 2.45 2.42 2.40 2.37 2.35 2.33 2.31 2.29 2.28 2.26 2.25 2.23 2.22 2.20 2.18 2.16 2.14 2.13 2.11 2.10 2.09 2.08 2.07 2.04 2.02 2.00 1.99 1.97 1.96 1.95 1.94 1.93 1.92 1.76 1.73 1.73

γ = 0.95, β = 0.99: 10.55 7.04 5.74 5.06 4.64 4.35 4.14 3.98 3.85 3.75 3.66 3.58 3.52 3.46 3.41 3.37 3.33 3.30 3.26 3.23 3.21 3.18 3.16 3.14 3.12 3.10 3.08 3.06 3.03 3.01 2.98 2.96 2.94 2.92 2.91 2.89 2.88 2.86 2.83 2.80 2.78 2.76 2.74 2.73 2.71 2.70 2.69 2.68 2.47 2.43 2.43

The table (Table 516.8C-I), from paragraph 6.1, reference b, contains the k value for selected N, β, γ. In general, this method of estimation should not be used for small N with values of β and γ close to 1, since it is likely the assumption of normality of the logarithm transform of the estimates will be violated.

2.3.2 NPL - Upper Normal Prediction Limit.

The upper normal prediction limit (NPL) is the value of x (for the original data set) that will exceed the next predicted or measured value with confidence coefficient γ, and is given by

$$NPL(N, \gamma) = 10^{\,m_y + t_{N-1;\alpha}\, s_y \sqrt{1 + \frac{1}{N}}}$$

where α = 1 - γ, and $t_{N-1;\alpha}$ is the Student t distribution variable with N-1 degrees of freedom at the 100α = 100(1-γ) percentage point of the distribution. This estimate, because of the assumptions behind its derivation, requires careful interpretation relative to measurements made in a given location or over a given estimate zone (paragraph 6.1, reference b).

2.4 Non-parametric Upper Limit Statistical Estimate Procedures.
If there is some reason to believe that the estimates at a given frequency, after they have been logarithm-transformed, will not be sufficiently normally distributed to apply the parametric limits defined above, consideration must be given to nonparametric limits, i.e., limits that are not dependent upon assumptions concerning the distribution of estimate values. In this case there is no need to transform the data estimates. All of the assumptions concerning the selection of estimates are applicable for nonparametric estimates. With additional manipulation, lower bound limits may be computed.

2.4.1 Envelope (ENV) - Upper Limit.

The maximum upper limit is determined by selecting the maximum estimate value in the data set:

$$ENV(N) = \max\{x_1, x_2, \ldots, x_N\}$$

The main disadvantage of this estimate is that the distributional properties of the estimate set are neglected, so that no probability of exceedance of this value is specified. In the case of outliers in the estimate set, ENV(N) may be far too conservative. ENV(N) is also sensitive to the bandwidth of the estimates.

2.4.2 Distribution-Free Limit (DFL) - Upper Distribution-Free Tolerance Limit.

The distribution-free tolerance limit that uses the original untransformed sample values is defined to be the upper limit for which at least the fraction β of all sample values will be less than the maximum predicted or measured value with a confidence coefficient of γ. This limit is based on order statistic considerations.

$$DFL(N, \beta, \gamma) = x_{max}; \qquad \gamma = 1 - \beta^N$$

where $x_{max}$ is the maximum value of the set of estimates, β is the fractional proportion below $x_{max}$, and γ is the confidence coefficient. N, β, and γ are not independently selectable. That is:

a. Given N and assuming a value of β, 0 ≤ β ≤ 1, the confidence coefficient can be determined.

b. Given N and γ, the proportion β can be determined.

c. Given β and γ, the number of samples can be determined such that the proportion and confidence can be satisfied (for statistical experiment design).

DFL(N, β, γ) may not be meaningful for small samples of data, N < 13, and comparatively large β, β > 0.95. DFL(N, β, γ) is sensitive to the estimate bandwidth.

2.4.3 Empirical Tolerance Limit (ETL) - Upper Empirical Tolerance Limit.

The empirical tolerance limit uses the original sample values and assumes the predicted or measured estimate set is composed of N measurement points over M frequency analysis bandwidths, for a total of NM estimate values:

$$\{x_{11}, x_{12}, \ldots, x_{1M};\; x_{21}, x_{22}, \ldots, x_{2M};\; \ldots;\; x_{N1}, x_{N2}, \ldots, x_{NM}\}$$

where $m_j$ is the average estimate in the jth frequency bandwidth over all N measurement points:

$$m_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij} \qquad j = 1, 2, \ldots, M$$

$m_j$ is used to construct an estimate set normalized over individual frequency resolution bandwidths:

$$\{u\} = \{u_{11}, u_{12}, \ldots, u_{1M},\; u_{21}, u_{22}, \ldots, u_{2M},\; \ldots,\; u_{N1}, u_{N2}, \ldots, u_{NM}\}$$

where

$$u_{ij} = \frac{x_{ij}}{m_j} \qquad i = 1, 2, \ldots, N;\; j = 1, 2, \ldots, M$$

The normalized estimate set $\{u\}$ is ordered from smallest to largest, and $u_\beta = u_{(k)}$ is defined, where $u_{(k)}$ is the kth ordered element of the set $\{u\}$ for $0 < \beta = k/(MN) \le 1$. For each resolution frequency bandwidth, then

$$ETL(\beta) = u_\beta\, m_j = x_{\beta j} \qquad j = 1, 2, \ldots, M$$

Using $m_j$ implies that the value of ETL(β) at j exceeds β percent of the values with 50 percent confidence. If a value other than $m_j$ is selected, the confidence level may increase. It is important that the set of estimates be homogeneous to use this limit, i.e., that they have about the same spread in all frequency bands. In general, apply this limit only if the number of measurement points, N, is greater than 10.

3. EXAMPLE.

3.1 Input Test Data Set.
Table 516.8C-II represents a homogeneous table of normally distributed numbers of unity variance around a mean value of 3.5, with N = 14 rows and M = 5 columns (rows could represent fourteen individual test measurements, and columns could represent test values over five data sets). Table 516.8C-II is used in the upper limit determinations in paragraphs 3.2 and 3.3 below.

Table 516.8C-II. Input test data set.

Data Set 1   Data Set 2   Data Set 3   Data Set 4   Data Set 5
3.0674       3.3636       2.0590       2.4435       3.8803
1.8344       3.6139       4.0711       4.9151       2.4909
3.6253       4.5668       3.1001       2.6949       3.4805
3.7877       3.5593       4.1900       4.0287       3.4518
2.3535       3.4044       4.3156       3.7195       3.5000
4.6909       2.6677       4.2119       2.5781       3.1821
4.6892       3.7902       4.7902       1.3293       4.5950
3.4624       2.1638       4.1686       3.4408       1.6260
3.8273       4.2143       4.6908       2.4894       3.9282
3.6746       5.1236       2.2975       4.1145       4.3956
3.3133       2.8082       3.4802       4.0077       4.2310
4.2258       4.3580       3.3433       5.1924       4.0779
2.9117       4.7540       1.8959       4.0913       3.5403
5.6832       1.9063       3.7573       2.8564       4.1771

3.2 Parametric Upper Limits.

The upper normal one-sided tolerance limit (NTL) is computed as a 95/50 limit, with 50 percent confidence that at least 95 percent of the values will lie below this limit, for $k_{N,\beta,\gamma}$ = 1.68 from Table 516.8C-I. The upper normal prediction limit (NPL) is computed with a 0.95 confidence coefficient at the 95 percent point of the distribution, where $t_{N-1;\alpha} = t_{13;0.05} = 1.771$. Figure 516.8C-1 displays the data, and Figure 516.8C-2 displays the two parametric upper limits.

NOTE: Note the degree of conservativeness of the normal prediction upper limit relative to the normal tolerance limit.

Figure 516.8C-1. Input test data set.

Figure 516.8C-2. Parametric and non-parametric upper limits.
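The parametric limits of this example can be sketched as follows (a minimal sketch; the function name is hypothetical, and the tabulated factors k = 1.68 and t = 1.771 quoted in the text are passed in rather than computed):

```python
import math

def parametric_upper_limits(x, k=1.68, t=1.771):
    """NTL and NPL upper limits for one frequency line (one data column).
    x: positive estimates; k: one-sided normal tolerance factor
    k_{N,beta,gamma} (1.68 for N=14, beta=0.95, gamma=0.50);
    t: t_{N-1;alpha} (1.771 for N=14, alpha=0.05).
    Estimates are log10-transformed before averaging."""
    n = len(x)
    y = [math.log10(v) for v in x]
    m = sum(y) / n                                          # mean of log estimates
    s = math.sqrt(sum((v - m) ** 2 for v in y) / (n - 1))   # unbiased std dev
    ntl = 10.0 ** (m + k * s)
    npl = 10.0 ** (m + t * s * math.sqrt(1.0 + 1.0 / n))
    return ntl, npl
```

Because $t_{13;0.05}\sqrt{1 + 1/14} \approx 1.83$ exceeds $k = 1.68$, the NPL is always at least as large as the NTL for these parameters, which is the conservativeness noted in the text.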
3.3 Non-parametric Upper Limits.

The envelope limit (ENV), along with the upper distribution-free tolerance limit (DFL) for β proportion of the population set at 0.95 and γ confidence coefficient of 0.51 for N = 14 samples, is displayed in Figure 516.8C-2. This represents one curve with two interpretations. The 95 percent upper empirical tolerance limit (ETL) is also displayed on Figure 516.8C-2, where at least 95 percent of the values will be exceeded by this limit with 50 percent confidence. The data are displayed on Figure 516.8C-2 for comparison purposes.

3.4 Observations.

The "flatness" of the upper limits on Figure 516.8C-2 attests to the homogeneity of the data in Table 516.8C-II. It is apparent from Figure 516.8C-2 that the upper limits for the parameters selected are not "statistically equivalent." Of the two upper limit estimates, the NTL is favored if it can be established that the logarithm transform of the data set is approximately normally distributed. The closeness of the nonparametric envelopes also attests to the homogeneity of the data in Table 516.8C-II, in addition to demonstrating that, for this case at least, the non-statistical ENV, the statistically based DFL, and the ETL basically agree with regard to the upper limit magnitude. For non-homogeneous data sets, ETL would not be expected to agree with ENV or DFL. For small data sets, ETL may vary depending upon whether parameter k rounds upward or downward.

4. RECOMMENDED PROCEDURES.

4.1 Recommended Statistical Procedures for Upper Limit Estimates.

Paragraph 6.1, reference b, provides a detailed discussion of the advantages and disadvantages of estimate upper limits. The guidelines in this reference are recommended.
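As a concrete check on the ENV/DFL relation of paragraph 2.4.2 (used in the example above), the following sketch computes both limits and the implied confidence coefficient (function name hypothetical; ETL omitted for brevity):

```python
def nonparametric_upper_limits(data, beta=0.95):
    """ENV and DFL for a set of N estimates at one frequency, and the
    confidence coefficient gamma = 1 - beta**N implied by the DFL.
    data: positive estimates."""
    n = len(data)
    env = max(data)           # envelope: no probabilistic interpretation
    gamma = 1.0 - beta ** n   # confidence that fraction beta lies below the max
    dfl = env                 # same value, order-statistic interpretation
    return env, dfl, gamma
```

For N = 14 and β = 0.95 this gives γ = 1 - 0.95¹⁴ ≈ 0.51, matching the confidence coefficient quoted in the example: the same curve serves as both the envelope and the distribution-free tolerance limit.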
In all cases, plot the data carefully with a clear indication of the method of establishing the upper limit and the assumptions behind the method used.

a. When N is sufficiently large, i.e., N > 7, establish the upper limit by using the expression for the DFL for a selected β > 0.90 such that γ > 0.50.

b. When N is not sufficiently large to meet the criterion in (a), establish the upper limit by using the expression for the NTL. Select β and γ ≥ 0.50. Variation in β will determine the degree of conservativeness of the upper limit.

c. For N > 10 and a confidence coefficient of 0.50, the upper limit established on the basis of ETL is acceptable and may be substituted for the upper limit established by DFL or NTL. It is important when using ETL to examine and confirm the homogeneity of the estimates over the frequency bands.

4.2 Uncertainty Factors.

Uncertainty factors may be added to the resulting upper limits if confidence in the data is low or the data set is small. Factors on the order of 3 dB to 6 dB may be added. Paragraph 6.1, reference b, recommends a 5.8 dB uncertainty factor (based on "flight-to-flight" uncertainties of 3 dB, and "point-to-point" uncertainties of 5 dB) be used with captive carry flight measured data to determine a maximum expected environment using a normal tolerance limit. It is important that all uncertainties be clearly defined, and that uncertainties are not superimposed upon estimates that already account for uncertainty.
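The quoted 5.8 dB figure is numerically consistent with a root-sum-square combination of the 3 dB and 5 dB uncertainties; the following sketch assumes that RSS combination rule, which is inferred here rather than stated in the text:

```python
import math

def combined_uncertainty_db(*factors_db):
    """Root-sum-square combination of dB uncertainty factors (assumed
    combination rule; the standard quotes only the resulting 5.8 dB)."""
    return math.sqrt(sum(f * f for f in factors_db))

total = combined_uncertainty_db(3.0, 5.0)
```

Here `combined_uncertainty_db(3.0, 5.0)` evaluates to √34 ≈ 5.83 dB, in agreement with the 5.8 dB factor cited from paragraph 6.1, reference b.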
https://en.wikipedia.org/wiki/Biased_random_walk_on_a_graph
Jump to content Biased random walk on a graph Add links From Wikipedia, the free encyclopedia | | | --- | | | This article relies excessively on references to primary sources. Please improve this article by adding secondary or tertiary sources. Find sources: "Biased random walk on a graph" – news · newspapers · books · scholar · JSTOR (December 2021) (Learn how and when to remove this message) | Structural analysis of a network | | | Part of a series on | | Network science | | | | Theory | | Graph Complex network Contagion Small-world Scale-free Community structure Percolation Evolution Controllability Graph drawing Social capital Link analysis Optimization Reciprocity Closure Homophily Transitivity Preferential attachment Balance theory Network effect Social influence | | Network types | | Informational (computing) Telecommunication Transport Social Scientific collaboration Biological Artificial neural Interdependent Semantic Spatial Dependency Flow on-Chip | | Graphs | | | Features | | Clique Component Cut Cycle Data structure Edge Loop Neighborhood Path Vertex Adjacency list / matrix Incidence list / matrix | | Types | | Bipartite Complete Directed Hyper Labeled Multi Random Weighted | | | Metrics Algorithms | | Centrality Degree Motif Clustering Degree distribution Assortativity Distance Modularity Efficiency | | Models | | | Topology | | Random graph Erdős–Rényi Barabási–Albert Bianconi–Barabási Fitness model Watts–Strogatz Exponential random (ERGM) Random geometric (RGG) Hyperbolic (HGN) Hierarchical Stochastic block Blockmodeling Maximum entropy Soft configuration LFR Benchmark | | Dynamics | | Boolean network agent based Epidemic/SIR | | | Lists Categories | | Topics Software Network scientists Category:Network theory Category:Graph theory | | v t e | In network science, a biased random walk on a graph is a time path process in which an evolving variable jumps from its current state to one of various potential new states; unlike in a pure random walk, the 
probabilities of the potential new states are unequal. Biased random walks on a graph provide an approach for the structural analysis of undirected graphs in order to extract their symmetries when the network is too complex or when it is not large enough to be analyzed by statistical methods. The concept of biased random walks on a graph has attracted the attention of many researchers and data companies over the past decade especially in the transportation and social networks. Model [edit] There have been written many different representations of the biased random walks on graphs based on the particular purpose of the analysis. A common representation of the mechanism for undirected graphs is as follows: On an undirected graph, a walker takes a step from the current node, to node Assuming that each node has an attribute the probability of jumping from node to is given by: where represents the topological weight of the edge going from to In fact, the steps of the walker are biased by the factor of which may differ from one node to another. Depending on the network, the attribute can be interpreted differently. It might be implied as the attraction of a person in a social network, it might be betweenness centrality or even it might be explained as an intrinsic characteristic of a node. In case of a fair random walk on graph is one for all the nodes. In case of shortest paths random walks is the total number of the shortest paths between all pairs of nodes that pass through the node . In fact the walker prefers the nodes with higher betweenness centrality which is defined as below: Based on the above equation, the recurrence time to a node in the biased walk is given by: Applications [edit] There are a variety of applications using biased random walks on graphs. 
Such applications include control of diffusion, advertisement of products on social networks, explaining dispersal and population redistribution of animals and micro-organisms, community detections, wireless networks, and search engines. See also [edit] Betweenness centrality Community structure Kullback–Leibler divergence Markov chain Maximal entropy random walk Random walk closeness centrality Social network analysis Travelling salesman problem References [edit] ^ Roberta Sinatra; Jesús Gómez-Gardeñes; Renaud Lambiotte; Vincenzo Nicosia; Vito Latora (March 2011). "Maximal-entropy random walks in complex networks with limited information". Physical Review E. 83 (3): 030103. arXiv:1007.4936. Bibcode:2011PhRvE..83c0103S. doi:10.1103/PhysRevE.83.030103. PMID 21517435. S2CID 6984660. ^ J. Gómez-Gardeñes; V. Latora (Dec 2008). "Entropy rate of diffusion processes on complex networks". Physical Review E. 78 (6): 065102. arXiv:0712.0278. Bibcode:2008PhRvE..78f5102G. doi:10.1103/PhysRevE.78.065102. PMID 19256892. S2CID 14100937. ^ R. Lambiotte; R. Sinatra; J.-C. Delvenne; T.S. Evans; M. Barahona; V. Latora (Dec 2010). "Flow graphs: interweaving dynamics and structure". Physical Review E. 84 (1): 017102. arXiv:1012.1211. Bibcode:2011PhRvE..84a7102L. doi:10.1103/PhysRevE.84.017102. PMID 21867345. S2CID 2286264. ^ Blanchard, P; Volchenkov, D (2008). Mathematical Analysis of Urban Spatial Networks. Springer. doi:10.1007/978-3-540-87829-2. ISBN 978-3-540-87828-5 – via ResearchGate. ^ Volchenkov D; Blanchard P (2011). Fair and biased random walks on undirected graphs and related entropies. Birkhäuser. p. 380. ISBN 978-0-8176-4903-6. ^ Chung, Zhao, Fan, Wenbo (2010). "PageRank and Random Walks on Graphs". Fete of Combinatorics and Computer Science. Bolyai Society Mathematical Studies. Vol. 20. pp. 43–62. CiteSeerX 10.1.1.157.7116. doi:10.1007/978-3-642-13580-4_3. ISBN 978-3-642-13579-8. S2CID 3207094.{{cite book}}: CS1 maint: multiple names: authors list (link) ^ Adal, K.M. 
External links
Gábor Simonyi, "Graph Entropy: A Survey". In Combinatorial Optimization (ed. W. Cook, L. Lovász, and P. Seymour). Providence, RI: Amer. Math. Soc., pp. 399–441, 1995.
Anne-Marie Kermarrec, Erwan Le Merrer, Bruno Sericola, Gilles Trédan, "Evaluating the Quality of a Network Topology through Random Walks", in Gadi Taubenfeld (ed.), Distributed Computing.
190658
https://www.scribd.com/document/137555002/Kingdom-Protista-Notes
Kingdom Protista Notes
Uploaded by hafizhusain

KINGDOM PROTISTA

Protist Characteristics
 Organisms in the kingdom Protista first appeared about 1.5 billion years ago.
 Protists are eukaryotic organisms, unlike Monerans.

Prokaryotes vs. Eukaryotes
Organelles/cell parts. Prokaryotes: cytoplasm contains a watery suspension of ions, enzymes, macromolecules, and ribosomes floating freely inside the cell membrane. Eukaryotes: cytoplasm has suspended membrane-surrounded organelles (for example, nucleus, chloroplasts, mitochondria, vacuoles) and ribosomes.
Genetic material. Prokaryotes: genetic material (DNA) is concentrated in a region called the nucleoid, but no membrane separates this region from the rest of the cell. Eukaryotes: a "true" nucleus surrounded by a membrane contains DNA; a nucleolus contains RNA.
Metabolism. Prokaryotes: chemical reactions to support the work of the cell are carried out throughout the cytoplasm. Eukaryotes: chemical reactions are carried out in specialized membrane-surrounded organelles.

General Characteristics
 Kingdom Protista contains 115,000 species, and they are extremely diverse in their cell structures, patterns of nutrition, metabolic needs, reproduction, and habitats.
 This kingdom contains a grab-bag of organisms that do not fit into the other kingdoms.
 Protists are extremely difficult to classify, so for the purposes of this class we will group them by their nutritional patterns.

Animal-like Protists
 These are often called protozoans and are animal-like because they eat or ingest material from their surroundings.
 Zooflagellates: these protists possess one or more flagella used for locomotion.
 Some zooflagellates are heterotrophic and feed on other protists. Other species live as internal parasites on animals, including humans, and may be pathogenic.
 Examples: sleeping sickness (a serious African disease) is caused by the parasitic zooflagellate Trypanosoma brucei gambiense. Its carrier is the tsetse fly.
 Giardia, a zooflagellate, can cause digestive problems in humans. This illness, known as "beaver fever," is caused by drinking polluted lake water.
 Trichonympha is a wood-digesting zooflagellate which lives by the thousands in the guts of termites.
 Amoebas are single-celled protozoans with no set body shape. They create temporary projections of cytoplasm called pseudopods to move and feed.
 They feed on small organisms by endocytosis, engulfing organisms with their pseudopods. Some amoebas are parasitic. An example is amoebic dysentery, which is caused by a species of Entamoeba commonly found in tropical regions. They enter the digestive system after a person drinks infected water. They feed on the intestinal walls, causing bleeding. They form cysts to prevent being digested and are passed out in the feces.
 Ciliates: these protozoans are covered with hair-like projections (cilia) which move back and forth like oars to move the organism.
 Unlike amoebas, ciliates have a rigid outer covering called a pellicle that maintains their shape.
 All of these organisms are aquatic and heterotrophic. They inhabit both salt and fresh water. A freshwater ciliate called Paramecium is one of the most common species in the group.
 The beating of the cilia sweeps food into its oral groove. When food reaches the end of the oral groove, the membrane pinches off, surrounds the food, and a food vacuole is formed. Wastes are discharged through an anal pore.
 Paramecium have two types of nuclei: a large macronucleus and one or more small micronuclei.
 Asexual reproduction occurs by binary fission, and sexual reproduction takes the form of conjugation.
 Sporozoans are protists that produce spores during their asexual phase of reproduction. They are non-motile and parasitic, obtaining their nutrients from the bodies of their hosts. Example: Plasmodium causes malaria.

Fungus-like Protists
 All are heterotrophic, and most are decomposers that feed on dead plants and animals by endocytosis. They tend to live in cool, damp places.
 Three major phyla: acellular slime moulds, cellular slime moulds, and water moulds.

Plant-like Protists
 24,000 species of protists contain chlorophyll and carry out photosynthesis.
 Euglenoids are unicellular flagellates.
 Euglena is a freshwater organism that moves using a flagellum. In the day it is fully autotrophic and photosynthesizes; in the dark it becomes heterotrophic and feeds on dead organic material in the water.
 Algae are protists that resemble plants because they have chloroplasts and chlorophyll.
 Some algae are single-celled, others live in colonies, and other species are multicellular, reaching enormous size.
 There are six groups of algae. We will discuss three of the main groups.
 Diatoms have a golden colouration due to yellow-brown pigments contained within their glass-like shells made of silica. Their outer covering is made of two halves. They are tremendously abundant in the oceans and are the key food source in marine and freshwater ecosystems.
 Dinoflagellates are single-celled algae that have two flagella. Most are photosynthetic, though some are heterotrophs.
 They are extremely abundant in both marine and freshwater environments. Each species has a characteristic shape. They are luminescent when the surrounding water is agitated.
 Dinoflagellates grow rapidly when nutrients in the water increase or ocean temperatures rise. Gonyaulax polyedra produces a "red tide" and also produces toxins. As shellfish and fish feed on them, the toxins concentrate in their bodies and can move through the food chain.
 Green algae can be single-celled or colonial. Each cell has two flagella, which move the cell around. They live in fresh water. They are thought to have given rise to the first plants because of their many similarities.
 Other groups of algae are large, multicellular, and commonly known as seaweeds.
 Algae perform 50-75% of all photosynthesis on Earth, and so provide most of the world's oxygen. Some types of algae are eaten as is, but algae are mostly used in the manufacture of food products (e.g., dulse and sea vegetables). Carrageenans are extracted from red algae and are used in stabilizing and gelling foods, cosmetics, pharmaceuticals, and industrial products.

Protist Review
1. Small, hair-like projections used for locomotion by paramecia are (a) pseudopodia (b) cilia (c) flagella (d) mycorrhizae
2. The kingdom Protista contains (a) the unicellular prokaryotes (b) the unicellular eukaryotes (c) only animal-like eukaryotes (d) the Fungi
3. What are some important differences between Monerans and Protists?
4. Why do protists live in aqueous environments?
5. What three groups make up the kingdom Protista?
6. What characteristics distinguish plant-like protists from animal-like protists?
7. Would you expect all students to observe exactly the same shape when observing a live amoeba under the microscope? Explain.
8. What is a pseudopod?
9. Unlike the higher plants, plant-like protists do not have roots, stems, or leaves. Explain why they do not require these structures.
10. Why are multicellular algae not classified as plants?
11. What is the function of the cilia on the surface of Paramecium?
12. If you observe a contractile vacuole in a protist, what is the most likely habitat for the organism?
13. All protists are eukaryotic. Why, then, is this not considered a trait that defines the group?
14. You have a sample of pond water in which you want to look for protists. The jar has some mud at the bottom and some plant bits as well. Where would you look to find sarcodines? Ciliates?
15. Smaller classification groupings for the animal-like protists are based on the following: (a) colour (b) means of locomotion (c) size of the organisms (d) number of cilia
16. All members of the kingdom Protista (a) are eukaryotes (b) reproduce sexually (c) have cilia (d) do not have a membrane around the nucleus
17. The kingdom Protista is often described as the junk drawer when it comes to classification of organisms. Explain the meaning of this statement.
18. Discuss the importance of the protists to life in the pond habitat.
19. Name three methods of locomotion used by some species of protists and describe these methods.
190659
https://www.sciencedirect.com/topics/pharmacology-toxicology-and-pharmaceutical-science/ganglion-blocking-agent
Ganglion Blocking Agent
In subject area: Pharmacology, Toxicology and Pharmaceutical Science

Ganglion blocking agents, also known as ganglioplegic agents, are competitive blockers of acetylcholine receptors in sympathetic and parasympathetic ganglia, leading to ganglionic blockade that effectively lowers blood pressure, especially in hypertensive patients. Examples include pentamethonium, hexamethonium, and trimetaphan, though their use has declined due to low tolerability. (AI-generated definition based on: Encyclopedia of Endocrine Diseases, 2004.)

Chapter: Antiadrenergic Agents (2004, Encyclopedia of Endocrine Diseases; P.A. van Zwieten)

Ganglion Blocking Drugs (Ganglioplegic Agents)
These agents are competitive blockers of the nicotinic ACh receptors in the sympathetic and also the parasympathetic ganglia. Ganglionic blockade in the sympathetic ganglia accounts for their effective lowering of blood pressure, particularly in hypertensive patients. Pentamethonium, hexamethonium, and trimetaphan are examples of ganglioplegic agents. These agents have been abandoned as therapeutics because of their low tolerability, which is partly due to the blockade of the parasympathetic ganglia.

Chapter: Drugs Affecting Nicotinic Receptors (2017, Pharmacology and Therapeutics for Dentistry, Seventh Edition; Xi-Qin Ding)

Ganglionic Blockers
Ganglionic blocking agents can be classified on the basis of their chemical structure or mechanism of action into three groups (Fig. 7-2), as follows:
1. Depolarizing drugs, such as nicotine, which produce initial stimulation and varying degrees of subsequent block through a mechanism analogous to that of succinylcholine (see later). At higher doses, these agents can stimulate and block other cholinergic receptors, such as those at the neuromuscular junction and in the CNS.
2. Competitive drugs, such as trimethaphan and tetraethylammonium, which interfere with the binding of ACh to the nicotinic receptor.
3. Noncompetitive agents, such as hexamethonium (C6) and mecamylamine, a secondary amine. Hexamethonium interferes with ganglionic transmission by blocking ion channels that have been opened by ACh, whereas mecamylamine seems to share properties associated with both hexamethonium and the competitive blocking agents.

Pharmacologic effects
All the ganglionic blocking drugs, regardless of their structure or their mechanism of action, have the same basic pharmacology, although many of them have additional actions at sites other than ganglionic receptors. An ideal ganglionic blocking agent would be a compound that interferes only with ganglionic transmission, blocks without previous excitation, and does not influence the release of transmitter. Hexamethonium is a prototype agent that meets these criteria. The pharmacology of the ganglionic blocking drugs is predictable because all parasympathetic and sympathetic ganglia are blocked by most of the available agents. Ganglia are not equally sensitive to the blocking drugs, however, and some effects are easier to block than others. The effects of ganglionic agents are profoundly influenced by the background tone; that is, the effect of blocking a ganglion is proportional to the rate of nerve transmission through that ganglion at any given time.
If vascular tone is high, as it would be in a standing individual, the ganglionic blocking agents would produce a profound decrease in blood pressure, much greater than they would in a recumbent individual, in whom vascular tone would be lower. Finally, because these drugs block sympathetic and parasympathetic actions, the direction and magnitude of their effects are related to which autonomic division provides the dominant baseline control for a given organ. Table 7-2 summarizes the usual predominance of sympathetic or parasympathetic tone at various effector sites and the pharmacologic effects of ganglionic blockade.

Absorption, fate, and excretion
For ganglionic blocking agents, the question of absorption, fate, and excretion is an academic one because only one drug, mecamylamine, is available in an oral formulation, and it is seldom used because of its numerous side effects. Trimethaphan has been administered by intravenous drip; it has a rapid onset and short duration of action.

General therapeutic uses
Because of their multiple side effects, ganglionic blockers are rarely used. For most patients, these effects are intolerable except for acute use in recumbent patients. Trimethaphan was used in the past as an adjunct during anesthesia to produce controlled hypotension and in hypertensive emergencies.

Adverse effects
As is true of other autonomic drugs, toxicity from the ganglionic blocking agents is an extension of their known pharmacologic effects. Some of these effects, such as xerostomia, blurring of vision, and constipation, are annoying but bearable. Other side effects, such as orthostatic hypotension, urinary retention, and sexual impotence, present more significant problems. More severely, the ganglionic blocking agents can produce peripheral circulatory collapse with cerebral and coronary insufficiency, paralytic ileus, and complete urinary retention.
The toxic liabilities of the drugs are the major reason for their abandonment in the treatment of hypertension.

Chapter: Vasodilators (2007, Comprehensive Hypertension; Gordon T. McInnes)

Ganglion-blocking drugs
Ganglion blockers act by occupying receptor sites on the post-ganglionic axon to stabilize the membrane against acetylcholine stimulation. These drugs have no effect on pre-ganglionic acetylcholine release, cholinesterase activity, post-ganglionic neuronal catecholamine release, or vascular smooth muscle contractility. Adrenergic transmission to the heart and vessels is impaired, with the result that heart rate, myocardial contractility, and total peripheral resistance are reduced. The fall in arterial pressure and vascular resistance is not as great in the supine as in the upright position because the adrenergic venomotor effect is enhanced by the gravitational effect of pooling blood when the patient is upright. Examples include hexamethonium, pentolinium, mecamylamine, pempidine, chlorisondamine, and trimetaphan. The only widely used agent in this class, trimetaphan, is excreted by glomerular filtration and active secretion (30% is unchanged in urine).

Chapter: Cholinergic antagonists (2023, Neuropsychopharmacology; Jahangir Moini MD, MPH, ..., Jennifer G Schnellmann PhD)

Nicotinic antagonists: ganglionic blockers
The nicotinic antagonists known as ganglionic blockers act at the autonomic ganglia. They lower blood pressure and are used in emergencies. These blockers interrupt transmission of nerve impulses at nicotinic receptors of the autonomic ganglia. Since the sympathetic and parasympathetic nervous systems both have ganglia, these blockers are nonselective: they inhibit all of the autonomic nervous system. The parasympathetic system is more highly affected, since the dominant baseline autonomic tone of most organs is parasympathetic. After slowing of parasympathetic nerve impulses, symptoms include constipation, urinary retention, blurred vision, dry mouth, and increased heart rate. The only therapeutic action is vasodilation, and these blockers can cause extreme hypotension, requiring emergency kits to be available that contain agents such as epinephrine to reverse extremely low blood pressure. Because of these agents' potential toxicities, only one ganglionic blocker is currently available for medical use: mecamylamine (Vecamyl). It is a long-acting nicotinic receptor antagonist originally used for blood pressure reduction in cases of severe hypertension. Safer antihypertensive drugs have largely replaced this drug. Mecamylamine has more recently been approved for nicotine dependence, since it reduces the brain's psychologic rewarding effects of nicotine.
It is also used for Tourette's syndrome when that condition is unresponsive to other medications. Mecamylamine is contraindicated in mild, moderate, or labile hypertension, coronary insufficiency, uremia, glaucoma, organic pyloric stenosis, known hypersensitivity, and recent myocardial infarction. The drug is used only with extreme caution if kidney insufficiency is signified by an elevated or rising blood urea nitrogen (BUN). Common adverse effects include fatigue, weakness, headaches, sedation, blurred vision, mydriasis, decreased libido, impotence, and urinary retention. The serious adverse effects of mecamylamine include orthostatic hypotension, precipitation of angina, adynamic ileus, and choreiform movements. Mecamylamine has a major drug interaction with tizanidine, and many moderate drug interactions, including with epinephrine, haloperidol, norepinephrine, risperidone, and vasopressin. Mecamylamine toxicity may cause hypotension leading to peripheral vascular collapse, nausea, vomiting, postural hypotension, constipation, diarrhea, paralytic ileus, urinary retention, anxiety, dizziness, dry mouth, mydriasis, blurred vision, palpitations, and increased intraocular pressure.

You should remember: mecamylamine is a nicotinic antagonist/ganglionic blocker that is extremely potent and is used for severe or life-threatening hypertension. It starts working in 1–2 hours and is approved for use only in adults. The drug is dangerous if suddenly stopped, should not be used along with antibiotics and some sulfa medications, and is unsafe if the patient has severe kidney problems or a recent myocardial infarction.

[Neuropsychopharmacology, 2023; Jahangir Moini, …, Jennifer G. Schnellmann]
[Small Animal Clinical Pharmacology (Second Edition), 2008; Matthias J Kleinz, Ian Spence — The pharmacology of the autonomic nervous system]

Nicotinic antagonists

Nicotinic antagonists fall into two distinct groups: ganglion blockers and peripheral muscle relaxants. The differences between the nicotinic receptors at autonomic ganglia and those at the neuromuscular junction mentioned above form the basis of the distinct pharmacodynamic actions of these two classes of drugs. Generally, ganglion blockers have little or no effect on transmission at the neuromuscular junction. The prototypic ganglion blocker is hexamethonium, which causes a fall in blood pressure as the result of blockade of the sympathetic ganglia that mediate some control of arterial and venous blood pressure. Hexamethonium was used in the past as an antihypertensive agent but has been superseded by β-blockers and other antihypertensive treatments. Trimetaphan, another ganglion blocker, is now only occasionally used in human medicine for controlled hypotension during surgery.

The prototypic peripheral muscle relaxant is curare. This is not a pure substance but a mixture of alkaloids from the South American vine Chondrodendron tomentosum. The main active constituent, d-tubocurarine, was isolated at the beginning of the last century, and many synthetic agents are now available. Some of the older peripheral muscle relaxants, including tubocurarine and gallamine, have mild ganglion-blocking activity. The antinicotinic peripheral muscle relaxants used in veterinary medicine can be further subdivided into nondepolarizing (pancuronium, atracurium besylate, vecuronium) and depolarizing (suxamethonium/succinylcholine, see above) agents. Nondepolarizing peripheral muscle relaxants bind to nicotinic receptors at the motor endplates, acting as classic competitive antagonists with no intrinsic activity, thereby inhibiting neuromuscular transmission.
Depolarizing peripheral muscle relaxants act as agonists at the nicotinic receptors of the neuromuscular junction and cause muscle paralysis by inducing sustained depolarization as a result of extremely slow dissociation of the receptor–ligand complex. The neuromuscular blockade produced by nondepolarizing blockers can be reversed with anticholinesterases; the blockade produced by depolarizing blockers cannot be reversed by this method. This latter point is rarely a problem, as depolarizing blockade is very short-lived because suxamethonium is a substrate for circulating pseudocholinesterases.

[Applied Pharmacology, 2011; Stan K. Bardal, …, Douglas S. Martin — Autonomic Pharmacology]

Ganglionic Pharmacology

An additional mechanism of manipulating the ANS is through drugs that affect the autonomic ganglia. They can be ganglionic stimulants or ganglionic blockers. Most of these drugs are no longer used clinically and are of historical importance only: drugs that target the ganglia usually have a broad range of effects and therefore many side effects, and more directed, specifically acting drugs that do not act on the ganglia are now available and have replaced them. Some examples of these older ganglion-acting drugs include guanethidine, hexamethonium, and mecamylamine. Nicotine is a clinically important agent that influences activity of the autonomic ganglia. As the name suggests, nicotine is an agonist of nicotinic receptors and is best known as a component of tobacco products and for its role in addiction. The major action of nicotine consists initially of transient stimulation, followed by a more persistent depression of all autonomic ganglia. The effects of nicotine resemble increased SNS activity, including increased blood pressure and heart rate.
In addition, nicotine is strongly associated with the pathways in the brain responsible for reward and addiction.

[Comprehensive Hypertension, 2007; Gordon T. McInnes — Vasodilators]

Adrenergic inhibitors

Ganglion-blocking drugs

As a result of reduction in vasomotor tone, treated patients will pool blood in dependent capacitance vessels. This effect explains the phenomenon of orthostatic hypotension that can be associated with syncope.42 To enhance the antihypertensive effect in the supine posture, it is necessary to reduce intravascular (and extracellular) fluid volume and prevent the expansion of blood volume.7,8 Prolonged therapy with trimetaphan for 48 to 72 hours is associated with refractory responses (tachyphylaxis).43 Reduction in cardiac output results in at least proportionate reduction of renal blood flow, sometimes associated with reduced creatinine clearance.44 Because parasympathetic inhibition also results from ganglionic blockade, loss of tonic activity leads to risk of paralytic ileus and acute urinary retention. Thus, abdominal pain with reduced bowel sounds, constipation, or reduced urinary output in a patient with aortic dissection may not reflect extension of the dissection into the mesenteric or renal arteries but instead may be a side effect of treatment. Other adverse drug reactions with trimetaphan include asthma attacks because of histamine release. Large doses may provoke muscle relaxation leading to cardiac arrest.

Rauwolfia alkaloids

Parasympathetic activity remains unopposed, explaining many common side effects (including bradycardia, prolonged atrioventricular conduction, increased gastric acid secretion with possible secondary peptic ulceration, and frequency of bowel movements). These adverse effects may be counteracted by parasympathetic inhibitors.
Although arterial dilatation with increased blood flow has been considered greatest in the skin, other vascular beds are also involved. The frequent complaint of nasal mucosal congestion and stuffiness is ameliorated by nasally administered vasoconstrictors.45 However, prolonged use may result in chemical rhinitis. As a result of depletion of brain catecholamines and serotonin, there may be behavioral alterations and subtle or overt depression (sometimes leading to suicide).46 Less severe central complications include drowsiness and nightmares. Parkinsonism, dyskinesia, and dystonia can result from dopamine depletion in the basal ganglia. Congestive heart failure may be precipitated or worsened.

Adrenergic neuron-blocking drugs

Because of coincidental inhibition of venous tone,45 venous return to the heart is reduced by peripheral pooling of blood in dependent areas of the body with upright posture. As a result, orthostatic hypotension is prominent.47 Associated with the resulting fall in cardiac output, there is a proportionate reduction in organ blood flow. Severe hypotension may aggravate angina and lead to myocardial infarction, cerebrovascular insufficiency with syncope, or even stroke. The renal and splanchnic territories may receive a smaller proportion of total cardiac output, but glomerular filtration rate and renal function appear to return to normal with time.48 With reduced skeletal muscle blood flow and impaired adrenergic innervation of skeletal muscle, weakness may result. This can be exacerbated by diuretic treatment.49 Muscle weakness may be aggravated still further during and immediately after exercise.50 Some side effects (orthostatic hypotension, excessive hypotension, bradycardia, increased gastric secretion) result from unopposed parasympathetic activity and impaired adrenergic function. Similarly, diarrhea, retrograde ejaculation, and fluid retention may be explained by reduced adrenergic transmission.
Many of these side effects may be counteracted by reduction in dosage or the addition of a parasympatholytic agent or diuretic. Because these agents act by entering the nerve terminal, any agent that prevents this entry will block their action. This is the means by which tricyclic antidepressants act,51 and therefore these classes of drugs should not be prescribed concomitantly. Drugs that reduce efferent sympathetic output enhance postural hypotension and bradycardia; examples include alpha blockers, beta blockers, and ganglion blockers. Cardiac glycosides may also enhance bradycardia.

Monoamine oxidase inhibitors

The major side effects are centrally mediated mental and emotional reactions, including euphoria, insomnia, and acute psychosis. More important is the severe hypertensive crisis following the ingestion of foods containing tyramine, such as aged cheeses, beer, sherry, Chianti, and herring.52

Veratrum alkaloids

Because of the narrow therapeutic index, effective control of arterial pressure is not infrequently associated with side effects.

[Comprehensive Hypertension, 2007; Gordon T. McInnes]
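The dominant-tone reasoning used throughout these excerpts (the effect of ganglionic blockade on an organ is loss of whichever autonomic division dominates its baseline tone, as summarized in tables such as Table 7-2) can be written down as a simple lookup. This is an illustrative sketch only: the organ/tone pairs below follow the usual textbook pattern and are examples, not a clinical reference.

```python
# Illustrative sketch: ganglionic blockade removes the dominant baseline tone,
# so the observed effect is the loss of that division's usual action.
# Organ/tone pairs are textbook-style examples, not authoritative data.

DOMINANT_TONE = {
    "arterioles": "sympathetic",        # vasoconstrictor tone dominates
    "veins": "sympathetic",
    "sweat_glands": "sympathetic",
    "heart": "parasympathetic",         # vagal slowing dominates at rest
    "iris": "parasympathetic",
    "gi_tract": "parasympathetic",
    "bladder": "parasympathetic",
    "salivary_glands": "parasympathetic",
}

def blockade_effect(organ: str) -> str:
    """Predict the net effect of total ganglionic blockade on an organ."""
    return f"loss of {DOMINANT_TONE[organ]} tone"

# Vagal dominance at the heart means blockade produces tachycardia;
# sympathetic dominance in arterioles means vasodilation and hypotension.
print(blockade_effect("heart"))
print(blockade_effect("arterioles"))
```

The lookup makes the text's point concrete: the same drug produces opposite-feeling effects (tachycardia at the heart, hypotension in the vasculature) purely because different organs start from different dominant tone.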
190660
https://www.scribd.com/document/549416300/Ten-Teachers-Obstetrics-19th-Ed11-pdf-1
Ten Teachers Obstetrics, 19th Edition — excerpt (5 pages), uploaded by Abdullah Essa. The document discusses key obstetric emergencies that can be remembered as the "four Hs and four Ts", then focuses on management of haemorrhage, beginning with antepartum haemorrhage.

Management of specific obstetric emergencies

Cardiorespiratory arrest in obstetrics is rare. However, many obstetric emergencies occur that have not yet progressed to this dire stage. The principles of management (summarized in Figure 16.1) remain the same, and if successful will prevent the most serious sequelae. The causes can be remembered as the "four Hs and the four Ts" (those in italics signify those most likely in pregnancy):

Four Hs: hypovolaemia due to haemorrhage or sepsis; hypoxia; hyperkalaemia and other metabolic disorders; hypothermia.
Four Ts: thromboembolism; toxicity due to drugs, e.g. anaesthetic; tension pneumothorax; cardiac tamponade.

A detailed description of advanced resuscitation is beyond the scope of this book, but can be found in the suggested further reading.

Difficulties in resuscitation due to pregnancy

In addition to compression of the large abdominal blood vessels described above, some other physiological changes of pregnancy can also make resuscitation of the collapsed pregnant woman more difficult. The pregnant uterus presses on the diaphragm, therefore reducing the lung functional residual capacity and making the lungs more difficult to ventilate. Larger breasts compound this problem. Furthermore, pregnancy causes the oesophageal sphincter to become more relaxed, therefore increasing the likelihood of aspiration of the stomach contents into the lungs. It is important that the airway is secured early to prevent this.

The fetus

The presence of a fetus within the uterus makes resuscitation of the mother more difficult due to aortocaval compression, obstruction to ventilation and increased oxygen requirements. In the emergency situation, it is always the welfare of the mother that takes precedence. If resuscitation has not been successful by 4 minutes, an immediate Caesarean section should be conducted, with the aim of having the baby delivered by 5 minutes. The aim of this is primarily to increase the likelihood of successfully resuscitating the mother, and, for the sake of speed, does not require the patient to be in an operating theatre.

Haemorrhage

Obstetric haemorrhage can occur antenatally or post-natally, and both can present as obstetric emergencies.

Antepartum haemorrhage

Antepartum haemorrhage (APH) is any bleeding occurring in the antenatal period after 24 weeks gestation. It complicates 2–5 per cent of pregnancies. Most cases involve relatively small quantities of blood loss, but they often signify that the pregnancy is at increased risk of subsequent complications, including postpartum haemorrhage. At term, APH can be difficult to distinguish from a "show", which is the release of the cervical mucus in the early stages of labour. The causes of APH are placental abruption (one third), placenta praevia (one third) and other causes (one third); thus, placental bleeding is responsible for approximately two thirds of APHs. When assessing patients presenting with an APH, a digital examination should not be conducted until an ultrasound scan has identified the location of the placenta (see below under Diagnosis).

Placenta praevia

Aetiology and epidemiology

Placenta praevia is defined as a placenta that has implanted into the lower segment of the uterus. It is now classified as either major, in which the placenta is covering the internal cervical os, or minor, when the placenta is sited within the lower segment of the uterus but does not cover the cervical os (Figure 16.3). This has replaced the older I–IV classification system. The incidence in the UK is approximately 5 per 1000 and is increasing due to the rising Caesarean section rate and increasing maternal age. It is more common in older (often multiparous) women and in women with previous uterine surgery. In women who have had a previous Caesarean section, there is a risk that the placenta implants into, and thus invades, the previous scar. This is called a "morbidly adherent placenta" and there are three types:

1. Placenta accreta: placenta is abnormally adherent to the uterine wall.
2. Placenta increta: placenta is abnormally invading into the uterine wall.
3. Placenta percreta: placenta is invading through the uterine wall.

The risk of a morbidly adherent placenta increases with increasing numbers of previous Caesarean sections.

Diagnosis

The mother will present with painless bleeding, often recurrent, in the third trimester, and ultrasound scans will demonstrate the abnormal location of the placenta. The bleeding occurs due to separation of the placenta as the lower segment develops in the third trimester. Contractions can also precipitate bleeding by a similar mechanism. On abdominal palpation, the uterus will be soft and non-tender and the presenting part will be free, as it cannot enter the pelvis due to obstruction by the placenta. A digital examination is contraindicated as this can precipitate bleeding. Approximately 10 per cent of cases of placenta praevia can also be complicated by placental abruption (see below under Placental abruption).

Management

The patient should be initially resuscitated using the structured approach of ABC. If the bleeding is relatively minor and the fetus uncompromised, the patient should be admitted for observation and not allowed home until at least 24 hours has passed without further bleeding. Women with major placenta praevia who have had recurrent bleeding should be admitted as inpatients from 34 weeks, and those who have not bled need a careful risk assessment before being managed at home. Major bleeding will require fluid resuscitation and delivery of the fetus by Caesarean section by a senior obstetrician. The risk to the fetus is mainly prematurity due to early Caesarean section. There is considerable risk of serious maternal haemorrhage, either as APH or during Caesarean section when the placental bed may not contract, or due to morbid adherence; this may lead to massive postpartum haemorrhage (PPH). The indications for delivery are reaching 37–38 weeks gestation, a massive (>1500 mL) bleed, or continuing significant bleeding of lesser severity. Cases of minor placenta praevia can be considered for a vaginal delivery if the placenta is a minimum of 2 cm away from the cervical os. (Figure 16.3: Placenta praevia — minor and major.)

Placental abruption

Aetiology and epidemiology

A placental abruption is separation of a normally sited placenta from the uterine wall. In most cases, the separation reaches the edge of the placenta, tracks down to the cervix and is revealed as vaginal bleeding. The remaining cases are concealed, and present as uterine pain and potentially maternal shock or fetal distress without obvious bleeding. The fetus is at risk because of hypoxia following placental separation and premature delivery. The mother is at risk of hypovolaemic shock, clotting disorders and consequent more widespread organ damage. The aetiology and pathophysiological consequences of placental abruption are discussed in further detail in Chapter 10, Pre-eclampsia and other disorders of placentation.

Diagnosis

Placental abruption typically presents as vaginal bleeding associated with pain. The pain can be constant, or can occur as frequent short-lasting contractions caused by the irritant effect of blood within the uterus. The patient may report reduced fetal movements, and the cardiotocograph may demonstrate a non-reassuring fetal heart rate pattern. Constant pain associated with a uterus that is very hard on palpation is known as a Couvelaire uterus and is due to a large volume of blood within the myometrium.

Management

The patient should be initially resuscitated using the structured approach of ABC. Management depends on recognition of the problem, realization that true blood loss may be far greater than the blood loss seen, and rapid institution of major haemorrhage management (see below under Postpartum haemorrhage). In very severe cases, the fetus will be dead and vaginal delivery can be accelerated by artificial rupture of the membranes once the mother is reasonably stable. If the fetus is alive, delivery without compromising the mother's resuscitation is urgent, and this will usually be by Caesarean section.

Other causes of antepartum haemorrhage

Other causes of APH include cervical bleeding (ectropion, post-coital), genital tract infection, genital tract tumours, a show and vasa praevia. With the exception of vasa praevia, these generally cause insignificant amounts of blood loss. Vasa praevia is rupture of fetal vessels running within the membranes, often near to the cervical os and damaged when the membranes rupture. It is a rare condition, but it is catastrophic for the fetus as it is fetal blood that is lost. Risk factors include placenta praevia, a velamentous placental insertion and multiple pregnancy. Although relatively small amounts of bleeding are seen, this can represent a large proportion of the total fetal blood volume. Hence, the fetus can rapidly exsanguinate and there is a high risk of fetal death. The cardiotocograph will rapidly become abnormal with a fetal tachycardia, followed by deep decelerations. Although tests for fetal haemoglobin are possible (rarely used in UK practice), the best solution is a high index of suspicion and rapid Caesarean section.

Postpartum haemorrhage

Postpartum haemorrhage (PPH) is probably one of the most common obstetric emergencies. In the UK Confidential Enquiry 2003–5, haemorrhage was the third most common cause of death. It is defined as:

• Primary PPH: loss of >500 mL blood from the genital tract within 24 hours of delivery;
• Secondary PPH: loss of >500 mL blood from the genital tract between 24 hours and 12 weeks post delivery.

It is considered to be minor if the blood loss is between 500 and 1000 mL and major if it is greater than 1000 mL. In practice, blood losses between 500 and 1000 mL are relatively common, and can usually be tolerated well by the woman. Thus, it has been suggested that losses over 1000 mL should trigger emergency PPH protocols. However, it should be remembered that estimation of blood loss is notoriously inaccurate, and if a woman demonstrates evidence of cardiovascular compromise, such as tachycardia, or if there is continued bleeding, then protocols should be instituted even if estimated losses are less than 1000 mL. In common with other obstetric emergencies, PPH can often be predicted and preventative measures undertaken if significant risk factors are present (Table 16.3).
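The PPH definitions above (timing separates primary from secondary, estimated volume separates minor from major) can be encoded as a small classifier. The thresholds (500 mL, 1000 mL, 24 hours, 12 weeks) come from the text; the function itself is an illustrative sketch, not a clinical tool.

```python
# Hedged sketch of the textbook PPH definitions: not for clinical use.

def classify_pph(blood_loss_ml: float, hours_after_delivery: float) -> str:
    """Classify a postpartum bleed by the definitions given in the text."""
    if blood_loss_ml <= 500:
        return "not PPH (loss of 500 mL or less)"
    if hours_after_delivery <= 24:
        timing = "primary"
    elif hours_after_delivery <= 12 * 7 * 24:   # up to 12 weeks post delivery
        timing = "secondary"
    else:
        return "outside the PPH window (more than 12 weeks post delivery)"
    severity = "minor" if blood_loss_ml <= 1000 else "major"
    return f"{timing} PPH, {severity}"

print(classify_pph(800, 2))     # primary PPH, minor
print(classify_pph(1600, 48))   # secondary PPH, major
```

As the text stresses, estimated blood loss is notoriously inaccurate, so in practice protocols should be triggered by signs of cardiovascular compromise or continued bleeding even when the estimate falls below the 1000 mL threshold.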
Aetiology and epidemiology

The causes of PPH can be remembered as the four "Ts":

• Tone: uterine atony;
• Tissue: retained placenta and/or membranes;
• Trauma: injury to vagina, perineum and uterine tears at Caesarean section;
• Thrombin: clotting disorders.

Uterine atony, or failure of the uterus to contract after the delivery of the placenta ("tone"), is the most common cause of PPH and can cause torrential loss of blood immediately following delivery. It can be predicted, and therefore steps taken to prevent it, by the use of oxytocin infusions and active management of the third stage of labour. A retained placenta ("tissue") can also prevent a uterus from contracting efficiently until the tissue is removed. Occasionally, parts of the placenta or membranes can be retained, and this can be identified by careful examination of the placenta following delivery. Almost all types of delivery can cause some degree of genital tract trauma in the form of perineal and vaginal tears, although this is most common following a forceps delivery. Rarely, the cervix can be torn if delivery has occurred before the cervix is fully dilated. More rarely, abnormal blood clotting ("thrombin") can contribute to an excessive blood loss. This can occur in women with an underlying disorder such as Von Willebrand's disease, or platelet disorders. It more commonly arises in women who have developed a consumptive coagulopathy as a result of another obstetric complication, such as a massive placental abruption, an unidentified dead fetus, amniotic fluid embolus or massive haemorrhage.

Diagnosis

Early recognition of blood loss and rapid action is vital in the management of PPH. Appreciation of risk factors, accurate estimation of blood loss and recognition of the maternal signs of cardiovascular compromise are vital. These include a tachycardia, low blood pressure, symptoms of nausea, vomiting and feeling faint, pallor and slow capillary refill (greater than 2 seconds).
It is important to recognize that young, fit women have the capacity to tolerate large amounts of blood loss without demonstrating many clinical symptoms. The earliest symptom will be a tachycardia, and often blood pressure does not fall until massive haemorrhage has occurred (often 1200–1500 mL of blood).

Management

In practice, diagnosis and management of PPH occur simultaneously. The structured ABC approach outlined above should be instituted; this management is summarized in Table 16.4. Rapid fluid resuscitation should occur at the same time as assessing and treating the cause. Since uterine atony is the most common cause, the uterus should be massaged to …

Table 16.3 Risk factors for postpartum haemorrhage
Pre-existing: raised maternal age; primiparity; grand multiparity; uterine fibroids; previous Caesarean; bleeding disorders; obesity; antepartum haemorrhage; previous PPH.
Fetal: large baby; multiple pregnancy; polyhydramnios; shoulder dystocia.
Intrapartum: prolonged labour; Caesarean section; instrumental delivery; pyrexia in labour; episiotomy.
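The maternal signs of cardiovascular compromise listed in the Diagnosis section can be sketched as a simple checklist. The capillary-refill cut-off (greater than 2 seconds) is stated in the text; the heart-rate (>100 bpm) and systolic pressure (<90 mmHg) cut-offs are assumptions made for this sketch, not values from the source, and this is not a clinical tool.

```python
# Illustrative checklist of compromise signs; HR and BP cut-offs are assumed.

def compromise_signs(heart_rate: float, systolic_bp: float, cap_refill_s: float) -> list:
    """Return which signs of cardiovascular compromise are present."""
    signs = []
    if heart_rate > 100:
        signs.append("tachycardia")          # often the earliest sign
    if systolic_bp < 90:
        signs.append("hypotension")          # may appear only after massive loss
    if cap_refill_s > 2:
        signs.append("slow capillary refill")
    return signs

print(compromise_signs(118, 112, 3))
```

Note how the sketch mirrors the text's warning: a tachycardic patient with slow capillary refill can still have a normal blood pressure, so a normal reading does not exclude significant haemorrhage.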
190661
https://www.nagwa.com/en/videos/417146972530/
Question Video: Using the Determinant of a Matrix to Find Missing Values Mathematics • First Year of Secondary School Solve for 𝑥: |0, 5, −5𝑥 and 𝑥, 4, 5 and 4, 1, 3| = 280. Video Transcript Solve for 𝑥: the determinant of the matrix zero, five, negative five 𝑥, 𝑥, four, five, four, one, three is equal to 280. So the first thing we need to do to solve this problem is work out the determinant of our matrix. And if we try to find the determinant of the three-by-three matrix, what we use is the first row. And we use the first row to be our coefficients or the values that we multiply our submatrices by, which are gonna be two-by-two submatrices. And I’m gonna show you how it’s gonna work. It’s worth noting also that these values are gonna be positive, negative, positive, determined by the sign pattern that we have. The first column is positive, so it’ll be positive zero, the second one will be negative, the third one will be positive, et cetera. So what this means is where we have a negative above the column, it’s gonna change the sign of the value that we’ve got from our first row. If you’ve got a positive, then the sign is gonna stay the same. So first of all, we’re gonna have zero multiplied by the determinant of the submatrix four, five, one, three. And we find that submatrix by deleting the row and column that the zero is in. And then, it’s what we’re left with afterwards. So we’d be left with four, five, one, three. Then next, what we have is negative five multiplied by the determinant of the submatrix 𝑥, five, four, three. Again, finding that in the same way, and it’s negative five because, as we said, the second column is negative. And what we mean by this is that the coefficient, our five, is multiplied by negative one. So we get negative five. Then, finally, we have minus five 𝑥 multiplied by the determinant of the submatrix 𝑥, four, four, one. And again, we’ve done this the same way. And this time, we’ve still got negative five 𝑥.
And that’s because the third column is positive. So we multiply negative five 𝑥 by positive one, which gives us the same sign because it doesn’t change. And then, this is all equal to 280. So now, what we need to do is evaluate our submatrices, well, the determinant of our submatrices. And the way we do that is using this method. So if we’ve got a two-by-two submatrix, we can call this one 𝑎, 𝑏, 𝑐, 𝑑. Then, what we do is we multiply diagonally. So we have 𝑎 multiplied by 𝑑 and 𝑏 multiplied by 𝑐. And then, we take away 𝑏 multiplied by 𝑐 from 𝑎 multiplied by 𝑑. Well, for the first term, we don’t have to worry about the submatrix. And that’s because we’ve got zero multiplied by the determinant of the submatrix. So anything multiplied by zero is just zero. And then, we have minus five multiplied by. Now, we’ve got three multiplied by 𝑥, or 𝑥 multiplied by three, which gives us three 𝑥, minus. Then, we’ve got five multiplied by four, which is 20. And then, we have minus five 𝑥 multiplied by. Then, we’ve got 𝑥, and that’s because we had 𝑥 multiplied by one, which is just 𝑥, minus 16 because we have four multiplied by four, which is 16. And this is all equal to 280. So now, what we need to do is distribute across our parentheses. So first of all, we have zero minus. And then, we’ve got five multiplied by three 𝑥, which is 15𝑥, and then plus 100. And that’s because we had negative five multiplied by negative 20. And a negative multiplied by a negative is a positive. And then, we have negative five 𝑥 squared plus 80𝑥. That’s because we’ve got negative five 𝑥 multiplied by 𝑥, which is negative five 𝑥 squared. And we’ve got negative five 𝑥 multiplied by negative 16. Negative multiplied by a negative is a positive. So it gives us positive 80𝑥. This is equal to 280. So now, if we simplify, we get negative five 𝑥 squared plus 65𝑥 plus 100 equals 280. So now, what I wanna do is make this so that I have positive five 𝑥 squared or positive 𝑥 squared.
So I’m going to add five 𝑥 squared to both sides of the equation, subtract 65𝑥 from both sides of the equation, and subtract 100 from both sides of the equation. So I wanna make our quadratic equal to zero. And when I do that, I get zero is equal to five 𝑥 squared minus 65𝑥 plus 180. So now, what we wanna do is solve this to find 𝑥. But the first thing we do to make it easier is divide through by five because five is a factor of each of our terms. So we’ve got zero is equal to 𝑥 squared minus 13𝑥 plus 36. So now, we can solve this using factoring. And to factor, just to remind us what we need to do, we need to find two factors whose product is positive 36 and whose sum is negative 13. So we’re gonna have zero is equal to 𝑥 minus nine multiplied by 𝑥 minus four. And we get that because nine multiplied by four is 36. And we’ve got negative nine and negative four because we need a product of positive 36 but a sum of negative 13, so we know they both need to be negative. And then, negative nine plus negative four gives us negative 13. So great, we factored. So therefore, we can say that the solution for 𝑥 is gonna be equal to positive nine or positive four. And the way we got this is because we want to find the value where the quadratic is equal to zero. So that means that one of our parentheses needs to be equal to zero. So we can set them equal to zero. So we get 𝑥 minus nine equals zero and 𝑥 minus four equals zero. So therefore, if we add nine to each side of the equation, we get 𝑥 equals nine. If we add four to each side of the equation, we get 𝑥 equals four. So therefore, we’ve got a final answer 𝑥 equals nine or four.
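The arithmetic in the transcript can be checked numerically. The sketch below (a minimal Python check; the helper name `det3` is my own, not from the video) expands the determinant along the first row with the alternating +, −, + sign pattern described above, and confirms that both candidate solutions make the determinant equal to 280.

```python
# Check of the worked example: the determinant of
# [[0, 5, -5x], [x, 4, 5], [4, 1, 3]] should equal 280 for x = 9 and x = 4.

def det3(m):
    """3x3 determinant by cofactor expansion along the first row,
    using the alternating +, -, + sign pattern."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for x in (9, 4):
    matrix = [[0, 5, -5 * x], [x, 4, 5], [4, 1, 3]]
    assert det3(matrix) == 280  # matches the right-hand side of the equation
```

For x = 9 the expansion gives 0 − 5(27 − 20) − 45(9 − 16) = −35 + 315 = 280, and for x = 4 it gives 0 − 5(12 − 20) − 20(4 − 16) = 40 + 240 = 280, agreeing with the factoring in the transcript.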
190662
https://pmc.ncbi.nlm.nih.gov/articles/PMC8094529/
Sphingolipid lysosomal storage diseases: from bench to bedside

Muna Abed Rabbo 1, Yara Khodour 1, Laurie S Kaguni 2, Johnny Stiban 1,✉

1 Department of Biology and Biochemistry, Birzeit University, P.O. Box 14, Ramallah, West Bank 627, Palestine
2 Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, MI, USA
✉ Corresponding author.

Lipids Health Dis. 2021 May 3;20:44. doi: 10.1186/s12944-021-01466-0. Received 2021 Feb 9; Accepted 2021 Apr 14. © The Author(s) 2021; licensed under a Creative Commons Attribution 4.0 International License. PMCID: PMC8094529. PMID: 33941173.

Abstract

Johann Ludwig Wilhelm Thudicum described sphingolipids (SLs) in the late nineteenth century, but it was only in the past fifty years that SL research surged in importance and applicability. Currently, sphingolipids and their metabolism are hotly debated topics in various biochemical fields. Similar to other macromolecular reactions, SL metabolism has important implications in health and disease in most cells. A plethora of SL-related genetic ailments has been described.
Defects in SL catabolism can cause the accumulation of SLs, leading to many types of lysosomal storage diseases (LSDs) collectively called sphingolipidoses. These diseases mainly impact the neuronal and immune systems, but other systems can be affected as well. This review aims to present a comprehensive, up-to-date picture of the rapidly growing field of sphingolipid LSDs, their etiology, pathology, and potential therapeutic strategies. We first describe LSDs biochemically and briefly discuss their catabolism, followed by general aspects of the major diseases such as Gaucher, Krabbe, Fabry, and Farber among others. We conclude with an overview of the available and potential future therapies for many of the diseases. We strive to present the most important and recent findings from basic research and clinical applications, and to provide a valuable source for understanding these disorders.

Keywords: sphingolipids, lysosomal storage diseases, inborn errors of metabolism, neurological diseases, sphingolipidoses, Gaucher, Krabbe, gangliosidosis, Fabry

Introduction

As essential components of membranes that play vital roles in a variety of signaling cascades, sphingolipids (SLs) represent a hot topic of metabolic research. SLs not only have structural functions but also play other vital roles in cellular homeostasis, adhesion, signaling, senescence, development, and death [2, 3]. SLs are also involved in the pathology of several immune and neurological diseases. SLs are a major class of lipids that differ from glycerolipids in having a long-chain base backbone (sphinganine or sphingosine, Sph) in lieu of glycerol (Fig. 1). An amide linkage joins a fatty acyl group to the amino nitrogen of the long-chain base, forming the second leg of the hydrophobic tail in the molecule, creating ceramide (Cer). Cer is the parent SL that can serve as the metabolic hub for the generation of other SLs.

Fig. 1. Common structures of representative SLs.
SLs contain a long-chain base, whether sphinganine (saturated) or sphingosine (monounsaturated at C4). N-acylation at the amino group on C2 creates ceramide, which may consist of a varying number of carbon atoms. The representative ceramide shown here is palmitoyl-ceramide, which contains 16 carbons. The addition of a phosphocholine at C1 creates the parent phosphosphingolipid sphingomyelin, and glycosylation at the same carbon generates glucosylceramide.

Cells generate many Cer species differing in their chain length, ranging from 14 to 32 carbons in mammals. This contributes to the first layer of heterogeneity among SLs. Another layer of variation arises from the different attached head groups. Depending on the head groups, sphingolipids can be classified into phosphosphingolipids (e.g., sphingomyelin (SM)) and glycosphingolipids (GSLs). SMs are highly abundant in the myelin sheath surrounding the axonal regions of neural cells. GSLs, on the other hand, are more structurally diverse and contain one or more sugars attached to the Cer moiety. In mammalian cells, the most commonly attached sugars are glucose, galactose, N-acetylglucosamine, N-acetylgalactosamine, sialic acid, and fucose [2, 8]. GSLs can be categorized into four subtypes: cerebrosides, sulfatides, globosides, and gangliosides. Cerebrosides have a single sugar attached to the Cer. Sulfatides have an additional sulfate attached to the cerebroside. Sulfatides are thought to participate in myelin formation and maintenance, in addition to neural cell differentiation. Globosides and gangliosides contain a more complex oligosaccharide attached to the Cer moiety; gangliosides have a negatively charged sialic acid residue on their head group, whereas globosides lack this residue and hence are neutral at pH 7. Irrespective of the type, SL biosynthesis occurs via the same pathways.
SLs can be produced de novo from the condensation of serine and palmitoyl-CoA in the endoplasmic reticulum (ER), through a series of reactions culminating in the generation of Cer (Fig. 2), which can have several fates. Alternatively, the salvage pathway regenerates Cer from Sph and fatty acyl-CoAs. Lysosomal degradation of GSLs is required for the re-utilization of their products in salvage pathways. A number of human genetic disorders of SL biosynthesis have been described. One of the best-documented examples is the adult-onset, hereditary sensory and autonomic neuropathy that is caused by a defect in the first enzyme of SL biosynthesis, serine palmitoyltransferase.

Fig. 2. Three general pathways for the generation of Cer. In mammalian cells, Cer is biosynthesized de novo or generated by catabolism of complex SLs. In the de novo synthesis pathway (purple block arrow), a four-enzyme sequence culminates in the formation of Cer from the amino acid serine and palmitoyl-CoA. This pathway is located in the ER. Sphingomyelin can be hydrolyzed to Cer in the SM hydrolysis pathway (orange block arrow), which is a one-enzyme step. (Degradation of other complex SLs is not shown.) Alternatively, Cer can be produced in the salvage pathway (green block arrows), through the acylation of Sph by the ceramide synthase family of enzymes. The red blocks represent cartoons of the possible structures of the molecules. It should be noted that the acyl-chain length of Cer can vary greatly.

Catabolism of complex SLs is also a source of Cer generation. Most complex membrane lipids are catabolized through the endosomal/lysosomal membrane digestion system, where the degradation products are re-utilized in salvage pathways, achieving eventual membrane homeostasis. Defects in the proteins and enzymes needed for lysosomal degradation can lead to a wide range of inherited lysosomal storage disorders, LSDs.
In LSDs, the lysosome cannot degrade a specific molecule, leading to its accumulation along with other related molecules. LSDs are categorized into five main families: mucolipidoses, mucopolysaccharidoses, sphingolipidoses, glycoprotein, and glycogen storage diseases, depending on the type of the primary stored compound. Sphingolipidoses comprise a whole group of diseases caused by defects in the sequential lysosomal SL degradation pathway. In general, sphingolipidoses have an incidence of approximately 1 in 10,000 individuals. Although this represents a low incidence in most populations, certain populations, especially those that are relatively isolated either geographically or culturally, have a substantially higher incidence [14, 15]. Such disorders cause critical membrane impairment, and hence affect the survival and growth of most cells, especially neural cells. As a result, neurodegeneration, along with other visceral complications, is a significant characteristic of many sphingolipidoses. Sphingolipidoses have a multitude of neurological and immunological manifestations, and these diseases have been studied widely as new therapeutic approaches have become available. Sphingolipidoses have many clinical manifestations in a variety of organ systems. The cardiovascular system, for instance, is also affected by some of these diseases. GM1 gangliosidosis exhibits cardiovascular lesions including cardiomegaly and diffuse, nodular thickening of the mitral and tricuspid valves, while Sandhoff disease patients experience cardiomegaly and mitral regurgitation. Additionally, a subtype of Gaucher disease is defined by cardiac involvement with aortic and valvular calcification. Fabry disease severely affects the health of the cardiovascular system.
Cardiovascular manifestations begin with mitral insufficiency in the pediatric period, followed by left ventricular hypertrophy, congestive heart failure, anginal pain, hypertension, and myocardial infarction in adolescence and adulthood, caused by progressive globotriaosylceramide (Gb3) accumulation in the myocardial cells, coronary arteries, the valvular tissue, and the atrioventricular conduction system [20, 21].

Overview of SL Catabolism

Despite being structurally and functionally diverse, SL biosynthesis and catabolism are both governed by a network of interconnected pathways diverging from a single common starting point, and converging into a common catabolic pathway. Cer serves as a metabolic hub, as it occupies the center of both synthetic and catabolic pathways. SL homeostasis in the cell is tightly regulated through multiple pathways. These pathways may have compensatory functions in some cases in which defective enzymes result in multiple responses to SL imbalance. Consequently, an understanding of SL metabolic networks contributes to greater understanding of the LSDs and subsequent therapeutic design [24, 25]. Because lipids cannot be excreted as readily as hydrophilic molecules, the absence of any single enzyme functioning in the coordinated breakdown pathways of complex SLs leads to the accumulation of lipids inside the cell. In fact, defects in these catabolizing enzymes, especially lysosomal hydrolases, are responsible for a considerable number of LSDs. Membrane GSLs can reach endosomal/lysosomal compartments via autophagy, endocytosis, or phagocytosis [26, 27]. Inside the lysosome, luminal vesicles are formed by successive budding and fission steps, and the lipid composition of these vesicles is controlled by an endosomal lipid-sorting complex. Membrane-stabilizing sterols, including cholesterol (Chol), are sorted by two sterol-binding proteins, NPC1 and NPC2. GSLs are then degraded sequentially on the surface of intra-lysosomal vesicles.
Lysosomal hydrolases are responsible for attacking specific bonds, and cleave single monosaccharide molecules from the non-reducing ends in a stepwise manner. However, the soluble hydrolases cannot attack gangliosides and GSLs directly due to their hydrophobic nature. Thus, their degradation needs more complex cooperation between the hydrolases and other membrane-perturbing and lipid-binding proteins, as well as glycoprotein cofactors and SL activator proteins [1, 27]. SL activator proteins are encoded by two genes: GM2-activator protein (GM2A) and prosaposin, a precursor that produces saposins (Saps) A-D upon post-translational modifications [29–34]. In addition, the polycationic nature of soluble hydrolases requires the anionic environment of the intra-lysosomal membranes at pH 5, which is provided by bis(monoacylglycero)phosphate (BMP), dolichol-phosphate, and phosphatidylinositol. Together, they attract soluble hydrolases to the GSL-containing membranes to facilitate degradation [1, 9]. SM degradation occurs via the action of the sphingomyelinase (SMase) family of enzymes that catalyze the hydrolysis of the phosphocholine head group. SMases fall into three main categories. Alkaline SMases are expressed exclusively in the intestines and liver, and work on dietary SM. Neutral SMases, whose functions are not fully understood, may play roles in inflammatory signals, cell growth, and survival. Acid SMases (aSMases) predominantly metabolize SM present in intra-lysosomal membranes. aSMases can also be excreted to catabolize SM-containing lipoproteins found in the plasma, and other SM molecules found in the ectoplasmic leaflet of the plasma membrane. They are thus thought to play specific signaling roles. All complex SLs can be degraded to produce Cer, which is then converted to Sph via the action of ceramidases. Ceramidases have an organelle-specific expression, allowing the cell to generate distinct SLs with certain sphingoid bases.
Like SMases, ceramidases can be classified according to their pH optima. Whereas acid ceramidase is required for the lysosomal degradation of Cer, neutral ceramidases are necessary for sphingosine 1-phosphate (S1P)-mediated signaling on the plasma membrane. On the tissue level, neutral ceramidases are required for the breakdown of dietary SLs [38, 39]. Alkaline ceramidases work near the plasma membrane. After Cer is deacylated by any of the ceramidases, Sph can be converted to S1P through the action of two sphingosine kinases distributed in the cytosol and other membrane compartments [40–43]. Different isoforms of the alkaline ceramidase family (alkaline ceramidases 1, 2, and 3) are required to maintain high blood levels of S1P in mice. Alkaline ceramidase 2 is specifically necessary to regulate the plasma pools of S1P and sphinganine 1-phosphate. Finally, S1P is degraded by S1P lyase to produce hexadecenal and phosphoethanolamine. Deficiencies in the hydrolases and other ancillary proteins involved in GSL, SM, and Cer degradation lead to the development of sphingolipidoses. About ten different disorders are caused by such deficiencies.

SL-related LSDs: Sphingolipidoses

SLs are catabolized in a strictly sequential manner. Defects in the machinery controlling each step of the pathway are prevalent, and several diseases have been described (Fig. 3). Most SLs are degraded in the lysosome via a single pathway; a deficiency in one enzyme will lead to the accumulation of the molecule to be catabolized. An exception to this pattern is lactosylceramide (LacCer), which can be degraded by two different lysosomal enzyme/activator protein systems; thus, it does not accumulate solely by a deficit in a single enzyme. Nevertheless, LacCer can accumulate along with other substances when multiple factors are absent (e.g., prosaposin).

Fig. 3. Lysosomal SL catabolism and enzyme deficiencies causing storage diseases.
A schematic of the various SL metabolic pathways is presented, indicating the enzymes whose deficiency leads to several diseases. Note that each enzyme is assisted by one or more Saps: GM2A assists both β-gal and β-hexosaminidase; SapB assists sialidase, α-GAL, GALC, and β-gal; SapC assists GALC, β-gal, GCase, and acid ceramidase; acid ceramidase is also assisted by SapD.

To allow for a better understanding of each disease, genes of most deficient enzymes leading to sphingolipidoses have been cloned and targeted in animal models. Despite being sub-classified into types differing in onset, severity, and associated tissues, each sphingolipidosis has a clinical continuum of severity (Table 1).

Table 1. Summary of the Forms and Symptoms of Sphingolipidoses

GM1 Gangliosidosis (defective enzyme: β-gal; gene: GLB1; major accumulating SL: GM1) [46, 48–52]
- Type I (first year; premature death at age 2-3): developmental arrest; seizures; disintegration in the nervous system; stiffening of joints; hepatosplenomegaly; edema; gum hypertrophy; skeletal abnormalities; cherry-red spot (50% of patients); corneal cloudiness followed by blindness and deafness
- Type II, late infantile (7 months-3 years): developmental delay; subsequent dementia; cerebellar, pyramidal, and extrapyramidal signs; possible late loss of vision; no skeletal dysplasia
- Type III (3-30 years): dysarthria and gait disturbances; dystonia in the neck and extremities; extrapyramidal signs; cardiomyopathy

Sandhoff (Variant B) (defective enzyme: α-subunit of β-hexosaminidase; gene: HEXA; major accumulating SLs: GM2, lyso-GM2) [46, 53–57]
- Infantile (Tay-Sachs; 3-6 months): loss of skills; general weakness; seizures; bone abnormalities; cherry-red spot; startle response; demyelination and swelling of neuronal cells; reduction of consciousness, vision, and hearing; eventual spasticity and death
- Juvenile (2-6 years, with death at 10-15 years): progressive spasticity; loss of speech and vision; progressive dementia; infertility
- Chronic (2-5 years, but patients can reach their fourth decade): gait disturbances; posture abnormalities followed by distinct neurological symptoms; no sensory or intellectual impairment. The adult form has heterogeneous symptoms with intact mental and visual capabilities; bipolar psychosis may develop

Sandhoff (Variant O) (defective enzyme: β-subunit of β-hexosaminidase; gene: HEXB; major accumulating SLs: GM2, lyso-GM2, and uncharged glycolipids such as GA2) [46, 58, 59]
- Infantile (6 months): same as Tay-Sachs, with fewer bone deformities and organomegaly
- Juvenile (2-10 years): cerebellar ataxia; slurred speech; psychomotor retardation followed by gradual mental retardation; spasticity
- Adult (late adult life): pyramidal and extrapyramidal signs and symptoms of lower motor neurons; supranuclear ophthalmoplegia; movement problems

Sandhoff (Variant AB) (defective protein: GM2A; gene: GM2A; major accumulating SLs: GM2, GA2) [60–63]
- Onset 3-6 months: muscle weakening; loss of motor skills (crawling and sitting); startle reaction to noises; seizures; loss of vision and hearing; intellectual disability; paralysis

Gaucher Disease (defective enzyme: GCase; gene: GBA1; major accumulating SLs: GlcCer, GlcSph) [46, 64–71]
- Type I, non-neuronopathic (infancy to late adulthood): massive abdominal distension; anemia and thrombocytopenia; defective platelet function (abnormal coagulation); organomegaly; poor development and delayed puberty; bone diseases; hepatopulmonary syndrome; no neurologic symptoms
- Type II (3-6 months, with death at ~2 years): collodion skin; visceral and bone marrow involvement; more severe neurological manifestations: strabismus, fast eye movement, bulbar palsy or paresis, severe hypertonia, rigidity, arching, swallowing impairment, seizures, progressive dementia, ataxia
- Type III (2-5 years): visceral and bone marrow involvement; less severe neurological manifestations with slower progression

Niemann-Pick A and B (defective enzyme: aSMase; gene: SMPD1; major accumulating SL: SM) [46, 72–77]
- NPD-A (early onset; premature death at the age of 3): lymphadenopathy; hepatosplenomegaly; hypotonia; muscular weakness leading to feeding difficulties, followed by decreased platelet count and microcytic anemia; osteoporosis; cherry-red spots in the eye; brownish-yellow color of skin; psychomotor retardation after six months of age; loss of contact with the surroundings
- NPD-B (chronic; ranges from infancy to adulthood): slowly progressive systemic symptoms; no neurodegeneration; hepatosplenomegaly; anemia; thrombocytopenia; liver dysfunction; lung and bone diseases

Farber Disease (defective enzyme: acid ceramidase; gene: ASAH1; major accumulating SL: Cer) [46, 78–81]
- Type I (early onset; premature death at age 2-3 years): hepatosplenomegaly; joint contractures; voice hoarseness; inflammation of subcutaneous nodules, along with other neurological manifestations
- Type II (intermediate): decreased neurological inflammation-related symptoms; longer lifespan
- Type III: mild
- Type IV (neonatal-visceral): organomegaly and visceral manifestations
- Type V (neurological-progressive): progressive neurodegeneration and seizures
- Type VI: combined Farber and Sandhoff diseases and associated symptoms

Fabry Disease (defective enzyme: α-GAL; gene: GLA; major accumulating SLs: Gb3, lyso-Gb3) [82–89]
- Males (onset during childhood or adolescence): corneal dystrophy; acroparesthesia; angiokeratomas and hypohidrosis, followed by progressive multi-system involvement leading to kidney failure, cerebrovascular disease, and hypertrophic cardiomyopathy
- Females (heterozygous: mild late-onset disease or severe disease; homozygous: similar onset as males): range from having no symptoms to severe ones

Krabbe Disease (defective enzyme: GALC; gene: GALC; major accumulating SL: psychosine) [46, 67, 90–92]
- Infantile (3-6 months; premature death between 2-5 years of age): motor dysfunction; seizures; cognitive decline
- Juvenile and adult (a few years to 73 years): dementia; blindness; psychomotor retardation; spastic paraparesis

Metachromatic Leukodystrophy (defective enzyme: ASA; gene: ASA; major accumulating SL: sulfatide) [46, 93–95]
- Late infantile (before 30 months): hypotonia; mental regression; unsteady gait followed by loss of speech; incontinence; blindness; seizures; peripheral neuropathy; complete loss of motor function; loss of contact with the surroundings is observed before reaching 40 months of age
- Juvenile (2.5-16 years): later in onset, but once the ability to walk is lost the disease progresses as in the infantile form; infertility
- Adult (after puberty): variable progression

Niemann-Pick C1 and C2 (defective proteins: NPC1 (gene NPC1) and NPC2 (gene NPC2); major accumulating lipids: Chol and other SLs) [96–99]
- Onset forms: perinatal (up to 2 months); early-infantile (2 months–2 years); late-infantile (2–6 years); juvenile (6–12 years); adolescent/adult (>12 years)
- Systemic: mild thrombocytopenia (newborns or toddlers); prolonged neonatal cholestatic jaundice (in perinatal); hepatomegaly/splenomegaly
- Neurological: vertical supranuclear gaze palsy; gelastic cataplexy; ataxia; dystonia; dysarthria; dysphagia; hypotonia; clumsiness; delayed developmental milestones; seizures; hearing loss
- Psychiatric: psychosis; cognitive decline; developmental delay

Sialidosis (defective enzyme: sialidase (neuraminidase); gene: NEU1; major accumulating compounds: sialyloligosaccharides)
- Type I (second to third decade): macular cherry-red spot; gait abnormalities; decreased visual acuity; normal to slightly impaired intelligence; action myoclonus; intentional tremors; cerebellar ataxia; hyperreflexia; hypotonia may occur; cerebellar atrophy in advanced stages
- Type II, congenital hydropic (in utero): hydrops fetalis (ascites, edema); hepatosplenomegaly; coarse features; stillbirths or death at a very early age; inguinal hernia; cardiac abnormalities; renal abnormalities; respiratory distress; psychomotor retardation; hydrocephalus; seizures; corneal clouding; dysostosis multiplex
- Type II, infantile (0–12 months): coarse features; hepatosplenomegaly; dysostosis multiplex; cherry-red spot; corneal clouding; cataract; hearing loss; inguinal hernia; umbilical hernia; hypotonia
- Type II, juvenile (2–20 years): psychomotor delay; seizures; myoclonic jerks; ataxia; myoclonic epilepsy

GM1 Gangliosidosis

The lysosomal hydrolase GM1-β-galactosidase (β-gal) is assisted by either SapB or GM2A to catalyze the breakdown of GM1 ganglioside to GM2. A defect in this enzyme may lead to GM1 gangliosidosis, an autosomal recessive, neurodegenerative disease with an estimated incidence of 1 in 100,000–200,000 live births. Another disorder, Morquio syndrome type B, may also develop depending on the substrate specificity of the defective enzyme [46, 101]. Any mutation in the GLB1 gene leading to reduced or lost activity of β-gal causes the accumulation of lysosomal GM1. Depending on the specific GLB1 mutation, the residual activity of β-gal differs, leading to a continuum of clinical severity. GM1 gangliosidosis can be classified into three types: infantile (Type I), late infantile/juvenile (Type II), and adult (Type III); Type II can be subdivided further into late-infantile (IIa) and juvenile (IIb) (Table 1). Although GM1 is crucial for many neuroprotective purposes, its massive lysosomal accumulation stimulates neuroinflammatory reactions and the unfolded protein response (UPR) in mouse models of the disease, leading to neuronal death and neurodegeneration.
Although no cure for the disease is currently available, chaperone therapy, substrate reduction therapy (SRT), and gene therapy have been shown to reduce the storage levels of GM1 in the brains of mouse models (see later).

GM2 Gangliosidosis

GM2 gangliosidoses are autosomal recessive, neurodegenerative diseases caused by defects in the machinery responsible for GM2 degradation, leading to the accumulation of GM2 and other related lipids in neural cells. Normally, GM2 is degraded by the coordinated action of the lysosomal β-N-acetyl-hexosaminidase (β-hexosaminidase), which removes the terminal N-acetyl-galactosamine residue from GM2, and the ancillary protein GM2A. β-hexosaminidase has two hydrolytic subunits (α and β) whose different combinations form three distinct isozymes with different substrate specificities. HexA (αβ) cleaves off terminal N-acetylglucosamine and N-acetylgalactosamine residues linked to uncharged and negatively charged glycoconjugates like GM2, whereas HexB (ββ) is more specific to uncharged substrates like the glycolipid GA2 [46, 53]. HexS (αα) is a secondary isozyme that contributes to the degradation of sulfated glycolipids and glycosaminoglycans. A defect in any of the components of the GM2 degradation machinery leads to a different type of GM2 gangliosidosis: variant B (α-subunit deficiency, Tay-Sachs in its infantile form), variant O (β-subunit deficiency, Sandhoff disease), and variant AB (GM2A deficiency) (Table 1). A special variant (B1) has an altered enzymatic specificity of HexA: it has no activity towards negatively charged substrates, including GM2, but its activity towards uncharged substrates remains intact. This is attributed to the conservation of β-subunit activity, subunit association, and enzyme processing, even though the active site of the α-subunit is defective. The symptoms of B1-variant patients resemble those of the juvenile form of the B variant.
However, heterozygotes of B1 and null alleles show the late-infantile course of the disease. In the O variant (Sandhoff disease), the storage of negatively charged glycolipids that characterizes Tay-Sachs disease is accompanied by the storage of other, uncharged glycolipids like GA2 in the brain and other visceral organs (Table 1) [46, 58]. A clinical picture similar to Tay-Sachs disease, but with a delayed onset, can be observed in patients with normal β-hexosaminidase A, B, and S isozymes. These AB-variant patients have a deficient GM2A, leading to the accumulation of GM2 and GA2. Generally, GM2 gangliosidoses and their accumulated compounds (GM2, GA2, and cytotoxic lyso-GM2) cause neuroinflammation and other secondary effects, leading to swollen, demyelinated neurons, mainly of the central but also of the peripheral nervous system, in humans and animal models. Thus, multiple therapeutic strategies, including SRT and gene therapy, have been suggested to decrease the levels of accumulated lipids.

Gaucher Disease

Gaucher disease (GD) is the most common autosomal recessive sphingolipidosis, with an incidence ranging from 0.39 to 5.80 per 100,000 in the general population. GD can be classified into three major types: Type I GD (non-neuronopathic), Type II GD (the neuronopathic acute form), and Type III GD or the juvenile form (neuronopathic sub-acute) (Table 1). Type I GD has a higher prevalence (1 per 850) in Ashkenazi Jews, as compared to 1-2 per 100,000 in non-Jewish populations. Mutations in the GBA1 gene, which encodes glucosylceramide-β-glucosidase (GCase), lead to the accumulation of GlcCer. GCase normally works in coordination with SapC and lysosomal BMP to hydrolyze GlcCer into glucose and Cer. Therefore, in rare circumstances, GD can also be caused by a deficiency in SapC [65, 111]. The reduced cellular capacity to degrade GSL leads to the primary accumulation of GlcCer in cells, particularly phagocytizing macrophages mainly found in the liver, spleen, and bone marrow.
This leads to the development of storage macrophages called "Gaucher cells" that characterize the disease. GlcCer is further metabolized through the action of lysosomal acid ceramidase to produce a secondary storage substance, glucosylsphingosine (GlcSph), which can exit the lysosomal compartment [66, 112]. Accumulated GlcCer and GlcSph in the cytosol can be further hydrolyzed by the non-lysosomal GCase-2 to produce Cer, Sph, and S1P [113, 114]. Although these events were shown to occur peripherally, their occurrence in the brain is not clear.

Krabbe's Disease

Globoid cell leukodystrophy, or Krabbe disease (KD), is another autosomal recessive, neurodegenerative disease, characterized by a defective galactosylceramide β-galactosidase (GALC). GALC uses the help of SapA and SapC to remove galactose from its primary substrate GalCer and from other, secondary galactose-containing SLs, e.g., galactosylsphingosine (psychosine). The primary substrate does not accumulate in the central nervous system (CNS), because it can be degraded by another hydrolytic system (β-gal). Instead, psychosine is the major accumulating product. Psychosine is a cytotoxic substance that causes demyelination by triggering the disintegration of oligodendrocytes and Schwann cells, the myelin-forming cells of the central and peripheral nervous system, respectively. Besides demyelination, KD causes infiltration of large, multinucleated macrophages and perivascular microglia, forming "globoid cells" engorged with undigested storage SLs in the white matter. This is accompanied by astrogliosis and pro-inflammatory cytokine dysregulation. There are different forms of KD: infantile-, juvenile-, and adult-onset (Table 1) [46, 90]. The course of KD is mimicked in the twitcher mouse, in which a premature stop codon in the coding region of the GALC gene was engineered. Other mouse models were modified to show low levels of residual activity.
Such murine models can be utilized for stem cell transplantation and other therapeutic strategies that target multiple pathogenic pathways as a means of reducing disease progression.

Fabry Disease

Fabry disease is a pan-ethnic, X-linked genetic disorder with an approximate incidence of 1 per 117,000 live births in the general population, and 1 per 40,000 male live births. It is caused by a deficiency in α-galactosidase A (α-GAL), leading to the accumulation of Gb3 and other related SLs in multiple cell types. Globotriaosylsphingosine (lyso-Gb3), the deacylated form of Gb3, is the secondary storage metabolite used as a biomarker of the disease, accumulating to high levels in vasoendothelial cells. Lyso-Gb3 was found to play roles in nephropathy and secondary inflammatory events [122, 123]. There are two major types of Fabry disease: the infantile and late-onset forms [84].

Metachromatic Leukodystrophy

Metachromatic leukodystrophy (MLD) is an autosomal recessive LSD, with an incidence of 1 per 40,000-160,000 live births. It is caused by mutations in the gene encoding arylsulfatase A (ASA). ASA, assisted by SapB, catalyzes the conversion of O-sulfogalactosylceramide into GalCer and sulfate. MLD is characterized by the accumulation of sulfatides and other related glycolipids in the lysosome. Because sulfatides are present mainly in the white matter of the brain and in the peripheral nervous system (PNS), forming the myelin sheath, sulfatide accumulation causes predominantly demyelination. Secondarily, a cytotoxic sulfatide derivative, lyso-sulfatide, is thought to play a role in the pathogenesis of the disease. Based on the age of onset, three forms of MLD can be identified: late-infantile, juvenile, and adult (Table 1). A clinical picture similar to MLD is observed in patients with SapB deficiency.
Niemann Pick Disease (Types A, B, & C)

Niemann Pick disease types A and B (NPD-A and B) are autosomal recessive LSDs, with an estimated prevalence of 0.4–0.6 per 100,000. They are caused by a deficiency of aSMase, leading to the accumulation of SM within several cell types, including hepatocytes, macrophages, reticuloendothelial cells, and neurons [72, 74]. Accumulation of SM and related SLs in the monocyte/macrophage system forms the so-called "foam cells" that characterize the disease. Clinically, the symptomatic spectrum of NPD ranges from extremely severe to relatively mild: neurovisceral NPD-A is the most severe form, whereas NPD-B (the chronic visceral form) is at the other end of the spectrum (Table 1). Another type of NPD, type C (NPC), can develop in the absence of an aSMase deficiency. NPC is an autosomal recessive, neurodegenerative disease with an incidence of about 1 per 120,000 live births, caused by mutations in the NPC1 and NPC2 genes that encode Chol-transporting proteins (Table 1); it is characterized by the accumulation of Chol and SLs. Whereas the NPC1 protein is required for the retrograde fusion of lysosomes with endosomes to form hybrid organelles [96, 126], NPC2 is involved in the membrane fission events that regenerate lysosomes from hybrid organelles. Defects in either protein lead to the accumulation of unesterified Chol, SM, GSLs, and Sph. This results in the disruption of endocytosis, of the vesicular fusion between late endosomes and lysosomes, and of calcium ion homeostasis in multiple cell types. Neuronal disruption of these events leads to dementia, loss of cerebellar Purkinje neurons, epilepsy, ataxia, and vertical gaze paralysis.

Farber's Disease

Farber's lipogranulomatosis (or disease) is an extremely rare autosomal recessive LSD caused by mutations in the ASAH1 gene, which encodes acid ceramidase. Acid ceramidase hydrolyzes ceramide with the assistance of SapC or SapD. Enzyme deficiency leads to Cer accumulation.
Farber's disease is classified into different subtypes. Type I patients exhibit severe neurological manifestations culminating in premature death at age 2-3 years [78, 79]. Patients with types II and III have decreased neurological involvement and a longer lifespan, and are therefore termed the "intermediate" and "mild" forms, respectively, although they do show inflammation-related symptoms. Types IV and V are termed "neonatal-visceral" and "neurological-progressive," respectively [78, 80]. Finally, prosaposin deficiency, in which the precursor of all Saps is deficient, may show some clinical manifestations similar to those of Farber's.

Pathophysiology of Sphingolipidoses

The pathogenesis of the sphingolipidoses is a network of multiple interacting mechanisms, beginning with the accumulation of the primary substrate(s) of the deficient enzyme, then spreading to other compartments and progressing to secondary effects and deficiencies, ultimately leading to an intricate pattern of defective storage. The primary cellular response to any LSD is the production of more lysosomes, but because these organelles are deficient in the same enzyme, the newly formed lysosomes are abnormal as well, resulting in a halt in the lysosomal system. This halt is responsible for endocytic, autophagic, and inflammatory abnormalities that eventually cause cellular death. Common factors influencing the pathogenesis of sphingolipidoses are presented hereafter.

Cell-Type-Specific Patterns

The heterogeneity in affected organs observed among the sphingolipidoses is attributed to cell-type-specific glycolipid localization. Lipid storage and its associated pathogenesis occur in tissues in which the accumulating lipid is either predominantly generated or endocytosed [46, 129]. For example, the neural dysfunction observed in GM1 and GM2 gangliosidoses is due to the abundance of sialic acid-containing GSLs (especially GM1 and GM2) in the brain, particularly on the surfaces of nerve cells [130, 131].
In GD, however, the primary accumulation of GlcCer is in macrophages. Macrophages phagocytize other cells, consolidating large amounts of accumulating GlcCer and directly causing pathogenesis in phagocytic cells. Additionally, because the ratio of Cer to GlcCer is important in maintaining the epidermal permeability barrier, many GD patients experience ichthyotic, dry skin due to abnormal transepidermal water loss. In MLD and KD, on the other hand, the major pathological manifestations are severe demyelination and neurodegeneration. These are attributed to the high abundance and importance of sulfatides and GalCer in glycosynapses, myelination, and oligodendrocyte function.

Residual Activity

In LSDs, enzymes may be completely or partially deficient, the latter leaving some remaining (residual) activity. An improperly folded enzyme cannot reach the lysosome and is degraded in the ER, resulting in a complete loss of activity, whereas a less-active mutant enzyme that can reach the lysosome may retain a degree of residual activity. The diversity in the onset and severity of the disease is determined by the residual activity of the dysfunctional lysosomal enzyme. A more severe, early-onset course of disease results from a complete deficiency or extremely low activity of the enzyme, whereas a delayed, milder form can be due to a slight increase in the degree of residual activity [129, 139]. Nonetheless, a patient's phenotype cannot be predicted precisely from this simple correlation; biochemical evaluation of the mutated enzyme is required to determine the molecular basis for the development of the disorder. Further, epigenetic factors may result in phenotypic variability between patients carrying the same mutant alleles. Low residual activity below a certain threshold causes substrate accumulation and a subsequent pathological phenotype.
This ‘threshold theory’ may explain the pseudo-deficiency phenomenon, in which a patient may carry a defective enzyme yet still show a normal phenotype with no substrate accumulation, indicating the presence of above-threshold enzyme activity. It also explains why slight changes in residual activity can significantly ameliorate symptoms. This theory and its associated explanations aided the development of chaperone therapy as a therapeutic approach to many sphingolipidoses.

Nature of Accumulating Storage Materials

The nature of the storage material is a major contributor to the pathogenesis of LSDs, as it may result in the accumulation of other bioactive molecules. Psychosine, which can destabilize membranes due to its detergent-like properties, accumulates in the cells of KD patients. Endogenous psychosine is synthesized by Cer galactosyltransferase (CGT), predominantly expressed in the third stage of oligodendrocyte differentiation and during the Schwann cell myelinating process. Normally, GALC maintains low levels of brain psychosine, but under GALC-deficient conditions, psychosine accumulates to make up about 50% of brain cerebrosides. Psychosine accumulation disrupts lipid raft architecture, leading to the dysregulation of several signaling pathways. Psychosine-induced inhibition of protein kinase C (PKC), which normally activates Schwann cell proliferation in the PNS [145, 146], causes synaptic dysfunction, demyelination, and axonal defects. In the CNS, both exogenous and endogenous psychosine cause oligodendrocyte cell-body atrophy and apoptosis. Psychosine also induces cell death via the activation of secretory phospholipase A2, which produces lysophosphatidylcholine and arachidonic acid, leading to oligodendrocyte death. Psychosine further inhibits the oligodendrocyte survival-signaling pathways Akt and ERK [148, 149].
Moreover, even if some oligodendrocytes survive psychosine toxicity during differentiation, psychosine inhibits oligodendrocyte peroxisomal function by inhibiting the expression of PPARα, which normally induces the expression of the peroxisomal proteins DHAP-AT and PEX11, responsible for myelin formation and maintenance. Therefore, psychosine contributes to the pathogenesis of KD in the CNS by impeding normal oligodendrocyte differentiation and subsequent maturation, leading to demyelination. Nevertheless, the complex neurological dysfunction observed in KD patients is not due to demyelination alone; rather, it is a combination of demyelination and inhibition of fast axonal transport. Psychosine accumulation blocks fast axonal transport by stimulating axonal GSK3β and PP1, altering their interaction with membrane rafts. These proteins abnormally phosphorylate and inhibit kinesin light chain, thereby inhibiting the activity of the motor protein required for fast axonal transport. Additionally, microglial cells are affected by psychosine accumulation after phagocytizing myelin aggregates and damaged oligodendrocytes. Psychosine appears to inhibit cytokinesis in the microglial cell cycle via an unknown pathway, resulting in the formation of the multinucleated globoid cells that characterize KD [151, 152]. Compared with psychosine, much less is known about the effects of lysosulfatide, lyso-GM1, and lyso-GM2. Lysosulfatide is a cytotoxic compound that accumulates in the brains of MLD patients and ASA-deficient mice; it has been suggested to contribute to disease pathology through lipid raft disruption. Lyso-GM1 and lyso-GM2 accumulate in GM1 and GM2 gangliosidoses, respectively. Although the exact mechanisms by which they contribute to pathogenesis are still unknown, both inhibit PKC [152, 154]. GlcSph is another cytotoxic material elevated in the brains of GD type 2 and 3 patients.
GlcSph, together with GlcCer, hexosylsphingosine, and BMP, and the associated altered SL/Chol content, contributes to the disruption of membrane raft architecture, thereby impairing cell signaling and calcium homeostasis and producing other secondary effects [155, 156]. Other secondary metabolites unrelated to the defective enzyme may also accumulate in LSD cells. In NPC patients, for instance, secondary storage of GM2 and GM3 is caused by defects in trafficking and lysosomal calcium ion homeostasis. Although not completely understood, Chol accumulation in many sphingolipidoses is also caused by defects in lipid trafficking [157, 158]. Another interesting feature of various LSDs is the accumulation of α-synuclein, a protein that characterizes Parkinson's disease and is usually found in the presynaptic termini of brain neurons [159, 160]. α-synuclein oligomers are found in GD, KD, and NPC patients. α-synuclein may also aggregate with other lipids to form Lewy bodies, which were found in brain samples of GD and GM2 gangliosidosis patients. α-synuclein aggregates might participate in pathogenesis via multiple secondary effects, including altered calcium ion homeostasis, inhibited autophagy, and disrupted mitochondrial function.

Secondary Effects

Inflammation and Cytokine Release

One of the first innate immune responses against infection, injury, or damage is the acute inflammatory response. It is initiated by immune cells when they recognize damage-associated molecular patterns released from injured or dying cells [165, 166]. The response involves the release of inflammatory cytokines that drive leukocyte migration into tissues. Normal acute inflammation stops once the trigger disappears, whereas chronic systemic inflammation involves continuous activation of the inflammatory response, resulting in attacks on neighboring cells and causing their death.
The role of inflammation in the pathogenesis of sphingolipidoses was first established in GD patients and models. The GD storage substrates, GlcCer and GlcSph, accumulate mainly in macrophages, resulting in their abnormal activation. Dysfunctional macrophages activate their inflammasome, due to the impaired autophagic process, which leads to unregulated secretion of interleukin 1β (IL-1β). Furthermore, the levels of several other cytokines, including tumor necrosis factor α (TNF-α) and chitotriosidase, are elevated in the plasma of GD patients [168, 169]. These mediators recruit other immune cells, including additional macrophages and neutrophils, to the site of inflammation. Because these cells carry the mutation, however, their arrival amplifies the disease. Moreover, GD patients suffer from increased immunoglobulin production (known as gammopathy). Monoclonal gammopathies result in increased susceptibility to myeloid cancer through increased levels of IL-6 and IL-10: IL-6 contributes to an expansion of myeloid cells, while IL-10 promotes the production of autoantibodies and B-cell lymphomas [128, 170]. M2 macrophage activation may link GD to cancer, though the mechanism is not fully understood [65, 171]. In neuronopathic GD mouse models, increased levels of macrophage colony-stimulating factor, TNF-α, IL-1β, and TGF-β contribute to neuroinflammation [168, 172]. This cytokine release is linked with microglial activation that results in neuronal cell death. Mechanistically, once GlcCer levels surpass a specific threshold, neurons trigger signaling cascades that result in microglial activation. Microglial activation, in turn, induces a neuroinflammatory cascade leading to the release of cytokines and reactive oxygen species (ROS), as well as increased permeabilization of the blood-brain barrier (BBB).
These events lead to chronic neuroinflammation that ultimately causes neuronal apoptosis through receptor-mediated caspase activation, followed by caspase-dependent and caspase-independent activation of mitochondrial cell death. Microglial activation followed by neuronal cell death is also observed in GM1 and GM2 gangliosidosis mouse models. The pathogenesis of Fabry disease is also partly caused by inflammation. Fabry disease patients have high nitric oxide and lipid peroxidation levels, as well as abnormal glutathione metabolism indicative of enhanced ROS production. ROS-induced oxidative protein damage contributes to the generation of neoantigens that induce autoimmune disorders. Studies on Fabry disease knockout mice showed that Gb3 storage leads to disruption of the CD1 antigen-presentation pathway and of invariant natural killer T cell distribution. Gb3 and lyso-Gb3 also induce the constitutive secretion of proinflammatory cytokines via a Toll-like receptor 4 (TLR4)-mediated pathway. Inflammation is also implicated in Farber disease pathogenesis: in knock-in mouse models, Cer accumulation results in an early elevation of multiple proinflammatory cytokines, mainly monocyte chemoattractant protein 1, which cause the formation of the subcutaneous nodules and other pathological manifestations characterizing the disease. Although inflammation is a secondary effect resulting from downstream cascades in sphingolipidoses, it might be targeted in LSD therapeutic approaches to alleviate inflammatory symptoms. Non-steroidal anti-inflammatory drugs (NSAIDs), for instance, were used to treat Sandhoff mouse models with elevated levels of macrophage inflammatory protein α, thus preventing the recruitment of immune cells to the brain and the subsequent neuroinflammation. NSAIDs were also used in treating NPC1 mouse models.

Calcium Ion Homeostasis

Calcium is a crucial factor in the regulation of myriad cellular events.
An intracellular defect that impairs Ca2+ homeostasis will lead to ER stress, oxidative stress, and eventually cell death. The mechanisms by which impaired Ca2+ homeostasis arises vary among the different LSDs, depending on the type of interaction between the storage material and specific Ca2+ pumps or channels in different organelles. Depending on the defective organelle, impaired Ca2+ homeostasis can be classified as altered ER, mitochondrial, or lysosomal function (Fig. 4).

Fig. 4. Summary of the major cellular interactions leading to the neurological features of sphingolipidoses. A schematic of the main events that lead to caspase-dependent and -independent activation of neuronal apoptosis through myriad intercalated pathways, such as the disruption of TFEB fine-tuning, impaired Ca2+ homeostasis in the smooth ER, mitochondria, and lysosomes, lysosomal membrane permeabilization, and impaired autophagy, along with others (not depicted here). Proteins are depicted as oblong shapes, while lipids are shown as circles and cellular events as rectangles. Upward green arrows represent an increase in cellular concentration, while downward red arrows illustrate a decreased concentration. Note that organellar sizes are not to scale. This figure was created in Biorender.com.

Altered ER Ca2+ homeostasis can be observed in the neuronal forms of GD and KD, in which increased ER Ca2+ release occurs due to direct modulation of the ryanodine receptor by GlcCer and psychosine, respectively [180, 181]. In Sandhoff disease and NPD-A, cytosolic Ca2+ uptake into the sarcoplasmic reticulum by the sarco/endoplasmic reticulum Ca2+-ATPase (SERCA) is decreased. In Sandhoff disease, this uptake reduction is attributed to the modulation of SERCA activity by the protruding sialic acid moiety of the ganglioside GM2.
Sarcoplasmic reticulum Ca2+ stores are also depleted in GM1 gangliosidosis, through the interaction of GM1 with the phosphorylated inositol trisphosphate-gated Ca2+ channel. Moreover, GM1, GM2, and GM3 interact with, and reduce the activity of, the plasma membrane Ca2+-ATPase (PMCA), which pumps cytosolic Ca2+ into the extracellular space. In liposomes containing high SM, PMCA activity is diminished, possibly through SM interference with the proper folding of PMCA, or through alteration of raft compartmentalization that changes its interaction with other binding proteins. Mitochondria are strongly engaged in Ca2+ signaling, both by providing the energy required for its transport and by directly participating in its signaling events. A defect in Ca2+ homeostasis causes severe mitochondrial damage in at least two sphingolipidoses. GM1 accumulation in mouse embryonic fibroblasts (MEFs) from β-gal-deficient mice increases the mitochondrial Ca2+ load, leading to stimulation of the mitochondrial apoptotic pathway. On the other hand, cytosolic Ca2+ levels are elevated in KD-mimicking oligodendrocytes, inducing transient mitochondrial membrane hyperpolarization, followed by depolarization and apoptosis. Impaired lysosomal Ca2+ homeostasis is observed in NPC1-inactivated cells as a result of Sph accumulation. Sph accumulation is the first event occurring after NPC1 inactivation, followed by an alteration in lysosomal Ca2+ levels caused by Sph storage. This Ca2+ defect is caused by altered nicotinic acid adenine dinucleotide phosphate (NAADP)-mediated lysosomal Ca2+ signaling. NAADP is a potent Ca2+-releasing second messenger that targets lysosomal Ca2+ channels to modulate the Ca2+ levels required for proper endolysosomal trafficking. Therefore, a defect in this pathway eventually leads to altered endocytosis and vesicular fusion in NPC1 fibroblasts, macrophages, astroglia, and cerebellar Purkinje cells.
Under normal circumstances, Ca2+ is released from the lysosomal lumen into the cytosol via TRPML1. Its release stimulates the kinase activity of the mTORC1 complex and the Ca2+-dependent phosphatase calcineurin. Activated mTORC1 phosphorylates multiple targets, including the transcription factor TFEB, which becomes inactive upon phosphorylation. Calcineurin, on the other hand, dephosphorylates TFEB, allowing its translocation to the nucleus, where it drives the transcription of target genes responsible for autophagy regulation and for lysosomal biogenesis and function. Although TRPML1-mediated Ca2+ release may appear to simultaneously activate and inhibit TFEB, it may be an important factor in the fine-tuning of TFEB activity and nuclear translocation. The accumulation of lysosomal SM in NPC cells inhibits the activity of TRPML1, causing inhibition of lysosomal trafficking. As a result, drugs regulating the expression of TFEB or other lysosomal-trafficking regulators may be a potential future therapeutic strategy for LSDs. Taken together, these findings suggest that altered Ca2+ homeostasis in the ER, mitochondria, or lysosomes is involved in LSD pathogenesis, owing to signaling crosstalk and physical contact among the three organelles (Fig. 4).

Impaired Autophagy

Dysfunctional autophagy is a principal pathophysiological mechanism in multiple LSDs. Lysosomal autophagy is the process by which the cell degrades its macromolecules and damaged organelles to maintain physiological and cellular homeostasis. There are four steps in the autophagic degradative pathway: autophagosome formation, autophagosome-lysosome fusion, autophagosome degradation, and lysosomal membrane recycling. A defect in one or more of these steps leads to autophagic impairment.
In several sphingolipidoses, impaired autophagosome degradation leads to increased levels of the autophagosome marker LC3-II, damaged mitochondria, and polyubiquitinated proteins, which are putative stimulators of apoptotic cell death. In addition to the accumulated autophagosomes, Beclin-1, a major autophagy regulator, increases. This suggests that the cell attempts to compensate for impaired autophagic degradation by creating more autophagosomes, which in turn increases the amount of damaged material inside the cell, further worsening lysosomal trafficking. As a consequence of impaired autophagy and autophagosome accumulation, cellular levels of the autophagic receptor p62/sequestosome-1 increase. p62/sequestosome-1 recognizes ubiquitinated proteins and selectively targets them for autophagy. In the brains and astrocytes of GD mouse models, there is an increase in p62 along with dysfunctional mitochondria, ubiquitinated proteins, and insoluble α-synuclein, indicative of aberrant autophagy [200, 201]. Moreover, a Drosophila neuronopathic GD model shows severe lysosomal defects, neurodegeneration, and reduced lifespan. Taken together, these results suggest that dysfunctional autophagic flux is a central mechanism underlying neurodegeneration in several LSDs.

Lysosomal Membrane Permeabilization and Cell Death

Lysosomal membrane permeabilization (LMP) causes lysosomal contents to be extruded into the cytosol, eventually leading to cell death. LMP is activated by multiple factors, including oxidative stress and cytosolic Ca2+ [204, 205]. The lysosomal membrane is more susceptible to oxidative attack than other membranes because lysosomes, as the degradation site for heme, contain high levels of iron. Intra-lysosomal iron reacts with hydrogen peroxide to produce free radicals that destabilize the lysosomal membrane, leading to its permeabilization. In the lysosomal membrane, Hsp70 binds to BMP and inhibits LMP.
When cytosolic Ca2+ levels increase, the mammalian cysteine protease calpain is activated. Calpain then cleaves Hsp70, thus sensitizing lysosomal membranes to LMP. This eventually leads to neuronal cell death in a cathepsin-dependent manner. LMP can eventually result in lysosomal damage, autophagosome accumulation, and ultimately cell death. Interestingly, Cer channels are present in lysosomal membranes, which may also lead to permeabilization, as they do in the MOM [208, 209]. As cathepsins exit the lysosomes, cell death ensues. Although liberated cathepsins function optimally at acidic pH, some cathepsins (e.g., B, D, and L) can perform their proteolytic cleavage at neutral pH. These cathepsins can proteolytically activate specific molecules involved in cell death cascades, including the BH3-only protein Bid, which in turn activates other members of the Bcl-2 family (e.g., Bak and Bax). Activated Bak and Bax can then trigger mitochondrial permeabilization and cytochrome c release, thereby initiating mitochondrial caspase-mediated cell death. Other BH3-only proteins such as Noxa are also involved in LMP-activated cell death. Some caspase-independent cell death pathways, such as the RIPK1- and RIPK3-dependent pathways, can be activated by LMP as well. LMP-induced cell death is observed in models of several sphingolipidoses. LMP induces neurodegeneration in aSMase-knockout mice (NPD-A mouse models) through the release of cathepsin B, which causes autophagic impairment and cell death. Similarly, microglia and astrocytes of neuronopathic GD mouse models show translocation of cathepsin D to the cytosol. RIPK3-deficient mice are protected against neuronopathic GD induced chemically by conduritol B epoxide, an irreversible inhibitor of GCase. This suggests that LMP participates in RIPK3-mediated cell death in neuronopathic GD.
Moreover, an absence of caspase activity in combination with elevated levels of RIPK1 and RIPK3 in neural cells of GBA-deficient mice suggests that the mode of neuronal cell death is independent of caspases, even at times of advanced neurodegeneration. The elevated levels of RIPK1 in microglial cells also suggest its participation in neuroinflammation. However, these caspase-independent cell death pathways are not observed in GM1 gangliosidosis, NPC, and Sandhoff disease models. Because lysosomal destabilization contributes to sphingolipidosis pathology, LMP inhibition by the chaperone Hsp70 could be a potential therapeutic strategy, warranting further investigation in current clinical trials.

ER Stress and the Unfolded Protein Response

A quality control system in the ER determines whether a given protein is properly folded. If a protein fails to adopt a proper conformation, it accumulates, creating ER stress. To restore homeostasis, cells activate the UPR. The UPR can be activated via three ER transmembrane proteins that represent the major sensors in eukaryotes: IRE1, ATF6, and PERK. ER chaperones are key players in the UPR; they bind to unfolded proteins and/or translocate them to the cytoplasm. If the UPR fails to achieve homeostasis by decreasing ER stress, it eventually leads to apoptosis. The UPR can also be activated indirectly by depleted ER Ca2+ levels. Activation of the UPR has been documented in GM1 gangliosidosis mouse models, in which upregulation of the transcriptional regulator CHOP and the chaperone BiP was observed. GM1 accumulation in these models decreases Ca2+ levels in the ER through SERCA inhibition. This may activate the UPR, leading eventually to apoptotic cell death. Furthermore, patients with other sphingolipidoses, including GD type 2 and Tay-Sachs disease, show increased UPR activation. The UPR participates in the pathology of KD in a mutation-dependent manner.
Different mutations of the GALC gene stimulate varying combinations of UPR sensors, resulting in varying residual activities of the mutated enzyme and leading to differential pathological severity. These results suggest that increased translocation of the enzyme to the cytosol decreases its trafficking to the lysosome, and thus its residual activity, eventually leading to more severe pathological manifestations.

Impaired Lipid Trafficking and Endocytosis

Endocytosis and vesicular trafficking rely largely on SL and Chol levels. Lipid mis-sorting is a common feature of sphingolipidoses. For example, caveolae-internalized BODIPY-labeled LacCer accumulates in endosomes/lysosomes of LSD fibroblasts due to faulty intracellular Chol distribution. Similar mis-sorting of BODIPY-LacCer is observed in GCase-inhibited cells, resulting in increased storage of GlcCer. Such mis-sorting was reversed by lowering Chol and GlcCer levels in LSD fibroblasts and GD cell models, respectively, suggesting that impaired lipid trafficking is a secondary consequence of Chol accumulation in multiple LSDs. Impaired trafficking is not exclusive to membrane lipids; it is also observed for membrane proteins. Trafficking of both mannose 6-phosphate and transferrin receptors is impaired in MLD mouse models. In NPC mouse models, mannose 6-phosphate receptors are concentrated in late endosomes, suggesting that endosomal pools of plasma membrane receptors are enlarged in multiple sphingolipidoses. There is a strong correlation between increased lipid storage and impaired endocytosis. In models of four sphingolipidoses (NPD-A, NPC, Fabry disease, and GD), endocytosis is disrupted in a Chol-dependent manner. The activities of pinocytosis, macropinocytosis, and clathrin- and caveolin-dependent endocytosis, as well as intracellular lipid and protein trafficking, are affected.
Hence, targeting pathways involved in lipid and protein trafficking could serve as a potential therapeutic approach to alleviate pathogenesis in sphingolipidoses.

Mitochondrial Function and Oxidative Stress

The physiological integrity of non-mitotic neural cells depends on the coordination between the degradative role of lysosomes and the energy-production capacity of mitochondria. Therefore, any lysosomal impairment could affect mitochondrial morphology, trafficking, and/or degradation, particularly in neural cells [162, 224]. Mitochondrial morphological abnormalities are accompanied by dysfunctional mitochondrial respiration and a reduction in mitochondrial membrane potential in neurons and astrocytes of neuronopathic GD mouse models. Neurons of NPC1 mouse models have smaller mitochondria with decreased membrane potential and ATP production. Human embryonic stem cell-derived neurons with decreased NPC1 activity have fragmented mitochondria and decreased activities of mitochondrial proteins, but no change in membrane potential. Trafficking of mitochondria towards energy-requiring regions of the cell is an important aspect of mitochondrial function, especially in neurons with their long axons. Such trafficking requires the spatiotemporal fine-tuning of intracellular Ca2+ levels. Because Ca2+ homeostasis is impaired in several LSDs, mitochondrial trafficking is also expected to be impaired. Indeed, psychosine-treated neurons show a reduced rate of mitochondrial movement in axons in vitro, suggesting a potential pathogenic mechanism of KD. Mitophagy is a specialized autophagic pathway that removes abnormally-shaped and fragmented mitochondria. Mitophagy begins with the accumulation of PINK1 on the mitochondrial outer membrane (MOM), triggered by a reduction in mitochondrial membrane potential. PINK1 undergoes autophosphorylation, followed by the phosphorylation and recruitment of Parkin, which ubiquitinates MOM proteins.
Ubiquitinated proteins, in turn, recruit both nuclear domain 10 protein 52 (NDP52) and optineurin, which bind to microtubule-associated protein 1A/1B light chain 3 (LC3), triggering autophagosome formation around dysfunctional mitochondria. Meanwhile, Parkin also interacts with Beclin-1, further promoting mitophagy [228, 229]. Therefore, when mitophagy is aberrant, dysfunctional mitochondria accumulate. Dysfunctional mitochondria have disrupted respiratory chains that accumulate ROS, causing oxidative damage to cellular DNA, lipids, and proteins, events that characterize several LSDs. In B lymphocytes derived from human NPD-B patients, there are significant changes in autophagosome accumulation and mitochondrial fragmentation, along with induction of mitophagy and aberrant lipid trafficking. GD fibroblasts and activated GM1/GM2 gangliosidoses microglia/macrophages have upregulated apurinic endonuclease 1, an oxidative-damage DNA repair enzyme, and elevated levels of inducible nitric oxide synthase and nitrotyrosine. Also, NPC fibroblasts contain oxidized lipids, proteins, and DNA. Similarly, the brains of GM2 gangliosidosis mice suffer oxidative damage and the induction of cell death. Permeabilization of the MOM via Cer channels [3, 208, 235–237], which are enhanced by pro-apoptotic Bcl-2 family proteins, is another mechanism relating mitochondrial health to LSDs. SL pathology can therefore impact mitochondrial apoptosis. Thus, disrupted mitochondrial clearance and oxidative stress appear to be common pathological pathways in several LSDs. Systematic investigation of the involvement of these pathways is needed to be able to target them with novel therapeutic approaches.

Therapeutic Approaches to Sphingolipidoses

Treatments for SL-related LSDs are based on two major concepts: either the treatment is targeted to decrease the concentration of accumulating substrates (depicted in Fig. 5, gold circles) or to reduce the rate of their synthesis (Fig. 5, orange circles). The former strategy focuses on increasing the residual activity of the hydrolytic enzyme by raising the concentration of wild-type enzyme above threshold levels, which can be achieved via multiple approaches. The latter strategy aims to reduce the influx of the substrate into the lysosome.

Fig. 5. General therapeutic strategies for the treatment of sphingolipidoses. There are five main therapeutic approaches to treat sphingolipidoses. Substrate reduction therapy (SRT, orange) involves the prevention of influx of substrates into the lysosome, to lower the synthesis of the accumulating substance. The other treatment strategies (gold) involve enhancing the activity of the missing or malfunctioning enzyme. Enzyme replacement therapy (ERT) uses purified enzyme to reverse the pathology (the crystal structure of α-GAL is shown). Chaperone therapy (CT) is used to assist the folding of misfolded enzymes so they can be targeted to the lysosome. Bone marrow transplantation and stem cell transplantation (BMT) are used to supply the body with the correct form of the missing enzyme, and gene therapy (GT) is used to modify the genes responsible for the aberrant phenotype.

Enzyme Replacement Therapy (ERT)

ERT supplies the active enzyme exogenously to patients weekly or biweekly. Deficient cells take up the recombinant enzyme by receptor-mediated endocytosis and then transport it to lysosomes, where it functions. Therapeutic enzymes are derived from genetically-modified plants, model animals, or human cells. The mannose 6-phosphate receptor system, found in nearly all cells, is generally used to target the enzyme for uptake. ERT was found to improve the course of many LSDs by reducing the accumulating substance, decreasing organomegaly, and ameliorating the function of many organs. For example, most GD type 1 patients respond well to ERT using recombinant enzymes such as imiglucerase.
ERT patients show improved platelet counts and hemoglobin concentrations, decreased splenomegaly, and reduced skeletal pain and bone-related symptoms within six months of enzyme administration. Other disease manifestations, however, need longer periods to improve. Notably, because the recombinant enzymes cannot cross the BBB, they do not improve the neurological manifestations of types 2 and 3 GD. Nevertheless, a recent study in a neuronopathic GD mouse model used a non-invasive, CNS-selective delivery system mediated by nanovesicles of SapC and dioleoylphosphatidylserine to deliver GCase to deficient cells of the CNS. Treated mice showed improvements in neurodegeneration, brain inflammation, and the associated phenotype compared to controls. ERT was also tested on Fabry disease patients and MLD mouse models. For MLD, intravenous administration of Metazym (recombinant human ASA) did not show a beneficial effect on the CNS- and PNS-related manifestations. However, in humanized ASA-knockout MLD mouse models, a three-fold decrease in PNS and CNS sulfatide accumulation was achieved by increasing the catalytic rate constant of the intravenously-administered enzyme. Also, recombinant human ASA was used in clinical trials and is under development for intrathecal administration in patients with late-infantile and juvenile MLD. Intrathecal injection was also shown to be a potential approach for treating infantile NPC. For Fabry disease, on the other hand, two α-GAL preparations (agalsidase-α and agalsidase-β) were authorized by the European Medicines Agency to treat the disease, as they were found to aid Gb3 clearance, improve pain, and decrease the occurrence of complications upon prolonged treatment [247, 248]. Nonetheless, both enzymes have limited effects on cerebral, renal, and cardiac disease manifestations [249–251].
ERT was shown to decrease the visceral but not the neurological manifestations of Farber disease in mouse models, although no cure for the disease is available to date. The variability in the clinical efficacy of ERT can be attributed to the wide variation in pathological manifestations exhibited by patients, and to the immune response of patients toward the recombinant enzyme, which may limit the efficacy of the treatment.

Enzyme Enhancement Therapy/Chaperone Therapy

Newly-synthesized enzymes must adopt the correct conformation to function properly; otherwise, misfolded enzymes are degraded by the proteasome. Abnormal folding may result from the genetic mutations that characterize multiple LSDs, eventually preventing the enzyme from reaching its destination and performing its function. Some missense mutations, however, may produce mutant enzymes whose function can be restored, at least partially, by the use of small stabilizing molecules, or chaperones. The efficacy of this “chaperone therapy” (CT) was first investigated in vitro using different mutant forms of α-GAL, the deficient enzyme in Fabry disease. 1-Deoxygalactonojirimycin (migalastat) is an analog of the natural substrate that binds reversibly to the active site of the enzyme with very high affinity, stabilizing it and resulting in decreased levels of the storage material, Gb3. In vivo studies also showed decreased Gb3 levels in α-GAL knockout mouse models upon oral administration of migalastat. Multiple carbohydrate analogs and non-carbohydrate molecules that increase the activity of the defective GCase in GD cells have been evaluated. Isofagomine (afegostat tartrate) and ambroxol were found to be promising in preclinical and early clinical studies, respectively. Isofagomine binds and stabilizes mutant (and wild-type) GCase, leading to increased catalytic activity of the enzyme in the bone, spleen, liver, and lung of non-neuronopathic GD transgenic mouse models.
Upon oral administration, isofagomine decreased neurological manifestations and neuroinflammation in neuronopathic GD mouse models. Ambroxol, on the other hand, was shown to stabilize wild-type GCase under high-temperature conditions, and its affinity for GCase increased at lysosomal pH in vitro. In vivo, subcutaneous injections of ambroxol increased GCase levels in the spleen and liver of transgenic non-neuronopathic GD mice. In another sphingolipidosis, GM1 gangliosidosis, β-gal activity was enhanced in transgenic animal models by the small molecules 5N,6S-(N'-butyliminomethylidene)-6-thio-1-deoxygalactonojirimycin and N-octyl-4-epi-β-valienamine. Despite the promising efficacy of CT in treating multiple LSDs, its use faces challenges that need evaluation in future research. These include insufficient increases in enzymatic activity that result in non-significant benefits, and the mutation-specific unresponsiveness of some defective enzymes to molecular chaperones.

Bone Marrow and Stem Cell Transplantation

Cell-mediated therapy is based on using stem cells as delivery vehicles to carry either normally-expressed or genetically-overexpressed enzymes that are deficient in host cells. These cells can self-renew and differentiate into healthy tissue to produce the deficient enzyme, restoring lysosomal function and preventing the accumulation of storage material. Hematopoietic stem cell (HSC) transplantation/bone marrow transplantation (BMT) was the first and, before the development of ERT, the only method to treat LSDs. HSCs are multipotent progenitor cells that can differentiate into all types of blood cells. Compensation for the defective enzyme is achieved in the neurons of the CNS and PNS via the partial replacement of the host’s microglial cells by donor HSCs. Donor HSCs are derived from peripheral blood, bone marrow, or umbilical cord blood, and can cross the blood-brain and blood-nerve barriers and differentiate into fully functional microglia/macrophages.
For instance, umbilical cord blood from unrelated donors can be used to treat infants with infantile KD. When performed before the appearance of pathological symptoms in newborns, this approach increased blood GALC levels and allowed progressive myelination of CNS neurons, age-appropriate cognitive function, and developmental skills. However, patients showed mild-to-severe delays in motor function and mild-to-moderate delays in expressive language. In minimally symptomatic Farber disease patients, on the other hand, BMT improved peripheral manifestations but failed to halt neurological degradation. In MLD, stably engrafted allogeneic HSCs showed only a slow substitution of ASA-deficient cells, leading to delayed disease stabilization at 12-24 months; therefore, transplantation of bone marrow-derived HSCs is inappropriate for treating patients with late-infantile MLD. This may be attributed to the long lifespan of microglia, which slows their repopulation in the brain. An alternative strategy is the use of multipotent neural stem cells (NSCs) delivered directly to the brain by intracranial injection. Ex vivo genetic modification is used to increase the expression of the required enzyme before transplantation. This approach has been evaluated in Tay-Sachs mouse models and showed increased levels of β-hexosaminidase in their brains. Delayed disease onset and reduced storage were also shown in neonatal Sandhoff mouse models treated with unmodified murine NSCs. Moreover, unmodified human and immortalized murine NSCs also showed a therapeutic effect when used in neonatal NPD-A mice, resulting in decreased brain Chol levels and reduced neural and glial vacuolation. Neonatal NPC1-diseased mice showed delayed ataxia onset and increased Purkinje cell survival upon treatment with NSCs. Other types of stem cells, such as mesenchymal stem cells (MSCs), can also be used to treat NPC1, as bone marrow-derived MSCs were shown to decrease inflammation and apoptosis in the brains of NPC1-diseased mice.
Although primary research findings show promising therapeutic results for stem cell therapy, multiple factors should be optimized before applying it to human brains. These include, but are not limited to, determining the dose of cells and the target sites of injection; evaluating proper ex vivo genetic modification of cells to maximize the amount of cross-corrected enzyme; characterizing a non-tumorigenic human stem cell source; and immunosuppressing the patient for allogeneic transplants. The latter issue could be addressed by using modified human induced pluripotent stem cells (iPSCs), produced by introducing embryogenesis-related genes into adult somatic cells, as they are derived from the patient’s own fibroblasts. However, iPSC reprogramming has its own challenges that need to be overcome first.

Gene Therapy

Because most sphingolipidoses are single-gene disorders without extremely complex regulatory mechanisms, gene therapy can be considered a potential therapeutic approach for these diseases. Gene therapy involves two general approaches, in vivo and ex vivo. Ex vivo gene therapy involves genetically modifying stem cells before transplantation. Genetic modification is required either to correct the gene of the mutant enzyme in patient-derived stem cells, to avoid a potential patient immune response, or to overexpress the enzyme in the transplanted stem cells. Lentiviral vectors were used to transfer the ASA gene into HSCs derived from three children affected by MLD. Treated children did not show any pathological manifestations, even after the predicted age of onset. In vivo gene therapy directly delivers the gene into a specific organ using a vector. The enzyme produced from the transferred gene can be secreted and taken up by other cells via the mannose 6-phosphate receptor. Because the enzyme cannot cross the BBB, this approach has mostly been studied in peripheral organs.
Whereas some studies using direct injections of adeno-associated virus vectors (AAV9) into the CNS were shown to be effective and safe, other studies proved otherwise. Direct injection of AAV2-human aSMase into the CNS of non-human primates demonstrated dose-dependent toxicity: a high dose of the viral vector induced significant motor deficits in the primates. Moreover, aSMase delivered by AAV2 lacked intercellular transport from transfected cells to other cells, which would limit its therapeutic benefit. Interestingly, the GBA gene was systemically delivered to neuronopathic murine GD models via intraperitoneal injection of AAV9 at postnatal day 5. Treated models showed improved GCase activity, increased lifespan, and improved neurological symptoms. In addition, multiple administration methods were used to transfer NPC1-gene-containing AAV9 to NPC1-deficient mice. Intracardiac injection at postnatal day 24 was found to extend the lifespan by 32%. Intracerebroventricular injection directly after birth also resulted in improved liver pathology and an increase in lifespan of 111%. Recently, gene editing using CRISPR/Cas9 has been employed to create models of sphingolipidoses and, in some cases, for therapeutic use. In β-gal-deficient iPSCs, the aberrant gene was edited by targeting GLB1 exons 2 and 6. Treated iPSCs showed increased β-gal activity and reduced GM1 ganglioside storage, demonstrating the predicted efficacy of gene therapy-based treatment in GM1 gangliosidosis. Moreover, the activity of α-GAL in fibroblasts of Fabry disease patients was restored by CRISPR/Cas9 therapy. Single guide RNAs were used to delete the GLA IVS4+919 G>A mutation, which disrupts normal RNA splicing and results in an enzyme with no catalytic activity. Upon editing, fibroblasts showed increased α-GAL activity and decreased Gb3 storage levels. CRISPR/Cas9 was also used to gene-correct fibroblast-derived iPSCs prepared from infantile Sandhoff patients.
To investigate the efficacy of this approach, cerebral organoids were formed from edited and non-edited iPSCs to mimic neurodevelopment in the first trimester. GM2 accumulation and increased cell size were detected only in the non-edited Sandhoff organoids. Finally, a recent study showed that human neural stem cells engineered using CRISPR/Cas9 can cross-correct fibroblasts of KD patients in vitro. Transplantation of such cells into oligodendrocyte-mutant, shiverer-immunodeficient mice resulted in neural stem cell differentiation with an overexpressed-GALC phenotype.

Substrate Reduction Therapy

In contrast to the earlier therapies, which focus on increasing enzymatic activity, SRT is based on reducing the influx of the accumulating substrate into the lysosome by reducing its biosynthetic rate. The first proof-of-principle genetic model of this approach was a mouse created by crossbreeding a Sandhoff diseased mouse with another having a defective GM2/GD2-synthase. The resulting offspring showed much longer lifespans, although the model suffered from accumulation of other oligosaccharides that resulted in late-onset neurological manifestations. Using small-molecule inhibitors of SL biosynthetic enzymes reduces substrate influx into lysosomes. An example is the use of N-butyldeoxynojirimycin (miglustat) as a modest inhibitor of GlcCer synthase, which produces GlcCer, a common precursor of the many GSLs accumulating in multiple sphingolipidoses. Its efficacy has been tested in Tay-Sachs mice. Moreover, it is currently being used as a drug treatment for patients with non-neuronopathic GD. Although it was initially thought to act as an effective inhibitor of the enzyme, reducing substrate influx into the lysosome, recent evidence showed that it may instead work as a chaperone for GCase [287, 288]. For chronic neuronopathic GD patients, combination therapy of intravenous ERT and oral miglustat was shown to prevent neurological symptoms.
Combination therapy using miglustat and NSAIDs to further reduce neuroinflammation was shown to increase the lifespan of miglustat-treated Sandhoff model mice. In addition to miglustat, another FDA-approved partial inhibitor of GlcCer synthase, eliglustat tartrate, was shown in its Phase 2 trial to decrease mean spleen and liver volumes and to increase platelet counts and hemoglobin concentrations in patients with GD type 1, with about 98% of adverse effects being mild or moderate. Moreover, Genz-682452, a novel GlcCer synthase inhibitor with CNS access, was previously shown to be a potential combinatorial treatment with ERT for Fabry disease. It was also investigated for treating the brain manifestations of GD type 3, and was shown to decrease the severity of gliosis and the storage of brain glycolipids by 20% in two neuronopathic GD type 3 mouse models. GlcCer synthase inhibitors can be used to treat GSL-based sphingolipidoses, but not diseases such as NPD-A, NPD-B, MLD, and KD. In NPC mice, however, miglustat helped to prolong lifespan and decrease GSL accumulation in the brain, and it was approved in Europe as a treatment for the neurological manifestations of juvenile and adult GD and NPC. Notably, hydroxypropyl-β-cyclodextrin is a substrate reduction drug for NPC currently in a phase 3 clinical trial, offering hope for a cure for the disease [294–296]. This drug also inhibited cerebellar Purkinje cell damage in NPC disease mouse models, thus alleviating disease symptoms. Although no inhibitor of CGT is available, inhibition of 3-ketosphinganine synthase by L-cycloserine was shown to increase lifespan and to decrease astrocyte gliosis and macrophage infiltration in KD mouse models. Taken together, these findings suggest that SRT has advantages over ERT: the small-molecule inhibitors used are orally administered, easier to produce than recombinant enzymes, able to cross the BBB to treat neurological pathology, and less costly.
Conclusions

A rapid search of articles on sphingolipidoses shows a remarkable, exponential rise in publications since the 1940s; currently, more than 15,000 articles address the topic. Significant progress has been made in understanding the molecular mechanisms governing the pathogenesis of sphingolipidoses. Therapeutically, myriad options are available to combat these debilitating diseases, and increasingly more patients are benefiting from them. Combinatorial therapeutic options are currently being used for better efficacy, improving symptoms and quality of life. The novel use of CRISPR/Cas9 in gene editing and gene therapy offers hope for future disease eradication. We believe we have presented a thorough picture of a subset of lysosomal storage diseases that involve aberrant SL metabolism, and of possible treatment avenues for these diseases. SL research is thriving, and the contributions of scientists worldwide are making enormous leaps in the understanding of both basic SL biochemistry and its applications in health and disease.

Acknowledgments and Funding

The authors thank Prof. Marco Colombini from the University of Maryland, College Park for his scientific input and advice. Figure 4 was drawn using a paid premium subscription on Biorender.com. This work was supported by two grants awarded to Dr. Johnny Stiban from the office of the dean of graduate studies (Grants # 240193 and 241104).
Abbreviations

α-GAL: α-Galactosidase A
β-gal: GM1-β-galactosidase
β-hexosaminidase: β-N-acetyl-hexosaminidase
AAV9: Adeno-associated virus vector
ASA: Arylsulfatase A
aSMase: Acid sphingomyelinase
BBB: Blood-brain barrier
BMP: Bis(monoacylglycero)phosphate
BMT: Bone marrow transplantation
CGT: Ceramide galactosyltransferase
CNS: Central nervous system
CT: Chaperone therapy
Cer: Ceramide
Chol: Cholesterol
ER: Endoplasmic reticulum
ERT: Enzyme replacement therapy
GCase: Glucosylceramide-β-glucosidase
GD: Gaucher disease
GM2A: GM2-activator protein
GSL: Glycosphingolipid
GalCer: Galactosylceramide
GALC: Galactosylceramide-β-galactosidase
Gb3: Globotriaosylceramide
GlcCer: Glucosylceramide
GlcSph: Glucosylsphingosine
HSCs: Hematopoietic stem cells
IL: Interleukin
iPSCs: Induced pluripotent stem cells
KD: Krabbe disease
LMP: Lysosomal membrane permeabilization
LSDs: Lysosomal storage diseases
LacCer: Lactosylceramide
Lyso-Gb3: Globotriaosylsphingosine
MEFs: Mouse embryonic fibroblasts
MLD: Metachromatic leukodystrophy
MOM: Mitochondrial outer membrane
MSCs: Mesenchymal stem cells
NAADP: Nicotinic acid adenine dinucleotide phosphate
NPC: Niemann-Pick disease type C
NPD-A, NPD-B: Niemann-Pick disease types A, B
NSAIDs: Non-steroidal anti-inflammatory drugs
NSCs: Neural stem cells
PKC: Protein kinase C
PMCA: Plasma membrane Ca2+-ATPase
PNS: Peripheral nervous system
Psychosine: Galactosylsphingosine
ROS: Reactive oxygen species
Sph: Sphingosine
S1P: Sphingosine 1-phosphate
SERCA: Sarco/endoplasmic reticulum Ca2+-ATPase
SL: Sphingolipid
SM: Sphingomyelin
SMase: Sphingomyelinase
Saps: Saposins
SRT: Substrate reduction therapy
TLR: Toll-like receptor
TNF-α: Tumor necrosis factor α
UPR: Unfolded protein response

Authors’ Contributions

M.A.R. wrote the manuscript and designed Fig. 4. Y.K. reviewed the manuscript and introduced the references. She crafted Fig. 3. L.S.K. edited the final manuscript. J.S. wrote parts of the manuscript and reviewed it. He rendered Figs. 1, 2, 4 and 5.
The authors read and approved the final manuscript.

Availability of Data and Materials

Not applicable.

Declarations

Ethics Approval and Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Competing Interests

The authors declare that they have no competing interests.

Footnotes

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
doi: 10.1073/pnas.93.23.13280. [DOI] [PMC free article] [PubMed] [Google Scholar] 145.Vanier M, Svennerholm L. Chemical pathology of Krabbe disease: the occurrence of psychosine and other neutral sphingoglycolipids. Adv Exp Med Biol. 1976;68:115–126. doi: 10.1007/978-1-4684-7735-1_8. [DOI] [PubMed] [Google Scholar] 146.Yamada H, Martin P, Suzuki K. Impairment of protein kinase C activity in twitcher Schwann cells in vitro. Brain Res. 1996;718:138–144. doi: 10.1016/0006-8993(96)00098-4. [DOI] [PubMed] [Google Scholar] 147.Giri S, Khan M, Rattan R, Singh I, Singh AK. Krabbe disease: psychosine-mediated activation of phospholipase A2 in oligodendrocyte cell death. J Lipid Res. 2006;47:1478–1492. doi: 10.1194/jlr.M600084-JLR200. [DOI] [PubMed] [Google Scholar] 148.Pang Y, Zheng B, Fan LW, Rhodes PG, Cai Z. IGF-1 protects oligodendrocyte progenitors against TNFalpha-induced damage by activation of PI3K/Akt and interruption of the mitochondrial apoptotic pathway. Glia. 2007;55:1099–1107. doi: 10.1002/glia.20530. [DOI] [PubMed] [Google Scholar] 149.Zaka M, Rafi MA, Rao HZ, Luzi P, Wenger DA. Insulin-like growth factor-1 provides protection against psychosine-induced apoptosis in cultured mouse oligodendrocyte progenitor cells using primarily the PI3K/Akt pathway. Mol Cell Neurosci. 2005;30:398–407. doi: 10.1016/j.mcn.2005.08.004. [DOI] [PubMed] [Google Scholar] 150.Cantuti Castelvetri L, Givogri MI, Hebert A, Smith B, Song Y, Kaminska A, Lopez-Rosas A, Morfini G, Pigino G, Sands M, Brady ST, Bongarzone ER. The sphingolipid psychosine inhibits fast axonal transport in Krabbe disease by activation of GSK3beta and deregulation of molecular motors. J Neurosci. 2013;33:10048–10056. doi: 10.1523/JNEUROSCI.0217-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar] 151.Kanazawa T, Nakamura S, Momoi M, Yamaji T, Takematsu H, Yano H, Sabe H, Yamamoto A, Kawasaki T, Kozutsumi Y. Inhibition of cytokinesis by a lipid metabolite, psychosine. J Cell Biol. 2000;149:943–950. 
doi: 10.1083/jcb.149.4.943. [DOI] [PMC free article] [PubMed] [Google Scholar] 152.Eckhardt M. Pathology and current treatment of neurodegenerative sphingolipidoses. Neuromolecular Med. 2010;12:362–382. doi: 10.1007/s12017-010-8133-7. [DOI] [PubMed] [Google Scholar] 153.Blomqvist M, Gieselmann V, Mansson JE. Accumulation of lysosulfatide in the brain of arylsulfatase A-deficient mice. Lipids Health Dis. 2011;10:28. doi: 10.1186/1476-511X-10-28. [DOI] [PMC free article] [PubMed] [Google Scholar] 154.Neuenhofer S, Conzelmann E, Schwarzmann G, Egge H, Sandhoff K. Occurrence of lysoganglioside lyso-GM2 (II3-Neu5Ac-gangliotriaosylsphingosine) in GM2 gangliosidosis brain. Biol Chem Hoppe Seyler. 1986;367:241–244. doi: 10.1515/bchm3.1986.367.1.241. [DOI] [PubMed] [Google Scholar] 155.Schueler UH, Kolter T, Kaneski CR, Blusztajn JK, Herkenham M, Sandhoff K, Brady RO. Toxicity of glucosylsphingosine (glucopsychosine) to cultured neuronal cells: a model system for assessing neuronal damage in Gaucher disease type 2 and 3. Neurobiol Dis. 2003;14:595–601. doi: 10.1016/j.nbd.2003.08.016. [DOI] [PubMed] [Google Scholar] 156.Hong YB, Kim EY, Jung SC. Upregulation of proinflammatory cytokines in the fetal brain of the Gaucher mouse. J Korean Med Sci. 2006;21:733–738. doi: 10.3346/jkms.2006.21.4.733. [DOI] [PMC free article] [PubMed] [Google Scholar] 157.Vitner EB, Platt FM, Futerman AH. Common and uncommon pathogenic cascades in lysosomal storage diseases. J Biol Chem. 2010;285:20423–20427. doi: 10.1074/jbc.R110.134452. [DOI] [PMC free article] [PubMed] [Google Scholar] 158.Lloyd-Evans E, Morgan AJ, He X, Smith DA, Elliot-Smith E, Sillence DJ, Churchill GC, Schuchman EH, Galione A, Platt FM. Niemann-Pick disease type C1 is a sphingosine storage disease that causes deregulation of lysosomal calcium. Nat Med. 2008;14:1247–1255. doi: 10.1038/nm.1876. [DOI] [PubMed] [Google Scholar] 159.Clayton DF, George JM. Synucleins in synaptic plasticity and neurodegenerative disorders. 
J Neurosci Res. 1999;58:120–129. [PubMed] [Google Scholar] 160.Navarro-Romero A, Montpeyo M, Martinez-Vicente M. The Emerging Role of the Lysosome in Parkinson's Disease. Cells. 2020;9. [DOI] [PMC free article] [PubMed] 161.Pchelina SN, Nuzhnyi EP, Emelyanov AK, Boukina TM, Usenko TS, Nikolaev MA, Salogub GN, Yakimovskii AF, Zakharova EY. Increased plasma oligomeric alpha-synuclein in patients with lysosomal storage diseases. Neurosci Lett. 2014;583:188–193. doi: 10.1016/j.neulet.2014.09.041. [DOI] [PubMed] [Google Scholar] 162.Plotegher N, Duchen MR. Mitochondrial Dysfunction and Neurodegeneration in Lysosomal Storage Disorders. Trends Mol Med. 2017;23:116–134. doi: 10.1016/j.molmed.2016.12.003. [DOI] [PubMed] [Google Scholar] 163.Angelova PR, Ludtmann MH, Horrocks MH, Negoda A, Cremades N, Klenerman D, Dobson CM, Wood NW, Pavlov EV, Gandhi S, Abramov AY. Ca2+ is a key factor in alpha-synuclein-induced neurotoxicity. J Cell Sci. 2016;129:1792–1801. doi: 10.1242/jcs.180737. [DOI] [PMC free article] [PubMed] [Google Scholar] 164.Song JX, Lu JH, Liu LF, Chen LL, Durairajan SS, Yue Z, Zhang HQ, Li M. HMGB1 is involved in autophagy inhibition caused by SNCA/alpha-synuclein overexpression: a process modulated by the natural autophagy inducer corynoxine B. Autophagy. 2014;10:144–154. doi: 10.4161/auto.26751. [DOI] [PMC free article] [PubMed] [Google Scholar] 165.Land WG. The Role of Damage-Associated Molecular Patterns (DAMPs) in Human Diseases: Part II: DAMPs as diagnostics, prognostics and therapeutics in clinical medicine. Sultan Qaboos Univ Med J. 2015;15:e157–e170. [PMC free article] [PubMed] [Google Scholar] 166.Barton GM. A calculated response: control of inflammation by the innate immune system. J Clin Invest. 2008;118:413–420. doi: 10.1172/JCI34431. [DOI] [PMC free article] [PubMed] [Google Scholar] 167.Rozenfeld P, Feriozzi S. Contribution of inflammatory pathways to Fabry disease pathogenesis. Mol Genet Metab. 2017;122:19–27. doi: 10.1016/j.ymgme.2017.09.004. 
[DOI] [PubMed] [Google Scholar] 168.Barak V, Acker M, Nisman B, Kalickman I, Abrahamov A, Zimran A, Yatziv S. Cytokines in Gaucher's disease. Eur Cytokine Netw. 1999;10:205–210. [PubMed] [Google Scholar] 169.Allen MJ, Myer BJ, Khokher AM, Rushton N, Cox TM. Pro-inflammatory cytokines and the pathogenesis of Gaucher's disease: increased release of interleukin-6 and interleukin-10. QJM. 1997;90:19–25. doi: 10.1093/qjmed/90.1.19. [DOI] [PubMed] [Google Scholar] 170.Machaczka M, Lerner R, Klimkowska M, Hagglund H. Treatment of multiple myeloma in patients with Gaucher disease. Am J Hematol. 2009;84:694–696. doi: 10.1002/ajh.21492. [DOI] [PubMed] [Google Scholar] 171.Boven LA, van Meurs M, Boot RG, Mehta A, Boon L, Aerts JM, Laman JD. Gaucher cells demonstrate a distinct macrophage phenotype and resemble alternatively activated macrophages. Am J Clin Pathol. 2004;122:359–369. doi: 10.1309/BG5V-A8JR-DQH1-M7HN. [DOI] [PubMed] [Google Scholar] 172.Hollak CE, Evers L, Aerts JM, van Oers MH. Elevated levels of M-CSF, sCD14 and IL8 in type 1 Gaucher disease. Blood Cells Mol Dis. 1997;23:201–212. doi: 10.1006/bcmd.1997.0137. [DOI] [PubMed] [Google Scholar] 173.Farfel-Becker T, Vitner EB, Pressey SN, Eilam R, Cooper JD, Futerman AH. Spatial and temporal correlation between neuron loss and neuroinflammation in a mouse model of neuronopathic Gaucher disease. Hum Mol Genet. 2011;20:1375–1386. doi: 10.1093/hmg/ddr019. [DOI] [PubMed] [Google Scholar] 174.Vitner EB, Farfel-Becker T, Eilam R, Biton I, Futerman AH. Contribution of brain inflammation to neuronal cell death in neuronopathic forms of Gaucher's disease. Brain. 2012;135:1724–1735. doi: 10.1093/brain/aws095. [DOI] [PubMed] [Google Scholar] 175.Tseng WL, Chou SJ, Chiang HC, Wang ML, Chien CS, Chen KH, Leu HB, Wang CY, Chang YL, Liu YY, Jong YJ, Lin SZ, Chiou SH, Lin SJ, Yu WC. 
Imbalanced Production of Reactive Oxygen Species and Mitochondrial Antioxidant SOD2 in Fabry Disease-Specific Human Induced Pluripotent Stem Cell-Differentiated Vascular Endothelial Cells. Cell Transplant. 2017;26:513–527. doi: 10.3727/096368916X694265. [DOI] [PMC free article] [PubMed] [Google Scholar] 176.Pereira CS, Azevedo O, Maia ML, Dias AF, Sa-Miranda C, Macedo MF. Invariant natural killer T cells are phenotypically and functionally altered in Fabry disease. Mol Genet Metab. 2013;108:241–248. doi: 10.1016/j.ymgme.2013.01.018. [DOI] [PubMed] [Google Scholar] 177.Alayoubi AM, Wang JC, Au BC, Carpentier S, Garcia V, Dworski S, El-Ghamrasni S, Kirouac KN, Exertier MJ, Xiong ZJ, Prive GG, Simonaro CM, Casas J, Fabrias G, Schuchman EH, Turner PV, Hakem R, Levade T, Medin JA. Systemic ceramide accumulation leads to severe and varied pathological consequences. EMBO Mol Med. 2013;5:827–842. doi: 10.1002/emmm.201202301. [DOI] [PMC free article] [PubMed] [Google Scholar] 178.Jeyakumar M, Smith DA, Williams IM, Borja MC, Neville DC, Butters TD, Dwek RA, Platt FM. NSAIDs increase survival in the Sandhoff disease mouse: synergy with N-butyldeoxynojirimycin. Ann Neurol. 2004;56:642–649. doi: 10.1002/ana.20242. [DOI] [PubMed] [Google Scholar] 179.Smith D, Wallom KL, Williams IM, Jeyakumar M, Platt FM. Beneficial effects of anti-inflammatory therapy in a mouse model of Niemann-Pick disease type C1. Neurobiol Dis. 2009;36:242–251. doi: 10.1016/j.nbd.2009.07.010. [DOI] [PubMed] [Google Scholar] 180.Korkotian E, Schwarz A, Pelled D, Schwarzmann G, Segal M, Futerman AH. Elevation of intracellular glucosylceramide levels results in an increase in endoplasmic reticulum density and in functional calcium stores in cultured neurons. J Biol Chem. 1999;274:21673–21678. doi: 10.1074/jbc.274.31.21673. [DOI] [PubMed] [Google Scholar] 181.Lloyd-Evans E, Pelled D, Riebeling C, Bodennec J, de-Morgan, A., Waller, H., Schiffmann, R., and Futerman, A. H. 
Glucosylceramide and glucosylsphingosine modulate calcium mobilization from brain microsomes via different mechanisms. J Biol Chem. 2003;278:23594–23599. doi: 10.1074/jbc.M300212200. [DOI] [PubMed] [Google Scholar] 182.Pelled D, Lloyd-Evans E, Riebeling C, Jeyakumar M, Platt FM, Futerman AH. Inhibition of calcium uptake via the sarco/endoplasmic reticulum Ca2+-ATPase in a mouse model of Sandhoff disease and prevention by treatment with N-butyldeoxynojirimycin. J Biol Chem. 2003;278:29496–29501. doi: 10.1074/jbc.M302964200. [DOI] [PubMed] [Google Scholar] 183.Ginzburg L, Futerman AH. Defective calcium homeostasis in the cerebellum in a mouse model of Niemann-Pick A disease. J Neurochem. 2005;95:1619–1628. doi: 10.1111/j.1471-4159.2005.03534.x. [DOI] [PubMed] [Google Scholar] 184.Ginzburg L, Li SC, Li YT, Futerman AH. An exposed carboxyl group on sialic acid is essential for gangliosides to inhibit calcium uptake via the sarco/endoplasmic reticulum Ca2+-ATPase: relevance to gangliosidoses. J Neurochem. 2008;104:140–146. doi: 10.1111/j.1471-4159.2007.04983.x. [DOI] [PubMed] [Google Scholar] 185.Sano R, Annunziata I, Patterson A, Moshiach S, Gomero E, Opferman J, Forte M, d'Azzo A. GM1-ganglioside accumulation at the mitochondria-associated ER membranes links ER stress to Ca (2+)-dependent mitochondrial apoptosis. Mol Cell. 2009;36:500–511. doi: 10.1016/j.molcel.2009.10.021. [DOI] [PMC free article] [PubMed] [Google Scholar] 186.Zhao Y, Fan X, Yang F, Zhang X. Gangliosides modulate the activity of the plasma membrane Ca (2+)-ATPase from porcine brain synaptosomes. Arch Biochem Biophys. 2004;427:204–212. doi: 10.1016/j.abb.2004.04.009. [DOI] [PubMed] [Google Scholar] 187.Pang Y, Zhu H, Wu P, Chen J. The characterization of plasma membrane Ca2+-ATPase in rich sphingomyelin-cholesterol domains. FEBS Lett. 2005;579:2397–2403. doi: 10.1016/j.febslet.2005.03.038. [DOI] [PubMed] [Google Scholar] 188.Voccoli V, Tonazzini I, Signore G, Caleo M, Cecchini M. 
Role of extracellular calcium and mitochondrial oxygen species in psychosine-induced oligodendrocyte cell death. Cell Death Dis. 2014;5:e1529. doi: 10.1038/cddis.2014.483. [DOI] [PMC free article] [PubMed] [Google Scholar] 189.Galione A, Morgan AJ, Arredouani A, Davis LC, Rietdorf K, Ruas M, Parrington J. NAADP as an intracellular messenger regulating lysosomal calcium-release channels. Biochem Soc Trans. 2010;38:1424–1431. doi: 10.1042/BST0381424. [DOI] [PubMed] [Google Scholar] 190.Li RJ, Xu J, Fu C, Zhang J, Zheng YG, Jia H, et al. Regulation of mTORC1 by lysosomal calcium and calmodulin. Elife. 2016;5. [DOI] [PMC free article] [PubMed] 191.Medina DL, Di Paola S, Peluso I, Armani A, De Stefani D, Venditti R, Montefusco S, Scotto-Rosato A, Prezioso C, Forrester A, Settembre C, Wang W, Gao Q, Xu H, Sandri M, Rizzuto R, De Matteis MA, Ballabio A. Lysosomal calcium signalling regulates autophagy through calcineurin and TFEB. Nat Cell Biol. 2015;17:288–299. doi: 10.1038/ncb3114. [DOI] [PMC free article] [PubMed] [Google Scholar] 192.Settembre C, Zoncu R, Medina DL, Vetrini F, Erdin S, Erdin S, Huynh T, Ferron M, Karsenty G, Vellard MC, Facchinetti V, Sabatini DM, Ballabio A. A lysosome-to-nucleus signalling mechanism senses and regulates the lysosome via mTOR and TFEB. EMBO J. 2012;31:1095–1108. doi: 10.1038/emboj.2012.32. [DOI] [PMC free article] [PubMed] [Google Scholar] 193.Darios F, Stevanin G. Impairment of Lysosome Function and Autophagy in Rare Neurodegenerative Diseases. J Mol Biol. 2020;432:2714–2734. doi: 10.1016/j.jmb.2020.02.033. [DOI] [PMC free article] [PubMed] [Google Scholar] 194.Seranova E, Connolly KJ, Zatyka M, Rosenstock TR, Barrett T, Tuxworth RI, Sarkar S. Dysregulation of autophagy as a common mechanism in lysosomal storage diseases. Essays Biochem. 2017;61:733–749. doi: 10.1042/EBC20170055. 
[DOI] [PMC free article] [PubMed] [Google Scholar] 195.Settembre C, Fraldi A, Jahreiss L, Spampanato C, Venturi C, Medina D, de Pablo R, Tacchetti C, Rubinsztein DC, Ballabio A. A block of autophagy in lysosomal storage disorders. Hum Mol Genet. 2008;17:119–129. doi: 10.1093/hmg/ddm289. [DOI] [PubMed] [Google Scholar] 196.Takamura A, Higaki K, Kajimaki K, Otsuka S, Ninomiya H, Matsuda J, Ohno K, Suzuki Y, Nanba E. Enhanced autophagy and mitochondrial aberrations in murine G(M1)-gangliosidosis. Biochem Biophys Res Commun. 2008;367:616–622. doi: 10.1016/j.bbrc.2007.12.187. [DOI] [PubMed] [Google Scholar] 197.Wei Y, Pattingre S, Sinha S, Bassik M, Levine B. JNK1-mediated phosphorylation of Bcl-2 regulates starvation-induced autophagy. Mol Cell. 2008;30:678–688. doi: 10.1016/j.molcel.2008.06.001. [DOI] [PMC free article] [PubMed] [Google Scholar] 198.Lieberman AP, Puertollano R, Raben N, Slaugenhaupt S, Walkley SU, Ballabio A. Autophagy in lysosomal storage disorders. Autophagy. 2012;8:719–730. doi: 10.4161/auto.19469. [DOI] [PMC free article] [PubMed] [Google Scholar] 199.Lamark T, Kirkin V, Dikic I, Johansen T. NBR1 and p62 as cargo receptors for selective autophagy of ubiquitinated targets. Cell Cycle. 2009;8:1986–1990. doi: 10.4161/cc.8.13.8892. [DOI] [PubMed] [Google Scholar] 200.Xu YH, Xu K, Sun Y, Liou B, Quinn B, Li RH, Xue L, Zhang W, Setchell KD, Witte D, Grabowski GA. Multiple pathogenic proteins implicated in neuronopathic Gaucher disease mice. Hum Mol Genet. 2014;23:3943–3957. doi: 10.1093/hmg/ddu105. [DOI] [PMC free article] [PubMed] [Google Scholar] 201.Farfel-Becker T, Vitner EB, Kelly SL, Bame JR, Duan J, Shinder V, Merrill AH, Jr, Dobrenis K, Futerman AH. Neuronal accumulation of glucosylceramide in a mouse model of neuronopathic Gaucher disease leads to neurodegeneration. Hum Mol Genet. 2014;23:843–854. doi: 10.1093/hmg/ddt468. 
[DOI] [PMC free article] [PubMed] [Google Scholar] 202.Kinghorn KJ, Grönke S, Castillo-Quan JI, Woodling NS, Li L, Sirka E, Gegg M, Mills K, Hardy J, Bjedov I, Partridge L. A Drosophila Model of Neuronopathic Gaucher Disease Demonstrates Lysosomal-Autophagic Defects and Altered mTOR Signalling and Is Functionally Rescued by Rapamycin. J Neurosci. 2016;36:11654–11670. doi: 10.1523/JNEUROSCI.4527-15.2016. [DOI] [PMC free article] [PubMed] [Google Scholar] 203.Serrano-Puebla A, Boya P. Lysosomal membrane permeabilization in cell death: new evidence and implications for health and disease. Ann N Y Acad Sci. 2016;1371:30–44. doi: 10.1111/nyas.12966. [DOI] [PubMed] [Google Scholar] 204.Terman A, Kurz T, Gustafsson B, Brunk UT. Lysosomal labilization. IUBMB Life. 2006;58:531–539. doi: 10.1080/15216540600904885. [DOI] [PubMed] [Google Scholar] 205.Sahara S, Yamashima T. Calpain-mediated Hsp70.1 cleavage in hippocampal CA1 neuronal death. Biochem Biophys Res Commun. 2010;393:806–811. doi: 10.1016/j.bbrc.2010.02.087. [DOI] [PubMed] [Google Scholar] 206.Gabande-Rodriguez E, Boya P, Labrador V, Dotti CG, Ledesma MD. High sphingomyelin levels induce lysosomal damage and autophagy dysfunction in Niemann Pick disease type A. Cell Death Differ. 2014;21:864–875. doi: 10.1038/cdd.2014.4. [DOI] [PMC free article] [PubMed] [Google Scholar] 207.Yamane M, Moriya S, Kokuba H. Visualization of ceramide channels in lysosomes following endogenous palmitoyl-ceramide accumulation as an initial step in the induction of necrosis. Biochem Biophys Rep. 2017;11:174–181. doi: 10.1016/j.bbrep.2017.02.010. [DOI] [PMC free article] [PubMed] [Google Scholar] 208.Samanta S, Stiban J, Maugel TK, Colombini M. Visualization of ceramide channels by transmission electron microscopy. Biochim Biophys Acta. 2011;1808:1196–1201. doi: 10.1016/j.bbamem.2011.01.007. [DOI] [PMC free article] [PubMed] [Google Scholar] 209.Stiban J, Fistere D, Colombini M. 
Dihydroceramide hinders ceramide channel formation: Implications on apoptosis. Apoptosis. 2006;11:773–780. doi: 10.1007/s10495-006-5882-8. [DOI] [PubMed] [Google Scholar] 210.Doerflinger M, Glab JA, Puthalakath H. BH3-only proteins: a 20-year stock-take. FEBS J. 2015;282:1006–1016. doi: 10.1111/febs.13190. [DOI] [PubMed] [Google Scholar] 211.Eno CO, Zhao G, Venkatanarayan A, Wang B, Flores ER, Li C. Noxa couples lysosomal membrane permeabilization and apoptosis during oxidative stress. Free Radic Biol Med. 2013;65:26–37. doi: 10.1016/j.freeradbiomed.2013.05.051. [DOI] [PMC free article] [PubMed] [Google Scholar] 212.Holler N, Zaru R, Micheau O, Thome M, Attinger A, Valitutti S, Bodmer JL, Schneider P, Seed B, Tschopp J. Fas triggers an alternative, caspase-8-independent cell death pathway using the kinase RIP as effector molecule. Nat Immunol. 2000;1:489–495. doi: 10.1038/82732. [DOI] [PubMed] [Google Scholar] 213.Vitner EB, Dekel H, Zigdon H, Shachar T, Farfel-Becker T, Eilam R, Karlsson S, Futerman AH. Altered expression and distribution of cathepsins in neuronopathic forms of Gaucher disease and in other sphingolipidoses. Hum Mol Genet. 2010;19:3583–3590. doi: 10.1093/hmg/ddq273. [DOI] [PubMed] [Google Scholar] 214.Vitner EB, Salomon R, Farfel-Becker T, Meshcheriakova A, Ali M, Klein AD, Platt FM, Cox TM, Futerman AH. RIPK3 as a potential therapeutic target for Gaucher's disease. Nat Med. 2014;20:204–208. doi: 10.1038/nm.3449. [DOI] [PubMed] [Google Scholar] 215.Irahara-Miyana K, Otomo T, Kondo H, Hossain MA, Ozono K, Sakai N. Unfolded protein response is activated in Krabbe disease in a manner dependent on the mutation type. J Hum Genet. 2018;63:699–706. doi: 10.1038/s10038-018-0445-8. [DOI] [PubMed] [Google Scholar] 216.Tessitore A, del Martin MP, Sano R, Ma Y, Mann L, Ingrassia A, Laywell ED, Steindler DA, Hendershot LM, d’Azzo A. GM1-ganglioside-mediated activation of the unfolded protein response causes neuronal death in a neurodegenerative gangliosidosis. 
Mol Cell. 2004;15:753–766. doi: 10.1016/j.molcel.2004.08.029. [DOI] [PubMed] [Google Scholar] 217.Wei H, Kim SJ, Zhang Z, Tsai PC, Wisniewski KE, Mukherjee AB. ER and oxidative stresses are common mediators of apoptosis in both neurodegenerative and non-neurodegenerative lysosomal storage disorders and are alleviated by chemical chaperones. Hum Mol Genet. 2008;17:469–477. doi: 10.1093/hmg/ddm324. [DOI] [PubMed] [Google Scholar] 218.Chen CS, Patterson MC, Wheatley CL, O'Brien JF, Pagano RE. Broad screening test for sphingolipid-storage diseases. Lancet. 1999;354:901–905. doi: 10.1016/S0140-6736(98)10034-X. [DOI] [PubMed] [Google Scholar] 219.Sillence DJ, Puri V, Marks DL, Butters TD, Dwek RA, Pagano RE, Platt FM. Glucosylceramide modulates membrane traffic along the endocytic pathway. J Lipid Res. 2002;43:1837–1845. doi: 10.1194/jlr.m200232-jlr200. [DOI] [PubMed] [Google Scholar] 220.Klein D, Schmandt T, Muth-Kohne E, Perez-Bouza A, Segschneider M, Gieselmann V, Brustle O. Embryonic stem cell-based reduction of central nervous system sulfatide storage in an animal model of metachromatic leukodystrophy. Gene Ther. 2006;13:1686–1695. doi: 10.1038/sj.gt.3302834. [DOI] [PubMed] [Google Scholar] 221.Kobayashi T, Beuchat MH, Lindsay M, Frias S, Palmiter RD, Sakuraba H, Parton RG, Gruenberg J. Late endosomal membranes rich in lysobisphosphatidic acid regulate cholesterol transport. Nat Cell Biol. 1999;1:113–118. doi: 10.1038/10084. [DOI] [PubMed] [Google Scholar] 222.Rappaport J, Manthe RL, Solomon M, Garnacho C, Muro S. A Comparative Study on the Alterations of Endocytic Pathways in Multiple Lysosomal Storage Disorders. Mol Pharm. 2016;13:357–368. doi: 10.1021/acs.molpharmaceut.5b00542. [DOI] [PMC free article] [PubMed] [Google Scholar] 223.Fraldi A, Annunziata F, Lombardi A, Kaiser HJ, Medina DL, Spampanato C, Fedele AO, Polishchuk R, Sorrentino NC, Simons K, Ballabio A. 
Lysosomal fusion and SNARE function are impaired by cholesterol accumulation in lysosomal storage disorders. EMBO J. 2010;29:3607–3620. doi: 10.1038/emboj.2010.237. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted] 224.Almeida A, Almeida J, Bolanos JP, Moncada S. Different responses of astrocytes and neurons to nitric oxide: the role of glycolytically generated ATP in astrocyte protection. Proc Natl Acad Sci U S A. 2001;98:15294–15299. doi: 10.1073/pnas.261560998. [DOI] [PMC free article] [PubMed] [Google Scholar] 225.Osellame LD, Rahim AA, Hargreaves IP, Gegg ME, Richard-Londt A, Brandner S, Waddington SN, Schapira AH, Duchen MR. Mitochondria and quality control defects in a mouse model of Gaucher disease--links to Parkinson's disease. Cell Metab. 2013;17:941–953. doi: 10.1016/j.cmet.2013.04.014. [DOI] [PMC free article] [PubMed] [Google Scholar] 226.Vilaca R, Silva E, Nadais A, Teixeira V, Matmati N, Gaifem J, Hannun YA, Sa Miranda MC, Costa V. Sphingolipid signalling mediates mitochondrial dysfunctions and reduced chronological lifespan in the yeast model of Niemann-Pick type C1. Mol Microbiol. 2014;91:438–451. doi: 10.1111/mmi.12470. [DOI] [PMC free article] [PubMed] [Google Scholar] 227.MacAskill AF, Atkin TA, Kittler JT. Mitochondrial trafficking and the provision of energy and calcium buffering at excitatory synapses. Eur J Neurosci. 2010;32:231–240. doi: 10.1111/j.1460-9568.2010.07345.x. [DOI] [PubMed] [Google Scholar] 228.Narendra D, Tanaka A, Suen DF, Youle RJ. Parkin is recruited selectively to impaired mitochondria and promotes their autophagy. J Cell Biol. 2008;183:795–803. doi: 10.1083/jcb.200809125. [DOI] [PMC free article] [PubMed] [Google Scholar] 229.Barazzuol L, Giamogante F, Brini M, Cali T. PINK1/Parkin Mediated Mitophagy, Ca (2+) Signalling, and ER-Mitochondria Contacts in Parkinson's Disease. Int J Mol Sci. 2020;21. 
[DOI] [PMC free article] [PubMed] 230.Canonico B, Cesarini E, Salucci S, Luchetti F, Falcieri E, Di Sario G, Palma F, Papa S. Defective Autophagy, Mitochondrial Clearance and Lipophagy in Niemann-Pick Type B Lymphocytes. PLoS One. 2016;11:e0165780. doi: 10.1371/journal.pone.0165780. [DOI] [PMC free article] [PubMed] [Google Scholar] 231.Deganuto M, Pittis MG, Pines A, Dominissini S, Kelley MR, Garcia R, Quadrifoglio F, Bembi B, Tell G. Altered intracellular redox status in Gaucher disease fibroblasts and impairment of adaptive response against oxidative stress. J Cell Physiol. 2007;212:223–235. doi: 10.1002/jcp.21023. [DOI] [PubMed] [Google Scholar] 232.Jeyakumar M, Thomas R, Elliot-Smith E, Smith DA, van der Spoel AC, d'Azzo A, Perry VH, Butters TD, Dwek RA, Platt FM. Central nervous system inflammation is a hallmark of pathogenesis in mouse models of GM1 and GM2 gangliosidosis. Brain. 2003;126:974–987. doi: 10.1093/brain/awg089. [DOI] [PubMed] [Google Scholar] 233.Zampieri S, Mellon SH, Butters TD, Nevyjel M, Covey DF, Bembi B, Dardis A. Oxidative stress in NPC1 deficient cells: protective effect of allopregnanolone. J Cell Mol Med. 2009;13:3786–3796. doi: 10.1111/j.1582-4934.2008.00493.x. [DOI] [PMC free article] [PubMed] [Google Scholar] 234.Suzuki K, Yamaguchi A, Yamanaka S, Kanzaki S, Kawashima M, Togo T, Katsuse O, Koumitsu N, Aoki N, Iseki E, Kosaka K, Yamaguchi K, Hashimoto M, Aoki I, Hirayasu Y. Accumulated alpha-synuclein affects the progression of GM2 gangliosidoses. Exp Neurol. 2016;284:38–49. doi: 10.1016/j.expneurol.2016.07.011. [DOI] [PubMed] [Google Scholar] 235.Stiban J, Perera M. Very long chain ceramides interfere with C16-ceramide-induced channel formation: A plausible mechanism for regulating the initiation of intrinsic apoptosis. Biochim Biophys Acta. 2015;1848:561–567. doi: 10.1016/j.bbamem.2014.11.018. [DOI] [PubMed] [Google Scholar] 236.Colombini M. Ceramide Channels. Adv Exp Med Biol. 2019;1159:33–48. doi: 10.1007/978-3-030-21162-2_3. 
Data Availability Statement: Not applicable. Articles from Lipids in Health and Disease are provided courtesy of BMC.
190663
https://artofproblemsolving.com/wiki/index.php/Circumradius?srsltid=AfmBOorh5xCdn9BFvjZBKkuiDK8aA1NZdUjq38nkjEKWIk8wNzJ-QA5o
Circumradius - AoPS Wiki

The circumradius of a cyclic polygon is the radius of the circumscribed circle of that polygon. For a triangle, it is the measure of the radius of the circle that circumscribes the triangle. Since every triangle is cyclic, every triangle has a circumscribed circle, or a circumcircle.

Contents
1 Formula for a Triangle
2 Proof
3 Formula for Circumradius
4 Circumradius, bisector and altitude
5 Euler's Theorem for a Triangle
6 Proof
7 Right triangles
7.1 Theorem
8 Equilateral triangles
9 If all three sides are known
10 If you know just one side and its opposite angle
11 See also

Formula for a Triangle

Let a, b and c denote the triangle's three sides and let K denote the area of the triangle. Then the measure of the circumradius of the triangle is simply

R = abc/(4K).

This can be rewritten as

R = abc / sqrt((a + b + c)(-a + b + c)(a - b + c)(a + b - c)).

Proof

Let AE be the diameter of the circumcircle through A and let AD be the altitude from A. We let AB = c, BC = a, CA = b, AD = h, and AE = 2R. We know that angle ACE is a right angle because AE is the diameter. Also, angle ABD = angle AEC because they both subtend arc AC. Therefore, triangle ABD ~ triangle AEC by AA similarity, so we have

AB/AE = AD/AC, or c/(2R) = h/b.

However, remember that K = (1/2)ah, so h = 2K/a. Substituting this in gives us

c/(2R) = (2K/a)/b = 2K/(ab)

and then simplifying to get

R = abc/(4K),

and we are done.

Formula for Circumradius

R = abc/(4rs),

where R is the circumradius, r is the inradius, a, b, and c are the respective sides of the triangle, and s = (a + b + c)/2 is the semiperimeter. Note that this is similar to the previously mentioned formula; the reason being that K = rs. But, if you don't know the inradius, you can find the area of the triangle by Heron's Formula:

K = sqrt(s(s - a)(s - b)(s - c)).

Circumradius, bisector and altitude

The circumradius (the segment from a vertex to the circumcenter) and the altitude from the same vertex are isogonal with respect to the angle bisector at that vertex of the triangle.

Euler's Theorem for a Triangle

Let triangle ABC have circumcenter O and incenter I. Then

OI^2 = R(R - 2r).

Proof

See Euler's theorem.

Right triangles

The hypotenuse of the triangle is the diameter of its circumcircle, and the circumcenter is its midpoint, so the circumradius is equal to half of the hypotenuse of the right triangle. This results in a well-known theorem:

Theorem

The midpoint of the hypotenuse is equidistant from the vertices of the right triangle. The midpoint of the hypotenuse is the circumcenter of a right triangle.

Equilateral triangles

R = s/sqrt(3),

where s is the length of a side of the triangle.

If all three sides are known

R = abc / sqrt((a + b + c)(-a + b + c)(a - b + c)(a + b - c)),

which follows from Heron's Formula and R = abc/(4K).

If you know just one side and its opposite angle

R = a/(2 sin A)

by the Law of Sines. (Extended Law of Sines)

See also

Inradius
Semiperimeter

Category: Geometry
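The triangle formulas above are easy to sanity-check numerically. Below is a minimal Python sketch (the function names are ours, purely illustrative): it computes R from the three sides via Heron's formula and R = abc/(4K), and from one side and its opposite angle via the extended Law of Sines, and both agree with the half-hypotenuse rule on a 3-4-5 right triangle.

```python
import math

def circumradius_from_sides(a, b, c):
    # R = abc / sqrt((a+b+c)(-a+b+c)(a-b+c)(a+b-c)), equivalent to R = abc/(4K)
    # with K given by Heron's formula.
    return a * b * c / math.sqrt(
        (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    )

def circumradius_law_of_sines(a, A):
    # Extended Law of Sines: a / sin A = 2R.
    return a / (2 * math.sin(A))

# 3-4-5 right triangle: the circumradius is half the hypotenuse.
print(circumradius_from_sides(3, 4, 5))               # 2.5
# Side a = 3 is opposite the angle A with sin A = 3/5.
print(circumradius_law_of_sines(3, math.asin(3 / 5)))
# Equilateral triangle with side 1: R = 1/sqrt(3).
print(circumradius_from_sides(1, 1, 1))
```

For the 3-4-5 triangle both functions return 2.5, half the hypotenuse, exactly as the right-triangle section predicts.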
190664
https://www.frontiersin.org/journals/marine-science/articles/10.3389/fmars.2018.00481/full
Frontiers | Amnesic Shellfish Poisoning (ASP) and Paralytic Shellfish Poisoning (PSP) in Nigerian Coast, Gulf of Guinea

ORIGINAL RESEARCH article
Front. Mar. Sci., 21 December 2018, Sec. Marine Pollution, Volume 5 - 2018

Medina Omo Kadiri, Solomon Isagba
Department of Plant Biology and Biotechnology, Faculty of Life Sciences, University of Benin, Benin City, Nigeria

This investigation is aimed at providing a baseline survey of the current status of the occurrence and spatio-temporal distribution of amnesic shellfish poisoning (ASP) and paralytic shellfish poisoning (PSP) in the Nigerian coast, Gulf of Guinea.
The study applied the Jellett Rapid Test technique to algal samples collected from 8 states of the South-south (SS) and South-west (SW) zones of coastal Nigeria, spanning the Bight of Bonny to the Bight of Benin in the Gulf of Guinea, during the rainy and dry seasons, to screen for the presence of the human syndromes of ASP and PSP, produced by domoic acid and saxitoxin, respectively. Classified as low, medium, high and highest, various levels of these syndromes were detected across the length of the Nigerian coast. Comparatively, the SW region had more syndromes (PSP and ASP) (64%) than the SS region (36%) of the Nigerian coast. The prevalence of PSP (68%) was greater than that of ASP (31%) in both zones, with the rainy season also recording higher ASP and PSP for the SW (27%) than for the SS (12%) zone. Seasonal consideration revealed that more syndromes (ASP and PSP) were recorded in the rainy season than in the dry season. With the confirmed presence and spatial and temporal distribution of ASP and PSP in the coastal waters of Nigeria, the need for regular monitoring of algal syndromes and toxin screening is advocated.

Introduction

A small percentage of algae produce toxins. These toxins cause harm to humans, culminating in what are known as human health syndromes. Among the toxins synthesized are domoic acid (DA) and saxitoxin (Sxt), responsible for human events that form groups of undesirable signs, patterns and symptoms indicative of a specific disease or disorder, termed syndromes. Domoic acid and saxitoxin induce the algal syndromes known, respectively, as amnesic shellfish poisoning (ASP) and paralytic shellfish poisoning (PSP) (Zaccaroni and Scaravelli, 2008). Of all algal syndromes, ASP and PSP occur world-wide (Gerssen et al., 2010).
Amnesic shellfish poisoning is symptomatic of a pathological loss of short-term memory, dizziness, loss of balance, headache, disorientation, nausea, and vomiting (Lucas et al., 2005), while a vibrating sensation of the perioral area, respiratory distress, frequent and excessive bowel movements, nausea, a slow and continuing progression to grave paralysis, tingling of the fingertips, headache, dizziness, fever, ataxia, temporary blindness and eventual death via respiratory paralysis are signs of PSP (Van Dolah, 2000; Kadiri, 2011; Trainer and Hardy, 2015; Ajani et al., 2017). Domoic acid, which is responsible for ASP, is classified as a tricarboxylic amino acid on the basis of chemical structure, while saxitoxin, which causes PSP, is a purine-derived heterocyclic guanidine compound (Zaccaroni and Scaravelli, 2008). They are both hydrophilic neurotoxic compounds affecting the exchange of information between the brain and the tendons (Arapov, 2013). Some effects of these toxins range from loss of short-term memory to gastrointestinal disorders, diarrhea, vomiting, headache, abdominal cramps, loss of balance, nausea, dizziness, incomprehension and paralysis (Lucas et al., 2005; Ajani et al., 2017). In the inland and coastal Nigerian waters, just like other coastal areas in the world, variation in climate, pollution and eutrophication resulting from environmental changes can cause an upsurge in toxin amounts by indirectly initiating massive algal blooms (Ajani et al., 2017), though Trainer and Hardy (2015) opined that they can also occur in pristine areas with little or no influence of nutrient input from anthropogenic activities. Shellfish transplantation and transportation of ballast water at the seaports can facilitate the in-flow of harmful/toxic exotic species in and out of a country (Hallegraeff, 1998; Zhang and Dickman, 1999). The filter-feeding shellfish vectors of ASP, Mytilus edulis (blue mussel) and M. galloprovincialis (black mussel) (Ajani et al., 2017), the PSP vector Saxidomus giganteus (clam) (Lucas et al., 2005), zooplankton and herbivorous fishes ingest these algae and act as vectors to humans, either directly (e.g., shellfish) or through further food-web transfer of sequestered toxin to higher trophic levels. Consumption of seafood contaminated with algal toxins results in seafood poisoning syndromes. Other associated human syndromes are diarrhetic shellfish poisoning (DSP), caused by a group of toxins represented by okadaic acid; neurotoxic shellfish poisoning (NSP), caused by brevetoxin; ciguatera fish poisoning (CFP), caused by ciguatoxin; azaspiracid shellfish poisoning (AZP); and clupeotoxin fish poisoning (CLP) (Wang, 2008; Ajani et al., 2017). Though the consequential effects of algal toxins in coastal waters and ecosystems are of global concern, nothing is known about ASP and PSP status and their monitoring on the coast of Nigeria. Previous studies in Nigeria focus specifically on algal taxonomy in particular water bodies within the Nigerian coastline (Nwankwo, 1991; Ajuzie and Houvenaghel, 2009; Kadiri, 2011). This work represents the first comprehensive and extensive study covering the Nigerian coastline, as well as the pioneer study of algal syndromes in Nigeria. The aim of the study is to investigate the occurrence and spatio-temporal distribution of ASP and PSP in the coastal areas (South-south and South-west) of Nigeria. The coastal communities eat a lot of shellfish, and these shellfish are filter feeders, feeding on toxic phytoplankton algae when present. The study therefore examines the presence of the syndromes consequent upon feeding on the prevalent toxic phytoplankton algae.

Materials and Methods

Study Area

The study was carried out in the Atlantic Ocean in the Gulf of Guinea, from the Bight of Bonny in the east to the Bight of Benin in the west.
The study area covers 20 stations selected across 10 locations to cover the entire coast of Nigeria (Cross River, Akwa Ibom, Rivers, Bayelsa, Delta, Ondo, Ogun and Lagos states, with Lagos represented by Lekki, Bar Beach and Badagry; Figure 1) (Kadiri and Isagba, 2016), lying between longitudes 3°24′′ and 8°19′′ E and latitudes 4°58′′ and 6°24′′ N along the Nigerian coastline (Kadiri, 2002; Ajuzie and Houvenaghel, 2009). The Cross River, Akwa Ibom, Rivers, Bayelsa and Delta stations are located in the South-south of Nigeria, while the Ondo, Ogun and Lagos (Lekki, Bar Beach and Badagry) stations are located in the South-west of Nigeria. Climatically, there are two main seasons in the area, namely the rainy (wet) season spanning May to October and the dry season from November to April. The coastal area is humid, with a mean temperature of 24–32°C and an average annual rainfall ranging between 1,500 and 4,000 mm (Kuruk, 2004).

Figure 1. Coastal map of Nigeria showing study area.

Sample Collection

From March 2014 to February 2015, at 3-month intervals (March, June and October 2014 and February 2015), phytoplankton algal samples were collected from the surface with horizontal tows of a 10 μm mesh plankton net tied to a moving boat for about 10 min, and the contents were transferred to clean sample containers. At each location, one sample was collected each from the ocean and the adjoining water body.

Algal Syndrome Screening

The Jellett ASP/PSP rapid test, an in vitro qualitative lateral-flow diagnostic screen for the presence or absence of ASP/PSP toxins in phytoplankton, was applied following the manufacturer's (Jellett Rapid Testing Ltd., 4654 Route #3, Chester Basin, Nova Scotia, Canada B0J 1K0) instruction manual (Batch 40000-18 Feb 14-512). Phytoplankton concentrates were obtained by filtering 10 L of sea water through a 10 μm plankton net.
In a clean vial, 0.5 ml of 0.1 M acetic acid was added to 0.5 ml of the concentrated phytoplankton cells; the vial was tightly capped and shaken 6 to 8 times. Then 0.4 ml of buffer was placed into a vial, and 0.1 ml of the phytoplankton cell preparation was added and mixed. The mixture was placed into the sample well and the result read between 35 min and 1 h. To check for differences between seasons and locations, the data were subjected to statistical analysis using a chi-square test in SPSS.

Results

Results from this study showed the presence of the human syndromes ASP and PSP. Syndrome detection was grouped and rated as low (25%) where only one syndrome was detected. Where two different syndromes in the same season, or the same syndrome type in different seasons, were detected, the rating was medium (50%). A high (75%) rating was ascribed to locations where three syndromes (including two of the same syndrome at different seasons) were observed. The highest (100%) rating corresponds to locations where four syndromes (including two of the same syndrome at different seasons) were found. Spatially, every location across the coast, from the South-south (SS) to the South-west (SW) of Nigeria, recorded at least one detectable presence of a particular syndrome (Figure 2). Low (25%) occurrence of syndromes was detected at 4 of the 10 locations, 3 of which were in the SS (Akwa Ibom, Bayelsa, and Delta) and 1 in the SW (Ogun).

Figure 2. Occurrence of algal syndromes (ASP and PSP) in different locations of Nigerian coast.

Seasonal consideration revealed that Akwa Ibom and Bayelsa had PSP in the wet season, while Delta and Ogun had PSP in the dry season. Similarly, 4 of the 10 locations had medium (50%) detectable syndromes, with records of 2 syndromes (including 2 of the same syndrome at different seasons) each. Two of these locations were in the SS (Cross Rivers, Rivers) and 2 in the SW (Ondo and Lekki).
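The rating scheme just described is a simple mapping from the number of detections at a location (ASP and PSP, each screened in the wet and dry seasons, so 0–4 detections) to a percentage. A minimal sketch of that mapping follows; the helper name and encoding are illustrative, not the authors':

```python
def detection_rating(n_detections):
    """Map a location's number of syndrome detections (0-4: ASP and
    PSP, each in the wet and the dry season) to the paper's rating."""
    scale = {0: ("none", 0), 1: ("low", 25), 2: ("medium", 50),
             3: ("high", 75), 4: ("highest", 100)}
    return scale[n_detections]

# e.g., Badagry (PSP wet, ASP wet, PSP dry) has three detections:
label, percent = detection_rating(3)  # -> ("high", 75)
```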
Cross Rivers had both PSP and ASP detected in the dry season. Rivers had PSP detected in both the dry and wet seasons. Ondo had PSP in the dry season and ASP in the wet season. Lekki had both syndromes (PSP and ASP) in the wet season. A high (75%) syndrome detection of 3 syndromes (including 2 of the same syndrome at different seasons) was observed at the SW Badagry location. Here, PSP and ASP were detected in the wet season and PSP reoccurred in the dry season. Also in the SW, the highest (100%) detection of 4 syndromes (both syndromes in both seasons) was observed only at the Bar Beach location. Figure 3 shows the seasonal percentage syndrome distribution across the zones, reflecting the frequency of ASP and PSP prevalence in both the SS and SW zones. The SW region had a higher preponderance (64%) of total (ASP + PSP) syndromes detected in both seasons, compared with a total of 36% for the SS region. Within the SW, 37% (of the 64% total) was recorded in the rainy (wet) season and the other 27% in the dry season. Conversely, lower values were recorded in the SS: 12% (of the 36% total) in the rainy season and 24% in the dry season. Figure 3 also reveals that in the rainy season ASP was 100% in the SW and 0% in the SS, with PSP split 50% each between SW and SS. In the dry season, ASP was 50% each for SW and SS, with PSP slightly higher in the SW (53%) than the SS (47%). Figure 4 shows the profile of ASP and PSP distribution between the SS and SW regions. PSP had a higher profile than ASP in both regions, and the PSP values were consistently higher in the SW than the SS. Overall, PSP had 68% incidence (37% in the SW and 31% in the SS), while ASP had 31% (26% in the SW and 5% in the SS). In summary, the SW region recorded more ASP and PSP syndromes than the SS region; PSP generally had higher occurrence than ASP in both zones; and the rainy season had higher ASP and PSP in the SW than the SS.

Figure 3.
Seasonal distribution of algal syndromes in Nigerian coast.

Figure 4. Zonal (SS and SW) distribution of ASP and PSP in both dry and wet seasons.

Statistically, for PSP, no significant relationship was found between season and sampling location, χ²(1, N = 200) = 2.020, p = 0.155, whereas for ASP a significant relationship was found between season and sampling location, χ²(1, N = 200) = 66.667, p < 0.001.

Discussion

From the SS to the SW, all sampled locations/zones had records of at least one syndrome of ASP or PSP (Figures 2, 3). The SW locations recorded more ASP and PSP syndromes than the SS locations in this study, with the Bar beach and Badagry locations showing the highest occurrences. This is likely connected with the higher prevalence of toxin-producing species generally observed in the SW, in contrast to the SS locations (Kadiri et al., 2016). An earlier study by Zendong et al. (2016) recorded substantial quantities of specific marine algal biotoxins at SW Lekki and Bar beach relative to SS Rivers and Akwa Ibom. SW coastal Nigeria is reckoned to have more elevated salinity than the SS (Zendong et al., 2016), with a greater propensity for toxic dinoflagellates to survive (Delmas et al., 1992; Zendong et al., 2016), whereas the salinity of the SS is diluted by the Niger delta inflows. The high salinity of Bar beach and Lekki was also stressed by Ajuzie and Houvenaghel (2009). The result obtained in this study is in consonance with Lucas et al. (2005), who reported ASP and PSP profiles with clear variations between regions, with the toxin content in sample material obtained north of Aberdeen, on the Scottish east coast, being lower than that in the remaining area under their investigation. The toxin profile obtained on the Scottish east coast was ascribed to the presence of different species or strains of toxic algae.
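The chi-square association tests reported in the Results (season versus sampling location, 1 degree of freedom) can be reproduced for a 2×2 presence/absence table. A minimal stdlib-only Python sketch follows; the counts are illustrative only (the authors analyzed their data in SPSS), and the p-value uses the fact that for 1 degree of freedom the chi-square survival function equals erfc(√(x/2)):

```python
import math

def chi2_2x2(table):
    """Pearson chi-square statistic and p-value (1 df) for a 2x2
    contingency table given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for i, obs in enumerate((a, b, c, d)):
        expected = rows[i // 2] * cols[i % 2] / n
        stat += (obs - expected) ** 2 / expected
    # With 1 degree of freedom, P(X >= stat) = erfc(sqrt(stat / 2)).
    return stat, math.erfc(math.sqrt(stat / 2))

# Illustrative counts: detections vs. non-detections, wet vs. dry season.
stat, p = chi2_2x2([[30, 70], [50, 50]])
```

A statistic of 0 (identical proportions in both rows) gives p = 1; larger statistics give smaller p-values.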
Seaport routes through which ballast waters carrying toxic species move in and out of countries have been recognized as sources of toxin contamination (Zhang and Dickman, 1999; Doblin et al., 2004). It is interesting to note that shipping traffic in Nigeria is considerably higher in the SW than in the SS region. ASP and PSP have been detected in northern and central California (2002 to 2007) and in many other areas worldwide (Rositano et al., 2001). The presence of domoic acid, the toxin responsible for ASP, was also reported in northern and western European coastal zones (Lundholm et al., 1995; Dizer et al., 2001). On the Australian coast, Ajani et al. (2017) identified, amidst other syndromes, PSP and ASP, describing them as major causes of worry, culminating in a huge loss of approximately AUD$23M in Tasmania in 2012. The higher prevalence of PSP in comparison to ASP across locations and zones could be explained by the fact that only one algal genus, Pseudo-nitzschia within the diatom group, is responsible for ASP, while quite an assortment of different genera/species of the dinoflagellate group and blue-green algae are responsible for PSP. Pseudo-nitzschia delicatissima, P. multiseries, P. cuspidata, P. pungens, and P. australis are the diatom species generally implicated in the biosynthesis of DA (domoic acid) responsible for ASP (Ajani et al., 2017). Ever since the first report of a PSP event (illness and deaths) near San Francisco, CA, United States, caused by Alexandrium catenella, members of three dinoflagellate genera, namely Alexandrium, Gymnodinium, and Pyrodinium, have been reported as the major sources of PSP toxins (Shumway, 1990). Gymnodinium catenatum, Alexandrium catenella, A. acatenella, A. fundyense, A. minutum, A. tamarense, A.
ostenfeldii, Pyrodinium bahamense (Ajani et al., 2017), Cylindrospermopsis raciborskii, Aphanizomenon flos-aquae, Lyngbya spp., and Anabaena circinalis (Lucas et al., 2005; Bittencourt-Oliveira et al., 2015) all synthesize the saxitoxins responsible for PSP in both marine and fresh water. Most of these species have also been identified from the Nigerian coast (Kadiri et al., 2016). The seasonal variation observed, with higher PSP/ASP in the rainy season relative to the dry season, may have resulted from favorable conditions enhancing eutrophication and a higher influx and proliferation of toxin producers brought by the rains in both the SS and SW zones. The linkage of harmful algae to eutrophication has been documented and corroborated by other workers (Anderson et al., 2002). Coastal eutrophication, or nutrient enrichment, is invoked by high inorganic nutrient loads from river discharges (Hodgkiss and Lu, 2004), and this culminates in HABs (Wang et al., 2003; Imai et al., 2006). Nutrient enrichment in coastal areas arises from high inorganic nutrients in freshwater runoff, sewage discharge, agricultural fertilizers, and nearby high-density coastal aquaculture (Qian and Liang, 1999). The phosphate load in aquatic ecosystems can also be attributed to farming activities: farmers apply phosphate fertilizers to their farms, and surface run-off from the farms can increase the phosphate load in river water. The syndromes ASP and PSP are characterized by several human health hazards. ASP is caused by domoic acid, a water-soluble tricarboxylic amino acid and a potent glutamate receptor agonist. The symptomatic effects of ASP include gastrointestinal effects (e.g., nausea, vomiting, and diarrhea) and neurologic effects such as brain lesions, dizziness, disorientation, lethargy, seizures, short-term memory loss (amnesia), coma, and death (Quilliam and Wright, 1989; Hiolski et al., 2014). Hiolski et al.
(2014) found that even asymptomatic exposure significantly altered the transcription of genes for neurological function and development, and impaired mitochondrial function. Paralytic shellfish poisoning is the oldest known intoxication and one of the most dangerous for humans, with a high rate of mortality. It is a worldwide-distributed poisoning, with cases reported for North and South America, Europe, Africa and Asia (Zaccaroni and Scaravelli, 2008). It is reported that about 1,600 cases of PSP occur worldwide yearly (Rodrigue et al., 1990). Though PSP was formerly more prevalent in temperate countries, it is now increasing in tropical regions (Rodrigue et al., 1990). The report of Rodrigue et al. (1990), citing an epidemic occurrence of PSP in Guatemala, indicated that the symptoms (persistent headaches, memory loss and fatigue) persisted for weeks in some instances. Paralytic shellfish poisoning is characterized by gastrointestinal and neurological symptoms, with nausea, vomiting, diarrhea, tingling or numbness around the lips, gradual and increasingly severe paralysis, respiratory difficulty, and death in humans through respiratory paralysis (Kodama, 2000; Zaccaroni and Scaravelli, 2008). The PSP syndrome is caused by a suite of heterocyclic guanidines collectively called saxitoxins (STXs), which are heat-stable (thermostable), water-soluble, non-proteinaceous toxins. Saxitoxin (STX) is one of the few toxins produced by both marine algae and freshwater cyanobacteria. Saxitoxins are responsible for about 2,000 human cases per year, with a mortality rate ranging from 15 to 50% (Van Dolah, 2000; Marcaillou-Le Baut et al., 2001). Trainer and Hardy (2015) substantiated the high risk to human health following the detection of these syndrome-causing toxins in diets, water, stomach contents and ecological samples. Similarly, Repavich et al. (1990) reported the demise of cattle and other mammals from acute and chronic effects of these toxins.
Pulido (2016) also found that blooms of toxic Pseudo-nitzschia off the west coast of North America caused the ASP syndrome when crabs and clams were contaminated, leading to the shutdown of many harvesting centers and cautions issued by regulatory agencies to end users.

Conclusion

The algal syndromes ASP and PSP have been recorded at substantial levels in coastal areas of Nigeria. These syndromes have various symptomatic effects on humans. It is evident from this study that Nigeria, which currently has no established harmful algal bloom/toxic algal monitoring program, has dangerous harmful algal syndromes throughout its entire coast. Beyond this pioneer qualitative study, which has established unequivocally the presence of these syndromes (and perhaps others yet to be investigated) within SW and SS Nigeria, there is an absolute need for further research to quantify the actual toxin concentrations in our freshwater and marine ecosystems. The need for urgent, regular monitoring programs for Nigerian coastal waters is therefore emphasized.

Author Contributions

MK was the principal investigator and conceptualized and executed the project. SI assisted in the work.

Funding

The Tertiary Education Trust Fund is gratefully acknowledged for the National Research Fund (TETF/NRF 2009) grant provided for this study.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Denise Mukoro, Jeffrey Ogbegbor, Osasere Omoruyi and Timothy Unusiotame are also gratefully acknowledged for their assistance with sample collection.

References

Ajani, P., Harwood, D. T., and Murray, S. A. (2017). Recent trends in marine phycotoxins from Australian coastal waters. Mar. Drugs 15, 33–53. doi: 10.3390/md15020033

Ajuzie, C., and Houvenaghel, G. (2009).
Preliminary survey of potentially harmful dinoflagellates in Nigeria's coastal waters. Fottea 9, 107–120. doi: 10.5507/fot.2009.010

Anderson, D. M., Glibert, P. M., and Burkholder, J. M. (2002). Harmful algal blooms and eutrophication: nutrient sources, composition and consequences. Estuaries 25, 562–584. doi: 10.1007/BF02804901

Arapov, J. A. (2013). Review of shellfish phycotoxin profile and toxic phytoplankton species along Croatian coast of the Adriatic Sea. Acta Adriat. 54, 283–298.

Bittencourt-Oliveira, M. C., Chia, A. M., de Oliveira, H. S. B., Cordeiro-Araújo, M. K., Molica, R. J. R., and Dias, C. T. S. (2015). Allelopathic interactions between microcystin-producing and non-microcystin-producing cyanobacteria and green microalgae: implications for microcystins production. J. Appl. Phycol. 27, 275–284. doi: 10.1007/s10811-014-0326-2

Delmas, D., Herbland, A., and Maestrini, S. Y. (1992). Environmental conditions which lead to increase in cell density of the toxic dinoflagellates Dinophysis spp. in nutrient-rich and nutrient-poor waters of the French Atlantic coast. Mar. Ecol. Prog. Ser. 89, 53–61. doi: 10.3354/meps089053

Dizer, H., Fischer, B., Harabawy, A. S. A., Hennion, M. C., and Hansen, P. D. (2001). Toxicity of domoic acid in the marine mussel Mytilus edulis. Aquat. Toxicol. 55, 149–156. doi: 10.1016/S0166-445X(01)00178-3

Doblin, M. A., Popels, L. C., Coyne, K. J., and Hutchins, D. A. (2004). Transport of the harmful bloom alga Aureococcus anophagefferens by ocean-going ships and coastal boats. Appl. Environ. Microbiol. 70, 6495–6500. doi: 10.1128/AEM.70.11.6495-6500.2004

Gerssen, A., Pol-Hofstad, I. E., Poelman, M., Mulder, P. P. J., van den Top, H. J., and de Boer, J. (2010). Marine toxins: chemistry, toxicity, occurrence and detection, with special reference to the Dutch situation. Toxins 2, 878–904. doi: 10.3390/toxins2040878

Hallegraeff, G. M. (1998). Transport of toxic dinoflagellates via ships' ballast water: bioeconomic risk assessment and efficacy of possible ballast water management strategies. Mar. Ecol. Prog. Ser. 168, 297–309. doi: 10.3354/meps168297

Hiolski, E. M., Preston, S. K., Frame, E. R., Myers, M. S., Bammler, T. K., Beyer, R. P., et al. (2014). Chronic low-level domoic acid exposure alters gene transcription and impairs mitochondrial function in the CNS. Aquat. Toxicol. 155, 151–159. doi: 10.1016/j.aquatox.2014.06.006

Hodgkiss, I. J., and Lu, S. H. (2004). The effect of nutrients and their ratios on phytoplankton abundance in Junk Bay, Hong Kong. Hydrobiologia 512, 215–229. doi: 10.1023/B:HYDR.0000020330.37366.e5

Imai, I., Yamaguchi, M., and Hori, M. (2006). Eutrophication and occurrences of harmful algal blooms in the Seto Inland Sea, Japan. Plankton Benthos Res. 1, 71–84. doi: 10.3800/pbr.1.71

Kadiri, M. O. (2002). A spectrum of phytoplankton flora along salinity gradient in the Eastern Niger Delta area of Nigeria. Acta Bot. Hung. 44, 75–83. doi: 10.1556/ABot.44.2002.1-2.6

Kadiri, M. O. (2011). Notes on harmful algae from Nigerian coastal waters. Acta Bot. Hung. 53, 137–143. doi: 10.1556/ABot.53.2011.1-2.12

Kadiri, M. O., and Isagba, S. (2016). "PCR and enzyme-linked immunosorbent assay of microcystin in the Bights of Bonny and Benin, Nigeria," in Proceedings of the 2nd University of Benin Annual Research Day (UBARD) Conference, Benin, 449–452.

Kadiri, M. O., Ogbebor, J. U., and Omoruyi, O. A. (2016). "Spatial distribution of some potentially harmful algae in coastal waters of Nigeria," in Proceedings of the 2nd University of Benin Annual Research Day (UBARD) Conference, Benin, 363–366.

Kodama, M. (2000). "Ecology, classification, and origin," in Seafood and Freshwater Toxins: Pharmacology, Physiology and Detection, eds L. Botana and M. Dekker (New York, NY: Taylor & Francis), 125–150.

Kuruk, P. (2004). Customary Water Laws and Practices: Nigeria.

Lucas, B., Dahlmann, J., Erler, K., Gerdts, G., Wasmund, N., and Hummert, C. (2005). Overview of key phytoplankton toxins and their recent occurrence in the North and Baltic Sea. Environ. Toxicol. 20, 1–17. doi: 10.1002/tox.20072

Lundholm, N., Skov, J., Pocklington, R., and Moestrup, O. (1995). Domoic acid, the toxic amino acid responsible for amnesic shellfish poisoning, now in Pseudo-nitzschia seriata (Bacillariophyceae) in Europe. Phycologia 33, 475–478. doi: 10.2216/i0031-8884-33-6-475.1

Marcaillou-Le Baut, C., Krys, S., and Bourdeau, P. (2001). "Syndromes observés et données épidémiologiques," in Toxines d'algues dans l'alimentation, eds J. M. Frémy and P. Lassus (Issy-les-Moulineaux: Ifremer), 371–399.

Nwankwo, D. I. (1991). A survey of the dinoflagellates of Nigeria: armoured dinoflagellates of Lagos lagoon and associated tidal creeks. Niger. J. Bot. 4, 49–60.

Pulido, O. M. (2016). Phycotoxins by harmful algal blooms (HABS) and human poisoning: an overview. Int. Clin. Pathol. J. 2:00062. doi: 10.15406/icpjl.2016.02.00062

Qian, H. L., and Liang, S. (1999). Study on the red tide in the Pearl River estuary and its near waters. Mar. Environ. Sci. 18, 69–74.

Quilliam, M., and Wright, J. (1989). The amnesic shellfish poisoning mystery. Anal. Chem. 61, 105–106. doi: 10.1021/ac00193a745

Repavich, W. M., Sonzogni, W. C., Standridge, J. H., and Wedepohl, R. E. (1990). Cyanobacteria (blue-green algae) in Wisconsin waters: acute and chronic toxicity. Water Res. 24, 225–231. doi: 10.1016/0043-1354(90)90107-H

Rodrigue, D. C., Etzel, R. A., Hall, S., de Porras, E., Velasquez, O. H., Tauxe, R. V., et al. (1990). Lethal paralytic shellfish poisoning in Guatemala. Am. J. Trop. Med. Hyg. 42, 267–271. doi: 10.4269/ajtmh.1990.42.267

Rositano, J., Newcombe, G., Nicholson, B., and Sztajnbok, P. (2001). Ozonation of algal toxins in four treated waters. Water Res. 35, 23–32. doi: 10.1016/S0043-1354(00)00252-9

Shumway, S. E. (1990). A review of the effects of algal blooms on shellfish and aquaculture. J. World Aquac. Soc. 21, 65–104. doi: 10.1111/j.1749-7345.1990.tb00529.x

Trainer, V. L., and Hardy, F. J. (2015). Integrative monitoring of marine and freshwater harmful algae in Washington State for public health protection. Toxins 7, 1206–1234. doi: 10.3390/toxins7041206

Van Dolah, F. M. (2000). Marine algal toxins: origins, health effects, and their increased occurrence. Environ. Health Perspect. 108, 133–141. doi: 10.1289/ehp.00108s1133

Wang, D. (2008). Neurotoxins from marine dinoflagellates: a brief review. Mar. Drugs 6, 349–371. doi: 10.3390/md20080016

Wang, H. K., Huang, L. M., Huang, X. P., Song, X. Y., Wang, H. J., Wu, N. J., et al. (2003). A red tide caused by Gyrodinium instriatum and its environmental characters in Zhujiang River estuary. J. Trop. Oceanogr. 22, 55–62.

Zaccaroni, A., and Scaravelli, D. (2008).
"Toxicity of sea algal toxins to humans and animals," in Algal Toxins: Nature, Occurrence, Effect and Detection, eds V. Evangelista, L. Barsanti, A. M. Frassanito, V. Passarelli, and P. Gualtieri (Berlin: Springer Science + Business Media), 91–157. doi: 10.1007/978-1-4020-8480-5_4

Zendong, Z., Kadiri, M., Herrenknecht, C., Nezan, E., Mazzeo, A., and Hess, P. (2016). Algal toxin profiles in Nigerian coastal waters (Gulf of Guinea) using passive sampling and liquid chromatography coupled to mass spectrometry. Toxicon 114, 16–27. doi: 10.1016/j.toxicon.2016.02.011

Zhang, F., and Dickman, M. (1999). Mid-ocean exchange of container vessel ballast water, 1: seasonal factors affecting the transport of harmful diatoms and dinoflagellates. Mar. Ecol. Prog. Ser. 176, 243–251. doi: 10.3354/meps176243

Keywords: harmful algae syndrome, amnesic shellfish poisoning, paralytic shellfish poisoning, domoic acid, saxitoxin

Citation: Kadiri MO and Isagba S (2018) Amnesic Shellfish Poisoning (ASP) and Paralytic Shellfish Poisoning (PSP) in Nigerian Coast, Gulf of Guinea. Front. Mar. Sci. 5:481. doi: 10.3389/fmars.2018.00481

Received: 11 February 2018; Accepted: 29 November 2018; Published: 21 December 2018.

Edited by: Rathinam Arthur James, Bharathidasan University, India

Reviewed by: Yelda Aktan, Istanbul University, Turkey; Chidambaram Sabarathinam, Annamalai University, India

Copyright © 2018 Kadiri and Isagba. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Correspondence: Medina Omo Kadiri, [email protected]; [email protected]
190665
https://math.stackexchange.com/questions/3872241/about-spherical-coordinates
About spherical coordinates

Asked Oct 19, 2020 at 14:35 by LSS. Viewed 1k times. Tagged: spherical-coordinates.

Question (LSS): I will post an image which I believe is essential to understand the question. The figure in the book says: "Spherical coordinates $r, \theta, \phi$ are perfectly intuitive because the angles $\theta$ and $\phi$ correspond, respectively, to longitude and latitude on the surface of the Earth, and $r$ is the distance to the center of the Earth." I believe the notation is wrong: since $\theta$ is the angle with respect to the north-south axis, shouldn't it be the reverse?

Answer (K.defaoite, answered Oct 19, 2020 at 14:45): The $(r,\theta,\phi)$ convention for three-dimensional spherical polar coordinates is used in mathematics, as it follows naturally from the $(r,\theta)$ convention used for the two-dimensional case. However, physicists for some bizarre reason prefer $(r,\phi,\theta)$. What makes it even more confusing is that the very order in which the coordinates are written is also sometimes switched: some authors order them as radial, azimuthal, polar, which seems reasonable, whereas some authors use radial, polar, azimuthal instead. So even if the author says "I'll use the $(r,\phi,\theta)$ convention," it might not even mean what you think it does! So, if you're using spherical coordinates, it's best to explicitly state the coordinate transformation you're using.
Here's a passage from some recent work of mine: "I will also use the $(r,\theta,\phi)$ convention for spherical coordinates, as used by Wolfram MathWorld. Explicitly, the coordinate transformation is
\begin{equation}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} r\cos \theta \sin \phi \\ r\sin \theta \sin \phi \\ r\cos \phi \end{bmatrix}
\end{equation}"
Suggested reading:

Comment (LSS, Oct 19, 2020 at 14:50): Yes, this convention gives the angles names that agree with what I believe is the geographical convention, that is, $\phi$ measures the latitude and $\theta$ the longitude. So see my post image, where $\phi$ and $\theta$ are swapped; I believe the author changed the angles in the figure but kept their names in agreement with your link.

Answer (Joshua Wang, answered Oct 19, 2020 at 14:40): Which textbook is this taken from? If it is a mathematics textbook, the convention is that $\phi$ corresponds to latitude and $\theta$ to longitude, but if it is a physics textbook, then $\theta$ will usually correspond to latitude and $\phi$ to longitude. This sort of confusion is why I usually stay away from spherical coordinates.

Comment (LSS): Is it latitude and longitude in the geographical sense? Because that is what is giving me trouble; but if it is a definition that the angle $\theta$, for example, is called latitude, OK.
It is from "Introduction to Tensor Analysis" by Pavel.

Comment (Joshua Wang, Oct 19, 2020): It seems that you are reading a mathematics textbook, so yes, $\theta$ and $\phi$ correspond to geographical longitude and latitude (the "prime meridian" is the xz-plane with positive x, and the "north pole" is the positive z-axis).
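To make the convention ambiguity concrete, here is a small Python sketch (my own illustration, not from the thread; function names are assumptions) of the MathWorld-style $(r,\theta,\phi)$ transformation quoted in the first answer, with a wrapper showing that the physics-style symbols describe the same geometry under swapped names:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Math/MathWorld (r, theta, phi) convention quoted in the answer:
    theta is the azimuthal angle (longitude) and phi is the polar angle
    measured down from the +z axis."""
    x = r * math.cos(theta) * math.sin(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(phi)
    return x, y, z

def physics_to_cartesian(r, theta, phi):
    """Physics-style symbols: theta is the polar angle and phi the
    azimuthal angle. Same geometry; only the symbol names are swapped."""
    return spherical_to_cartesian(r, theta=phi, phi=theta)

# The "north pole" (polar angle 0) lies on the +z axis in both conventions:
print(spherical_to_cartesian(1.0, 0.0, 0.0))  # (0.0, 0.0, 1.0)
```

This is exactly why the answer recommends stating the transformation explicitly: the same triple of numbers maps to different points depending on which angle is taken as polar.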
https://www.hoy.com.py/ocio/rae/2025/06/06/mientras-y-mientras-que-usos-diferentes
Diario HOY | RAE — June 6, 2025, 13:23

Mientras and mientras que: different uses

Photo caption: The RAE explained the uses of "mientras" and "mientras que." Photo: Archive

The words "mientras" and "mientras que" are used to express temporality, but also to indicate contrast. The RAE reminds us of the function of each one.

In its space for weekly consultations, the Real Academia Española (RAE) laid out the differences between "mientras que" and "mientras," given the likelihood of confusion and of the two being used interchangeably.

According to the explanation, to convey a temporal meaning, only "mientras" should be used. Example: "Espere en línea mientras procesamos su pedido" ("Please hold the line while we process your order").

By contrast, to express opposition, the conjunctive phrase "mientras que" is recommended. Hypothetical case: "Muchos permanecerán en sus casas, mientras que otros optarán por salir" ("Many will stay home, whereas others will choose to go out").

There is another possible use of "mientras que," also with a temporal meaning: "Pongo a freír un poco de carne y cebolla, y pongo la mesa mientras que se cocinan" ("I start frying some meat and onion, and I set the table while they cook"). This use belongs to classical and medieval Spanish, but it survives in many countries of the Americas.

The RAE section of the newspaper HOY aims to promote good use of the Spanish language, based on the rulings of the RAE, the highest authority on the language, which over the years changes some rules and proposes adaptations as needed.
RAE — September 26, 2025, 13:39

The impressive variety of uses of the ellipsis

Photo caption: Headquarters of the Real Academia Española.

There are three dots and only three (the suspense does not grow by adding more), but their uses and functions go much further. Ellipsis points can be used in many different contexts; we review some of them.

According to the Ortografía de la lengua española, the ellipsis is a punctuation mark formed by three consecutive dots (…), with no space between them. It always indicates that something is missing to complete the discourse; that is, it signals a suspension or an omission. That absence may respond to the writer's wish to leave the statement in suspense — for purely emphatic effect, or to express certain states of mind or attitudes of the speaker toward what is being said — or simply to the convenience or necessity of omitting a stretch of text.
As markers of modality — that is, when they convey information about the speaker's attitude or intention toward the message — the following cases can arise:

a) A pause expressing doubt, fear, or hesitation: "Te llaman del hospital… Espero que sean buenas noticias"; "Quería preguntarte… No sé…, bueno…, que si quieres ir conmigo a la fiesta."

b) To create expectation: "Si yo te contara…" They are also used when the statement is left incomplete and in suspense for any other reason: "Fue todo muy violento, estuvo muy desagradable… No quiero seguir hablando de ello."

c) Sometimes, without implying omission, they signal pauses that emphatically delay the statement: "Ser… o no ser… Esa es la cuestión."

d) In dialogue, they mark significant silences between the speakers: "—¿Eso era lo que me tenías que decir? —No, que la inmundicia serás vos y no la película. Y no me hables más. —Discúlpame. —…"

Likewise, they indicate to the reader that a word or group of words has been omitted. They can also hint at, while avoiding reproducing, rude or inappropriate words: "¡Qué hijo de… está hecho!"

RAE — September 24, 2025, 10:02

Coser-cocer, cabo-cavo, and other paronyms

Photo caption: The action of joining with needle and thread is called "coser." Photo: Courtesy

In Spanish there are words that look very similar in writing but differ by just one letter. These cases are called paronyms, and they can cause confusion. We review some examples.
A word is a paronym of another if it resembles it in etymology, form, or sound, according to the Fundación del Español Urgente.

Deshecho: from the verb deshacer (to undo). Desecho (no h): waste, refuse.

Coser: to join with thread, to sew. Cocer: to cook, to boil.

Cabo: each of the ends of a thing. Cavo: first-person singular present indicative of the verb cavar (to dig).

Consejo: a recommendation, advice. Concejo: a municipality, town council.

Siervo: a serf or slave. Ciervo: a deer.

Vaca: a cow. Baca: a roof rack; the place on top of stagecoaches and other road carriages where passengers could ride and where luggage and other effects were stowed under a cover.

Acechar: to watch, to lie in wait cautiously with some purpose. Asechar: to set traps or snares.

RAE — September 22, 2025, 13:22

"El más preferido," a redundant expression

Photo caption: White chocolate is the favorite of many. Photo: archive.

Saying "el más preferido" ("the most preferred") is an ill-formed phrase, since it contains a redundancy. We review why, and compare it with other similar cases.

The word "preferido" is equivalent to "favorito," meaning predilect, preferred above all others. When we say preferido, we already indicate something or someone that is liked the most.
We also indicate something that stands above all other options. Preferido already implies maximum preference; therefore, saying "el más preferido" is as redundant as "el más mejor" ("the most best") — the word "más" merely repeats the superlative idea.

All of this applies to "favorito" as well: one cannot be "el más favorito," since the word itself already carries that load of maximum preference. The appropriate forms are: el preferido, el favorito, or, if anything, el que más gusta, el que más agrada. In the case of mejor, the appropriate form is el mejor.
https://qubeshub.org/resources/browse?tag=resourcesqubesteachingreference,teachingmaterial,biology&sortby=date&limit=20&start=100
QUBES - Resources: All

Filters: resourcesqubesteachingreference, teachingmaterial, biology · sorted by date

Mendelian Genetics, Probability, Pedigree, and Chi-Squared Statistics
Teaching & Reference Material | 03 Oct 2015 | Contributor(s): Anne Brokaw, Michelle Garber-Talamo
This resource has been updated - find the current version here. In this classroom activity, students are introduced to the genetics of sickle cell disease by the short film The Making of the Fittest: Natural Selection in Humans. A classroom handout...
Tags: chi-squared, genetics, in-class activity, pedigrees, Presentation, Probability, problem set, Resources @ HHMI - BioInteractive, Resources @ QUBES - Teaching & Reference, Teaching material

Exponential Growth by Bozeman Science
Teaching & Reference Material | 03 Oct 2015 | Contributor(s): Paul Andersen
YouTube video.
Tags: Enrichment, population growth, Presentation, Reference material, Resources @ QUBES - Teaching & Reference, videos

BioNumbers—the database of key numbers in molecular and cell biology
Teaching & Reference Material | 03 Oct 2015 | Contributor(s): Ron Milo, Paul Jorgensen, Uri Moran, Griffin Weber, Michael Springer
This resource has been updated - find the current version here. BioNumbers is a database of key numbers in molecular and cell biology—the quantitative properties of biological systems...
Tags: Article, data, database, estimation, Numbers, Reference material, Resources @ BioNumbers, Resources @ QUBES - Teaching & Reference, Website

Using Evolutionary Data in Developing Phylogenetic Trees: A Scaffolded Approach with Authentic Data
Teaching & Reference Material | 01 Oct 2015 | Contributor(s): Kd Davenport, Kirsten Jane Milks, Rebecca Van Tassell
Analyzing evolutionary relationships requires that students have a thorough understanding of evidence and of how scientists use evidence to develop these relationships. In this lesson sequence, students work in groups to process many different lines of evidence of evolutionary relationships...
Tags: Article, data analysis, Evolution, phylogeny, Reference material, Resources @ QUBES - Teaching & Reference, science practices, scientific argumentation, Tree thinking

A Laboratory Class Exploring Microbial Diversity and Evolution Using Online Databases, the Biology Workbench, and Phylogenetics Software
Teaching & Reference Material | 29 Sep 2015 | Contributor(s): Sarah Boomer, Kelly Shipley, Bryan Dutton, Daniel Lodge
Students assemble and align bacterial datasets using DNA information downloaded from the National Center for Biotechnology Information website and Biology Workbench. Specifically, they compare unknown original DNA sequences (from, in our case, hot spring communities) to a backbone of...
Tags: biology workbench, in-class activity, lab protocol, microbiology, NCBI, phylogenetics, Resources @ QUBES - Teaching & Reference, Teaching material

Introduction to Mathematical Modeling in Biotechnology, Cell Biology, and Physiology
Teaching & Reference Material | 27 Sep 2015 | Contributor(s): Borbala Mazzag, Kamila Larripa
This resource has been updated - find the current version here. This course is a mathematical and computational exploration of five diverse areas of biology: human locomotion, gene sequence analysis, signal transduction...
Tags: course, gene sequence analysis, human locomotion, mathematical modeling, neuroanatomy, neurophysiology, Resources @ QUBES - Teaching & Reference, signal transduction, Teaching material

Data Analysis for the Life Sciences
Teaching & Reference Material | 25 Sep 2015 | Contributor(s): Rafael A Irizarry, Michael I Love
This resource has been updated - find the current version here. An online stats book written completely in R! From the authors: "Data analysis is now part of practically every research project in the life sciences. In this..."
Tags: biostatistics, Book, Manual, R, Reference material, Resources @ QUBES - Teaching & Reference, R Markdown, statistical analysis, Website

The Analysis of Biological Data: textbook and website
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Michael Whitlock, Dolph Schluter
The Analysis of Biological Data is a new approach to teaching introductory statistics to biology students. To reach this unique audience, Whitlock and Schluter motivate learning with interesting biological and medical examples; they emphasize intuitive understanding; and they focus on real data...
Tags: Book, Reference material, Resources @ QUBES - Teaching & Reference, Website

Performing a Population Viability Analysis from Data Students Collect on a Local Plant
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Noah Chamey, Sydne Record
This resource has been updated - find the current version here. Over two lab periods, students collect demographic data on perennial plants and then use these data in a matrix model to perform population viability analyses. During the first lab...
Tags: data analysis, in-class activity, lab protocol, labs, matrix model, R, Resources @ QUBES - Teaching & Reference, Resources @ TIEE, Teaching material

Global Temperature Change in the 21st Century
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Daniel R. Taub, Gillian S. Graham
This resource has been updated - find the current version here. Ecological Question: How might global temperature change during the 21st century? How might these changes vary geographically, seasonally, and depending upon future human...
Tags: abiotic environment, climate change, global change, in-class activity, Resources @ QUBES - Teaching & Reference, Resources @ TIEE, Teaching material

Exploring the Population Dynamics of Wintering Bald Eagles Through Long-Term Data
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Julie Beckstead, Alexandra Lagasse, Scott Robinson
This resource has been updated - find the current version here. Ecological Question: How does a bald eagle population change over time at a winter migratory stopover, and which factors influence its abundance? This activity has two different aspects to...
Tags: case study, conservation biology, endangered species, in-class activity, migration ecology, population ecology, Resources @ QUBES - Teaching & Reference, Resources @ TIEE, Teaching material

Changes in Lake Ice: Ecosystem Response to Global Change
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Robert E. Bohanan, Marianne Krasny, Adam Welman
This resource has been updated - find the current version here. Question: Is there evidence for global warming in long-term data on changes in dates of ice cover in three Wisconsin lakes? This activity uses ice cover records from three lakes in...
Tags: climate change, in-class activity, Interpreting Data, Resources @ QUBES - Teaching & Reference, Resources @ TIEE, Teaching material

HHMI Teacher Guide: Math and Statistics
Teaching & Reference Material | 23 Sep 2015 | Contributor(s): Paul Strode, Ann Brokaw
This resource has been updated - find the current version here. Topics include measures of average (mean, median, and mode), variability (range and standard deviation), uncertainty (standard error and 95% confidence interval), Chi-square analysis, student...
Tags: guide, Manual, Reference material, Resources @ HHMI - BioInteractive, Resources @ QUBES - Teaching & Reference, statistics

Results 101 - 120 of 238

QUBES is supported by the National Science Foundation and other funding agencies.
https://www.khanacademy.org/science/ap-chemistry-beta/x2eef969c74e0d802:acids-and-bases/x2eef969c74e0d802:molecular-structure-of-acids-and-bases/v/factors-affecting-acid-strength
Factors affecting acid strength (video) | Khan Academy

AP®︎/College Chemistry > Unit 8 > Lesson 4: Molecular structure of acids and bases
AP.Chem standards: SAP‑9 (EU), SAP‑9.F (LO), SAP‑9.F.1 (EK)

About this video: The relative strength of an acid can be predicted based on its chemical structure. In general, an acid is stronger when the H–A bond is more polar. Acidity is also greater when the H–A bond is weaker and when the conjugate base, A⁻, is more stable. Created by Jay.

Questions, tips & thanks:
dddangilanc (4 years ago): I thought that as bond polarity increases (a greater difference in electronegativity between the two bonded elements), bond strength should also increase?

Laura Murphy (2 years ago): How does having two resonance structures make the molecule more stable? Wouldn't a molecule with only one possible structure be more stable, because it doesn't have to delocalize the charge and create half-bonds? What does being stable mean?

Richard (2 years ago): Being stable in a chemistry context means matter has minimal energy and is unreactive, compared to unstable particles, which have higher energy and eagerly react with other particles. Molecules with nonzero formal charges on their atoms have more potential energy than molecules with no formal charges (0 formal charge) on their atoms. A negative formal charge, for example, means there is an excess of electrons, which means greater electron-electron repulsions, which translates to greater potential energy and therefore less stability.
If a molecule is able to distribute that excess formal charge through resonance structures, each atom has less potential energy concentrated onto itself. For the acetate ion at the end, the resonance structures distribute the -1 formal charge over the two oxygen atoms, so in the resonance hybrid they each carry a -1/2 charge instead (less charge is more stable). Hope that helps.

lennyarms (2 years ago): Wouldn't a higher bond polarity make the overall bond strength stronger, so that the acid would be weaker? For HF, for example, the bond is very polar, but HF is a weak acid.

Elijah Daniels (2 years ago): Bond strength doesn't really depend on the polarity of the bond. It depends much more on other factors, such as bond dissociation energy, number of bonds, and bond length. HF is a weak acid for a different reason (I also questioned why it was a weak acid when I took Gen Chem in college). It's mainly due to the size of F and the electronegativity of F. Fluorine is, relatively speaking, a very small atom, yet it is so electronegative and wants electrons desperately. Because F is small, the F⁻ ion has a high charge density, which is what leads it to not be stable (recall that a low charge density is more stable and a high charge density is less stable). This idea, paired with how electronegative F is, leads HF to not be a strong acid.
HF doesn't dissociate well because F needs the help of H to share some of the charge density (even though F wants H's electrons; talk about a double standard).

jaehyung.jason (a year ago):
At 1:37, why is it that the more stable A- is, the more likely HA will donate a proton? Is it because all molecules want to reach their most stable form?

Richard (a year ago):
Yep, pretty much. If matter has the ability to be more stable, it will take that opportunity. If the products of an acid-base reaction are stable, then the acid-base reaction is more likely to occur. Hope that helps.

Nahom (4 years ago):
I thought that acetic acid and other organic acids are generally regarded as weak acids.

Richard (4 years ago):
They are weak acids, but in this video weak acids are just being compared to other weak acids. Being called "stronger" in this case is relative, i.e. being the strongest of the weak acids. Hope that helps.

crazylionheart (4 months ago):
At 4:58, Jay compares hypochlorous acid, chlorous acid, and perchloric acid. In chlorous acid and perchloric acid, why aren't the additional oxygens double-bonded? I thought that Cl (because it can use the additional d orbitals) can form different numbers of bonds, but only 1, 3, 5, and 7, not 2 or 4. Also, Jay said that perchloric acid was the strongest because it had the most electronegative atoms bonded to it, which I suppose has an effect, but (supposing the oxygens are double-bonded) I thought the main reason for its strength was that after the H+ has been donated, the other three oxygens provide resonance stabilization. Please help! Thank you. P.S. I noticed that the oxygens portrayed had only one bond and three electron pairs, but they didn't have a -1 formal charge.

Richard (3 months ago):
Jay presents one of the possible resonance structures for chlorous and perchloric acid. The Lewis structures Jay has drawn for those two are valid; they have the correct number of valence electrons and the connections are correct.
We can of course get the other resonance structures by relocating some of the oxygens' lone pairs to form double bonds to the chlorine. The difference between these resonance structures is whether or not the chlorine has an expanded octet, and also the presence of formal charges. In the structures which Jay presents, both the chlorine and oxygen atoms have complete octets, which grants some degree of stability. However, these structures have formal charges on those atoms too. In chlorous acid, the chlorine will have a 1+ formal charge and the terminal oxygen a 1- formal charge. In perchloric acid, the chlorine will have a 3+ formal charge and the terminal oxygens each a 1- formal charge. The presence of these formal charges, particularly in perchloric acid's case, makes these structures less stable. Now, when we inspect the resonance structures of these acids in which the terminal oxygens form double bonds to the chlorines, the results are swapped. The chlorines now have expanded octets to accommodate those double bonds, which chlorine can do because it has access to d orbitals; however, that is less stable than not exceeding its octet. But when we calculate the formal charges, all atoms have 0 formal charge, which is very stable. Given this, chlorous and perchloric acid both have resonance structures, but of different stabilities. The resonance structures where the terminal oxygens form double bonds are more stable and so would be the major contributors, whereas the structures Jay has shown here are minor contributors because of the formal charges. Concerning the strength of the chlorine oxyacids, though, two factors affect their relative strengths. The presence of an increasing number of oxygen atoms draws a greater amount of electron density away from the O-H bond, thus making the hydrogen more acidic. This is the inductive effect which Jay mentioned.
The other factor is the one you mentioned, where the conjugate bases of these acids are resonance stabilized. The greater number of oxygens allows the conjugate base to delocalize the negative charge over a greater region and thus creates a more stable conjugate base. The more stable a conjugate base, the stronger the acid. Both of these factors contribute to the trend in acidity for these oxyacids. Like Jay mentioned, acid strength is usually a combination of several factors at work. Jay mentioned that he deliberately left out the formal charges of the Lewis structures at 5:00. Hope that helps.

richa.parmar (3 months ago):
For the oxyacids, would their conjugate bases be very reactive and unstable, making them weak acids?

Richard (3 months ago):
A less stable conjugate base does decrease the acidity of an acid in general, so this effect is not limited to just oxyacids. Also, not all oxyacids are weak. An oxyacid is simply an acid which contains at least one oxygen atom, so perchloric and nitric acid are considered oxyacids but are also strong acids. Hope that helps.

Eliza (4 months ago):
Wait, so more resonance structures means a molecule is more stable? I would have thought it was the other way around 🤔

Richard (4 months ago):
A molecule which is able to delocalize its electrons over a larger portion of the molecule will be more stable than one which is not. Electrons are a source of negative charge, and concentrating them all together results in greater potential energy. Being able to spread that negative charge out over a larger space lowers a molecule's potential energy and makes it more stable. A molecule is able to delocalize its electrons through resonance, and is said to be resonance stabilized if it does so. The more resonance structures a molecule has, the more places the electrons can be delocalized to, spreading out the charge. Hope that helps.

thomas.sun26 (5 months ago):
What about factors that affect base strength?

Richard (4 months ago):
It's essentially the same as with acid strength.
A base's strength primarily depends on the stability of its conjugate acid. A strong base will have a stable conjugate acid (with weak acidity), and a weak base will have a less stable conjugate acid. Hope that helps.

suminkwon1234 (6 months ago):
Is this content tested on the AP Chem exam?

Richard (6 months ago):
Of course.

Video transcript

[Instructor] Factors that affect acid strength include bond polarity, bond strength, and conjugate base stability. Let's think about a generic acid, HA, that donates a proton to water to form the hydronium ion, H3O+, and the conjugate base of HA, which is A-. First, let's consider the polarity of the bond between H and A. If A is more electronegative than H, A withdraws electron density, so we could draw an arrow pointing toward A as the electrons in the bond between them are pulled closer to A. As the electronegativity of A increases, there's an increase in the polarity of the bond. As the bond polarity increases, more electron density is drawn away from the H, which makes it easier for HA to donate a proton. Therefore, in general, an increase in the polarity of the H-A bond means an increase in the strength of the acid.
Next, let's think about the factor of bond strength, and let's consider the strength of the bond between H and A. The weaker the bond, the more easily the proton is donated. Therefore, in general, a decrease in the bond strength means an increase in the strength of the acid. The stability of the conjugate base can also affect the strength of the acid. The more stable the conjugate base, the more likely the acid is to donate a proton. So if you think about that for HA, the conjugate base is A-, and the more stable A- is, the more likely HA will donate a proton in solution. Therefore, in general, the more stable the conjugate base, the stronger the acid. So let's go ahead and write here: an increase in the stability of the conjugate base means an increase in the strength of the acid. Even though acid strength is usually due to all three of these factors, bond polarity, bond strength, and conjugate base stability, when we look at examples, we're only gonna consider one or two factors that are the main contributors to acid strength. Let's look at the binary acids from group 7A on the periodic table: that's hydrofluoric acid, hydrochloric acid, hydrobromic acid, and hydroiodic acid. As we go down the group from fluorine to chlorine to bromine to iodine, there's an increase in the strength of the acid. So out of these four, hydroiodic acid is the strongest. The main factor determining the strength of the binary acids in group 7A is bond strength. Looking at values for bond enthalpy allows us to figure out the strengths of these bonds. For example, the H-F bond has a bond enthalpy of 567 kilojoules per mole, while the H-I bond has a bond enthalpy of 299 kilojoules per mole. The lower the value for the bond enthalpy, the easier it is to break the bond. And because bond enthalpy decreases as we go down the group, that means there's a decrease in bond strength. A decrease in bond strength means it's easier for the acid to donate a proton.
Therefore, we see an increase in the strength of the acid as we go down the group. Next, let's compare the strengths of some oxyacids. These oxyacids all have the general formula XOH, where X is a halogen. The acidic proton is the proton that's directly bonded to the oxygen. And if an oxyacid donates its proton to water, that forms the hydronium ion, H3O+, and the conjugate base of the oxyacid. For these three oxyacids, the halogens are iodine, bromine, and chlorine. And as we go from iodine to bromine to chlorine in group 7A on the periodic table, that's an increase in electronegativity. So chlorine is the most electronegative of these three halogens. And as we go up the group of halogens, there's an increase in the strength of the acid, so hypochlorous acid is the strongest of the three. The main factor determining the strength of these oxyacids is the bond polarity, which is affected by the electronegativity of the halogen. So the polarity of this oxygen-hydrogen bond is affected by the presence of the halogen. As the electronegativity of the halogen increases, the halogen is able to withdraw more electron density away from the right side of the molecule. That increases the polarity of the O-H bond and makes it easier to donate this proton. Therefore, as the electronegativity of the halogen increases, the acidity of the oxyacid increases. This effect of an electronegative atom increasing the acidity is often referred to as the inductive effect. Let's compare hypochlorous acid to two other oxyacids, and notice how I've left the formal charges off of these acids just so we can focus on general structure. Notice how the acidic proton is directly bonded to the oxygen in all three of these oxyacids. And in all three of these oxyacids, the oxygen is directly bonded to a chlorine. Notice what happens to the structure as we move to the right.
Comparing chlorous acid to hypochlorous acid, chlorous acid has an additional oxygen bonded to the chlorine. And looking at perchloric acid, instead of only one additional oxygen, there are three additional oxygens. So as we move to the right, we're increasing the number of oxygens bonded to the chlorine. Oxygen is a very electronegative element, so as we move to the right, we're increasing the number of electronegative atoms in the acid. And as the number of electronegative atoms increases, more electron density is pulled away from the acidic proton, which increases the polarity of the oxygen-hydrogen bond. So bond polarity increases as we move to the right, which predicts an increase in the strength of the acid as we move to the right. And that's what we observe experimentally: perchloric acid is the strongest of the three. In reality, all three factors affect the strength of the acid. However, for simplicity's sake, we could just say that increasing bond polarity is the main factor for the increasing acid strength in these oxyacids. Carboxylic acids are a group of acids that all contain a carboxyl group. A carboxyl group consists of a carbon, two oxygens, and a hydrogen. So if we look at acetic acid, I'll circle the carboxyl group on acetic acid, and I can also circle the carboxyl group on formic acid. The acidic proton in a carboxylic acid is the one that's directly bonded to the oxygen in the carboxyl group. One reason why this proton is acidic is the presence of this other oxygen in the carboxyl group. This oxygen-hydrogen bond is already polarized, but the presence of another electronegative atom further increases the polarity of the oxygen-hydrogen bond. Increasing the bond polarity makes it more likely to donate the proton, which increases the acidity. For carboxylic acids, it's also important to consider the stability of the conjugate base. When acetic acid donates its proton, it turns into its conjugate base, which is the acetate anion.
Notice that the oxygen that used to be bonded to the proton now has a negative formal charge on it. There are two possible resonance structures that you can draw for the acetate anion. The first has the negative charge on this oxygen, and then we could draw another resonance structure with the negative charge on the other oxygen. In reality, neither resonance structure is a perfect representation of the acetate anion, and we need to think about a hybrid of these two resonance structures. In the hybrid, the negative charge isn't on one of the oxygens; that one negative charge is spread out, or delocalized, over the two oxygens. So it's like one oxygen has negative one-half and the other oxygen has negative one-half. This delocalization of the negative charge stabilizes the conjugate base, and the more stable the conjugate base, the stronger the acid. Therefore, the stability of the conjugate base also affects the acidity of the carboxylic acid. So because carboxylic acids have conjugate bases that are resonance stabilized, carboxylic acids like acetic acid and formic acid are more acidic. Creative Commons Attribution/Non-Commercial/Share-Alike. Video on YouTube.
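The bond-enthalpy ranking argument from the transcript can be checked with a short numeric sketch. The HF (567 kJ/mol) and HI (299 kJ/mol) values are quoted in the transcript; the HCl and HBr figures below are assumed typical textbook values, included only to complete the illustration.

```python
# Bond enthalpies of the H-X bonds in the group 7A binary acids, in kJ/mol.
# HF and HI are quoted in the transcript; HCl (431) and HBr (366) are
# assumed typical textbook values added here for illustration.
bond_enthalpy = {"HF": 567, "HCl": 431, "HBr": 366, "HI": 299}

# A lower bond enthalpy means a weaker, more easily broken H-X bond,
# so sorting by ascending enthalpy orders the acids strongest-first.
strongest_first = sorted(bond_enthalpy, key=bond_enthalpy.get)

print(strongest_first)  # ['HI', 'HBr', 'HCl', 'HF']
```

As the transcript predicts, HI comes out strongest and HF weakest of the four binary acids.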
https://www.sciencedirect.com/topics/mathematics/central-angle
Chapters and Articles

Calculus, II.C Trigonometric Expressions

A final tricky but important limit must be mentioned. Students in elementary school learn to measure angles in degrees. When the angle x is so measured, one finds that

  lim(x→0) (sin x)/x = π/180,

and in general

  d/dx (sin x) = (π/180) cos x.

This awkward number would permeate computations made in calculus, were it not for a uniformly adopted convention: radians, rather than degrees, are used to measure angles. The central angle of a circle that intercepts an arc of length equal to the radius (Fig. 5) is said to have a measure of one radian. The familiar formula for the circumference of a circle tells us that there are 2π radians in a full circle of 360°, hence that π radians = 180°. Using radians to measure the angle x, it turns out that

  lim(x→0) (sin x)/x = 1.   (8)

For this reason, radians are always used to measure angles in calculus and in engineering applications that depend on calculus.

Chapter: Railway Engineering, VII.J Track Geometry

Rail lines are composed of straight track (tangents) and curves, arcs of circles to which the straight portions are tangent. Curves have the obvious purpose of permitting changes in direction and extend from the point of curve (PC or TC) to the next tangent point (PT or CT). In the United States, curves are usually designated by the degree of curve, that is, the amount of central angle subtended by a chord of 100 ft (or an arc of 100.007 ft). Some rapid transit and most foreign railroads use the radius as the designation. A convenient relation between the degree and the radius is R = 5730/D, from which D = 5730/R. To offset a train's centrifugal force in rounding a curve, the outer rail is superelevated above the inside rail by an amount e = 0.0007DV² in., where D is the degree of curve and V is the speed in miles per hour (Fig. 14).
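The two track-geometry relations just stated, R = 5730/D and e = 0.0007DV², are simple enough to sketch in code. The example inputs (a 2-degree curve taken at 60 mph) are hypothetical values chosen only for illustration.

```python
def radius_from_degree(D):
    """Curve radius in feet from degree of curve D, per R = 5730/D."""
    return 5730.0 / D

def equilibrium_superelevation(D, V):
    """Equilibrium superelevation in inches for degree of curve D and
    train speed V in mph, per e = 0.0007 * D * V**2."""
    return 0.0007 * D * V ** 2

# Hypothetical example: a 2-degree curve traversed at 60 mph.
print(radius_from_degree(2.0))              # 2865.0 ft
print(equilibrium_superelevation(2.0, 60))  # about 5.04 in.
```

A 2-degree curve thus has a radius of 2865 ft and needs roughly 5 in. of equilibrium superelevation at 60 mph, consistent with the text's note that u is normally capped near 3 in. of imbalance.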
This e value is the equilibrium superelevation; that is, the car weight is equally distributed on both rails. However, because trains run at different speeds on the same track, a certain amount of unbalanced superelevation is permitted, whereby e_c = 0.0007DV² − u, where e_c provides a comfortable superelevation with an imbalance of u inches. The value of u is normally taken as 3 in., but it is frequently reduced to 2 or even 1 in. where slow-moving heavy-haul trains operate. A stable condition prevails when the resultant of car weight and centrifugal force falls within the middle third of the interrail space; otherwise, there is an unstable, possibly derailing, situation. Curvature thus has a limiting effect on speed. The maximum speed permitted by FRA rules, based on 3 in. of imbalance, is given by the equation V_max = [(e_a + 3)/(0.0007D)]^(1/2), where e_a is the actual superelevation in inches and D is the degree of curve. To avoid an abrupt change from level track to full superelevation and from zero to full lateral acceleration, and to provide a smooth transition from tangent to full curvature, an easement or transition curve is placed at each end of the simple curve. The cubic parabola is used, in which the degree of spiral increases directly with the distance along the spiral until the full degree of curve is attained at the spiral-to-curve point (SC). The rate of change in degrees per 100-ft station is k, so Ls, the spiral length in feet, is equal to 100D/k. Additional geometry is given in Fig. 15.

Chapter: Analytical, Approximate-Analytical and Numerical Methods in the Design of Energy Analyzers, 8 Correlation Method for Seeking the Conditions of Higher-Order Angular Focusing

The order of focusing is one of the most important characteristics in electron optics.
As applied to electron lenses, this parameter determines the value of spherical aberration, while for dispersive energy- and mass-analyzers it reflects the degree of a certain contradiction between the achievement of high transmittance and high resolution. The numerical method for seeking the conditions of higher-order angular focusing outlined in this section (see also Trubitsyn, 2000) can be considered a development of the method presented in Gorelik (1986) and Gorelik, Protopopov, and Trubitsyn (1988). The essence of the method is as follows. After a charged particle has passed through a dispersive or focusing field, the ordinate y(k, α) of the point of intersection of its planar trajectory with a straight line perpendicular to the Ox-axis and located at a distance x from the source of charged particles can be expanded into the Taylor series (72), where α is the angle of the particle's emission from the source, Δα = (α − α0), and k is the vector whose components represent the distance x, the particle energy E, and the dimensions and electrode potentials of an electron-optical system. In further calculations, k is omitted. In accordance with the Taylor series, the condition of Nth-order focusing for a given k can be expressed as the vanishing of the relevant partial derivatives with respect to the angle α (73). In accordance with the method proposed in Gorelik et al. (1988), the function y(α) can be represented as in Eq. (74), where R(α) = yc(α) + xc(α) × t(α), t(α) = tan(γ), and γ(α), xc(α), yc(α) are, respectively, the angle relative to the Ox-axis and the coordinates of the particle's escape from the area with a nonzero field gradient, corresponding to the moment of crossing the output electrode plane. In accordance with Eq. (74), the condition of first-order focusing y′(α) = 0 takes the form (75), or (76). The second derivative y″(α) vanishes if (77) holds and, with regard to the expression for x in Eq.
(76), the condition of second-order focusing can be written as (78). After that, the function (79) is introduced, and a search for the second-order focusing conditions is performed by solving equation (80) with respect to α = α0 and determining the points of focusing (81). When seeking the focusing conditions numerically, the required functions are calculated on a discrete set of input angles αi, i = 1, 2, …, L (20 to 50 trajectories are commonly sufficient) located in a previously selected range [αmin, αmax] and interpolated for α ≠ αi. The derivatives are calculated according to the formulas of numerical differentiation. It follows from Eqs. (74) and (76) that the condition of Nth-order focusing is fulfilled if (82), where n = 2, 3, …, N. However, numerical calculation of higher derivatives (n > 2) leads to large errors, so the function δn(α) cannot be calculated with the desired accuracy. Experience shows this method to be effective in seeking the conditions of second-order focusing, but hardly suitable for constructing focusing conditions of higher orders. This raises the problem of extending the method to the search for focusing conditions of order higher than 2. Eq. (82) says that in the case of Nth-order focusing, for 2 ≤ m ≤ N, (83) holds, and for m = n − 1, Eq. (82) may be rewritten as (84), where n = 2, 3, …, N. Eq. (79) determines the derivative F′(α) as in (85). It follows from Eq. (82) that F′(α0) = 0 in the case of third-order focusing. The penultimate equation gives the equality (86). According to Eq. (84), in the case of third-order focusing, (87) holds, while for fourth-order focusing, F″(α0) = 0 [see Eq. (82)]. Similar considerations show that the focusing condition of the (N + 2)th order appears as (88). Let us expand the function F(α) in the vicinity of α0 as in (89). Taking into consideration Eq. (88), in the case of focusing of order (N + 2), we obtain (90). Since the function F(α) is calculated from Eq.
(79) numerically, it represents a superposition of true values and a certain noise. Therefore, this function should be treated as a random function of a nonrandom argument α. Denote by S(α) the function determined by Eq. (90), as in (91). We also consider S(α0) a random function with a negligibly low noise level and the nonrandom multiplier 1/(N + 1)! F^(N+1)(α0). It is rather obvious that, in the case of (N + 2)th-order focusing, the functions F(α) and S(α) are correlated. To evaluate the degree of dependence between the cross sections of two random functions, the normalized cross-correlation function is commonly used (Bendat & Piersol, 1971). Since multiplication of a random function by nonrandom factors does not change the normalized cross-correlation function, it is enough to explore the correlation between F(α) and S(α) = (α − α0)^(N+1) in order to evaluate the correlation dependence between the functions F(α) and S(α). In the case of discrete change of the argument α = α1, α2, …, αL and zero shift between the functions Fi = F(αi) and Si = S(αi) = (αi − α0)^(N+1), i = 1, 2, …, L, the normalized cross-correlation function ρ0 is defined by Eq. (92). The higher the degree of correlation between the functions Fi and Si, the closer the function ρ0 is to unity. Here, we consider ρ0 as a function of the parameter N, which determines the power of the relevant polynomial. The approach that we suggest for seeking the conditions of higher-order focusing assumes the following. The angle α0 of second-order focusing is determined from Eq. (80). After that, the correlation between the function F(α) and the power function S(α) = (α − α0)^(m+1), with the parameter m taking sequentially the values m = 0, 1, …, M, is estimated using Eq. (92). Here, M is the upper limit of the search, which is selected in accordance with the particular problem under consideration. Then we determine the value N (0 ≤ N ≤ M) for which ρ0(N) = max{ρ0(0), ρ0(1), …, ρ0(M)}.
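The search procedure just described (fix α0, correlate F(α) against the normalized power functions (α − α0)^(m+1) for m = 0, 1, …, M, and pick the m that maximizes ρ0) can be sketched as follows. This is an illustrative reimplementation under assumed synthetic data, not the authors' code: F(α) is generated as a noisy (α − α0)² so the true focusing order (third) is known in advance.

```python
import math
import random

def rho0(F, S):
    """Normalized zero-shift cross-correlation of two equal-length samples."""
    L = len(F)
    mF, mS = sum(F) / L, sum(S) / L
    num = sum((f - mF) * (s - mS) for f, s in zip(F, S))
    den = math.sqrt(sum((f - mF) ** 2 for f in F) * sum((s - mS) ** 2 for s in S))
    return num / den

def detect_order(alphas, F, alpha0, M=10):
    """Return the m maximizing rho0 between F and (alpha - alpha0)^(m+1);
    the focusing order is then m + 2."""
    span = max(alphas) - min(alphas)
    best_m, best_r = 0, -2.0
    for m in range(M + 1):
        # Power function normalized by span**(m+1) to avoid large numbers.
        S = [((a - alpha0) / span) ** (m + 1) for a in alphas]
        r = rho0(F, S)
        if r > best_r:
            best_m, best_r = m, r
    return best_m

# Synthetic test: third-order focusing, so F behaves like (alpha - alpha0)^2.
random.seed(0)
a0 = 0.7
alphas = [0.6 + 0.01 * i for i in range(21)]  # 21 trajectories, a common count
F = [(a - a0) ** 2 + random.gauss(0, 1e-5) for a in alphas]
print(detect_order(alphas, F, a0) + 2)  # recovers the focusing order, 3
```

The quadratic candidate (m = 1) correlates almost perfectly with the noisy quadratic data, while the linear candidate is nearly uncorrelated over the symmetric angle range, so the detected order is N + 2 = 3.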
The proximity of ρ0(N) to 1 indicates the cross-correlation of F(α) and S(α) = (α − α0)^(N+1), i.e., the presence of focusing of the (N + 2)th order. In practice, it suffices to consider M = 10 to 20. To avoid operations with large numbers, the function S(α) = (α − α0)^(N+1) should be normalized by the factor (αmax − αmin)^(N+1). It follows from Eq. (90) that, at high orders of focusing, the function F(α) is close to zero over a fairly wide range of angles α. Therefore, noise in F(α) may result in the appearance of false roots when solving the equation F(α) = 0. In such cases, in order to calculate the α0 value more precisely, it is necessary to evaluate the maximum of the cross-correlation function ρ0(α0, N) in two variables: the central angle of focusing and the polynomial degree. Next, we present the results of testing this method by means of a model admitting an analytical solution. As has been established (Zashkvara, Korsunskii, & Kosmachev, 1966), cylindrical mirrors possess the property of second-order angular focusing near α0 ≈ 42°. The trajectory analysis of the cylindrical mirror conducted in accordance with the procedure described by Trubitsyn (1995b) indicates that second-order focusing at α0 ≈ 42° may also be achieved if the fringe field in a real device is corrected via three pairs of adjusting rings. Herewith, the corresponding value of the normalized cross-correlation function for F(α) and (α − α0)^(m+1) attains its maximum at m = 0 and equals ρ0 = 0.99. The energy analyzer with three cylindrical electrodes (Franzen & Taaffe, 1983) provides third-order focusing at the central angle α0 ≈ 40°. The correlation function ρ0 reaches its maximum value of 0.97 at m = 1 in the case of numerical integration of the charged-particle trajectories in an analytically defined electrostatic field. The focus point's coordinates coincide with the coordinates determined analytically by Franzen and Taaffe.
Trajectory analysis of the spherical mirror ensuring ideal angular focusing (Zashkvara, Yurchak, & Bylinkin, 1988) at α = 90°, as well as the estimate ρ0 > 0.95 that we obtained for the correlation function at the polynomial degree N + 1 = 25 (the maximal polynomial degree used in our calculations), confirms the efficiency of the technique described above in exploring electron-optical systems with a high level of focusing.

Chapter: Analytical, Approximate-Analytical and Numerical Methods in the Design of Energy Analyzers. Advances in Imaging and Electron Physics, 2015. Victor S. Gurov, ..., Andrey A. Trubitsyn.

9.2.2 Systems with Parallel Generatrices

The potential distribution in conical systems with separated vertices has no analytic expression and should be calculated numerically. The trajectory analysis of the systems under consideration (Figure 23), performed for the case of a point source located on the symmetry axis, indicates the possibility of implementing the mode of second-order focusing in a broad range of boundary and initial conditions. In Baranova, Dyakova, and Yavor (1988b), an approximate analysis of the focusing properties of conical systems with parallel generatrices has been conducted. The conditions of first-order focusing in the case of a particle source located at a considerable distance from the cones' vertices are investigated in detail. In particular, it is shown that, at E/V = const, the angle β0 of first-order focusing decreases as the half-angle θ0 increases. Figure 24 represents the dependence of the second-order focusing angle β0 on the relative distance Δz/g between the source and the inner cone's vertex, where g is the distance between the cones' vertices.
Reducing the distance Δz/g causes a rather sharp increase in the angle β0, which also occurs when the angular opening of the cones is reduced or the relative energy E/V is increased. These features may be useful in designing analyzers capable of providing the desired geometry in the experiment. When the source is moved away to a distance Δz much greater than g, the second-order focusing angle β0 tends toward 30°, which corresponds to the data given by Baranova et al. (1988b). The calculation shows that the relative linear dispersion depends very little on variations of the defining parameters and lies within the range of 1–1.5. The calculation results obtained for the idealized systems (extended cones) reveal the relationship between the parameters providing second-order focusing and may serve as a criterion for choosing the initial approximations in energy analyzer designs. As an example, Figure 25 represents a scheme of a three-stage analyzer design with a central angle that provides the mode of second-order focusing at E/V = 1. The first (labeled 1 in the figure) and last (3) stages of the device are conical, while the intermediate lens system (2) ensures focusing and transportation of electron beams between stages 1 and 3. According to the results of numerical calculations, the relative energy resolution of the device at the FWHM level of the instrumental function (Figure 26) is 0.8% at a luminosity of 3%. An energy analyzer of this type can be used in the creation of a specialized module for electron spectroscopy of the surface, with a built-in large source of probing (particularly X-ray) radiation.

Review article: Boundary node method based on parametric space for 2D elasticity. Engineering Analysis with Boundary Elements, 2013. J.H. Lv, Y. Miao, H.P. Zhu.

2 Parameter mapping

An approximation scheme based on the parametric coordinate is proposed to fully eliminate geometry error, as follows. There is no geometry error for straight lines; hence, only curved boundaries need to be considered. For convenience, an elliptic arc is taken as an example to show the process of parameter mapping. Suppose an elliptic arc at an arbitrary location in the global Cartesian coordinate system (x, y), as shown in Fig. 1. The first step is to choose a proper local coordinate system whose origin coincides with the center of the ellipse. The local coordinate system should coincide with the parametric coordinate system used in CAD packages. In the local coordinate system, an arbitrary point P on the elliptic arc is given by

(1) x̄ = a cos θ, ȳ = b sin θ,

where a and b are the axis lengths of the ellipse and θ is the deflection angle relative to the x̄-axis along the counter-clockwise direction, which is defined by

(2) θ = θ0 + s θc,

where s is the parameter coordinate, s ∈ [0, 1], and θ0 and θc are the initial deflection angle and the central angle, respectively. Substituting Eq. (2) into (1), one gets the parametric forms in the local coordinate system, i.e.,

(3) x̄ = a cos(θ0 + s θc), ȳ = b sin(θ0 + s θc).

The global coordinates of the point P specified in the local coordinate system are given by

(4) (x, y)ᵀ = r0 + Q (x̄, ȳ)ᵀ,

where r0 is the vector describing the position of the origin of the local axes and the geometric transformation matrix Q is given by

(5) Q = [e_x̄  e_ȳ],

where e_x̄ and e_ȳ are the orthogonal unit vectors specifying the directions of the x̄- and ȳ-axes, respectively. After the coordinate transformation, the coordinates of an arbitrary point on the elliptic arc can be obtained exactly, as can the outward normal and the Jacobian. No geometry error is introduced during the parameter mapping, and the subsequent integrations are carried out directly in the parametric space rather than over an approximate boundary, which distinguishes the method from the conventional BNM.
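The parameter mapping can be sketched in a few lines. The function below maps the parametric coordinate s of an elliptic arc to the global point, outward normal, and Jacobian; the names (a, b, theta0, thetac, r0, tilt) are illustrative assumptions, not the paper's original notation, and the rotation matrix stands in for the transformation matrix of Eq. (5).

```python
# Minimal sketch of the parameter mapping for an elliptic arc.
# Symbol names are assumptions chosen for this example.
import numpy as np

def arc_map(s, a, b, theta0, thetac, r0, tilt):
    """Map parametric coordinate s in [0, 1] to a point on the arc,
    its unit outward normal, and the Jacobian |dx/ds| -- all exact,
    with no geometry error."""
    theta = theta0 + s * thetac                                  # deflection angle
    local = np.array([a * np.cos(theta), b * np.sin(theta)])     # local coords
    Q = np.array([[np.cos(tilt), -np.sin(tilt)],                 # local axes in
                  [np.sin(tilt),  np.cos(tilt)]])                # the global frame
    x = r0 + Q @ local                                           # global coords
    # Tangent dx/ds in local coords, rotated to global; normal is perpendicular.
    dlocal = np.array([-a * np.sin(theta), b * np.cos(theta)]) * thetac
    t = Q @ dlocal
    jac = np.linalg.norm(t)                                      # Jacobian of the map
    n = np.array([t[1], -t[0]]) / jac                            # unit outward normal
    return x, n, jac

# Usage: midpoint of a quarter of the circle x^2 + y^2 = 4 at the origin.
x, n, jac = arc_map(0.5, 2.0, 2.0, 0.0, np.pi / 2, np.array([0.0, 0.0]), 0.0)
```

Because the mapping is evaluated analytically, the normal and Jacobian carry no discretization error, which is the point of integrating in parametric space.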
Chapter: Analytical, Approximate-Analytical and Numerical Methods in the Design of Energy Analyzers. Advances in Imaging and Electron Physics, 2015. Victor S. Gurov, ..., Andrey A. Trubitsyn.

9.1.2 Energy Analyzer with Angular Resolution

Simultaneous registration of the energy and angular characteristics of secondary electron flows excited by synchrotron radiation enables fundamental research in the physics and chemistry of the surface of solids, associated with the problem of adsorption and related phenomena. This includes studying the electronic structure of atoms and molecules adsorbed on the surface and exploring the changes in the energy structure of electronic states of the near-surface layers in solids, as well as acquiring information on the localization of atoms on the surface. Progress in this field of science largely depends on the engineering capabilities of the relevant scientific instruments, particularly on the parameters of energy analyzers. Kover et al. (1983) proposed an efficient scheme for a spectrometer with energy/angular resolution, the essence of which is shown in Figure 17. A point source located on the z-axis of a dispersion analyzer emits a fan-shaped flow of charged particles, the median plane of which is perpendicular to the z-axis (α = π/2). After passing through an electrostatic field, the flow forms an image that is displayed on the position-sensitive detector (PSD). Due to the dispersion properties of the analyzer, the coordinates of the image points depend on the flow energy and the polar angle φ of each electron's emission from the sample. The output aperture of the analyzer “separates” from the output flow the electrons belonging to a certain band of initial energies.
The instant analysis of the output flow relative to the polar angle, carried out with the use of the PSD, together with sequential rotation of the probed sample around the normal, allows for exploring the distribution of photoelectrons whose energies fit the energy settings, in the polar angle range of −π/2 to π/2 and in the azimuthal angle range of zero to 2π. This scheme has been employed by a number of researchers in designing analyzers with various configurations of electrodes (see, for example, Varga, Tokesi, & Rajta, 1995 and Leckey, 1987). However, the complexity of the proposed structures raises some doubt as to the possibility of their wide application in practice, so the problem of developing and designing an analyzer with a simple configuration of electrodes, capable of providing energy dispersion and angular focusing of the flows of secondary particles with an input angle α close to 90°, remains pressing. The simplest and easiest-to-manufacture axisymmetric geometric shapes are the cylinder and the ring. Figure 18 represents an electron-optical scheme of the box-type device, which is a combination of these shapes. This arrangement ensures fourth-order angular focusing with nonzero energy dispersion. Fourth-order focusing serves as a criterion of high quality in an energy analyzer, since in this case the contradiction between simultaneously reaching high levels of luminosity and resolution is considerably weakened compared to the case of lower-order focusing. In the design under consideration, the ratio E/V is 0.655, the ratio of the radii of the exterior and interior cylinders is rb/ra = 10/5 = 2, and the distance between the left edge of the analyzer and the source is 0.152rb. The central angle α0 of the second-order focusing and the geometric position of the image depend on the initial electron energy E and the source position on the z-axis. The analyzer's instrumental function is shown in Figure 19.
Energy resolution at the instrumental function's half-height is 0.1% for the range of input angles α from 88° to 92° and an output aperture radius of 0.002ra. The luminosity amounts to 3.5%, given that only the upper half of the instrument is operating. Another remarkable and unique feature of the proposed scheme is its planar focusing region, representing a set of second-order annular focuses that correspond to electrons with different initial energies (see Figure 20, displaying only the upper part of the axial cross section). This feature allows the analyzer to operate in the spectrographic mode with integral angular collection in the range of angles φ from −π/2 to π/2; here, a PSD with selectivity along the annular image's radius can be used as a multichannel registration system. Thus, this device allows implementation of two independent modes of registering the photoelectron spectra: (i) with angular resolution and (ii) with integral collection of photoelectrons having emission angles φ in the range from −π/2 to π/2 and α in the range of 90° ± 2°. Both modes may be implemented in a single device with independent operation of its lower and upper halves. Numerical calculations show that small deviations in the geometry of electrodes and initial conditions do not cause any significant changes in the device characteristics.

Chapter: Railway Engineering. Encyclopedia of Physical Science and Technology (Third Edition), 2003. William W. Hay.

II. Location

II.A Economic Location

A railroad location is defined by its horizontal and vertical position. The economics of location derive from the relations between distance (length of line), curvature, grades, and motive power characteristics.
Any of several economic tests may be used to evaluate these relations, but the rate-of-return method has the advantages of traditional use and of indicating whether an attractive rate of return can be anticipated. Thus, the basic location formula is r = (R − E)/C, where R is the expected revenue, E is the operating cost, C is the construction cost (investment), and r is the rate of return. The rate of return must be sufficient to cover investment interest and to yield a profit in the 20–30% range. For projects with a known and limited life, the annual cost or discounted present worth tests can be used and, for public projects, the benefit–cost ratio. Whatever test is used, the revenues, operating expenses, and construction costs of alternatives must be determined.

II.B Traffic (Revenue) Sources

A projected new line should be supported by a traffic and economic survey. Much construction has been for access to coal and iron deposits. The output of those mines establishes the traffic volume and revenue, subject to market fluctuations. For more general traffic, the number and character of traffic sources on line will be a factor. Adding circuity to a route to reach a traffic source adds to distance and to the operating costs, as discussed below. For an existing line the volume and revenue are known, but they can be affected by contemplated changes in service or routing, mergers, traffic agreements, rate structures, or changes in industrial patterns and geography.

II.C Effects of Distance

Distance is the length of line. In the mid-1980s a mile of track cost $100,000 to $200,000 or more. Right-of-way (land) and grading costs vary with the terrain. The number of sidings, bridges, tunnels, grade crossings, yards, and buildings varies somewhat with distance and introduces costs that can only be established for a particular location. The accepted unit of measure for distance is the cost per 1000 gross ton-miles (1000 GTM).
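The basic location formula r = (R − E)/C is simple enough to illustrate directly. The dollar figures below are invented for the example; only the formula and the 20–30% target range come from the text.

```python
# Hedged illustration of the basic location formula r = (R - E) / C.
# Revenue, operating-cost, and construction-cost figures are invented
# for the example, not taken from the text.
def rate_of_return(revenue, operating_cost, construction_cost):
    """Rate of return r on a proposed location."""
    return (revenue - operating_cost) / construction_cost

r = rate_of_return(revenue=4_500_000,
                   operating_cost=3_200_000,
                   construction_cost=5_000_000)
print(f"{r:.0%}")   # 26%: inside the 20-30% range cited in the text
```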
On tangent level track, the operating cost arises from overcoming train resistance, that is, the resistance encountered by the locomotive to the forward movement of the train and its contents. Distance costs also include fuel, wages of train and enginemen, and a proportionate cost of maintaining locomotives, rolling stock, track, signals, communications, and other facilities for traffic movement. A cost per 1000 GTM adequate for the purpose is obtained by dividing total operating expenses by total 1000 GTMs. These data can be obtained from an existing line's accounts or borrowed from a similarly placed carrier. A total for the United States can be calculated with published data from the Federal Railroad Administration (FRA), the Interstate Commerce Commission, or the Association of American Railroads (AAR). The cost per 1000 GTM for the United States in 1978 was $10.40. The gross costs of distance will be (gross tons/1000) × distance hauled × cost per 1000 GTM. The longer the length of line, the greater will be the effects of distance. The effects of curvature and of rise and fall can be included in distance by incorporating equivalent curve and rise-and-fall miles in the computation. This is explained later. In comparing costs for alternative lines of different lengths, it must be noted that not all costs are attributed to the additional miles of one line over the other. The total cost for one line is computed as a base, but only the costs of the additional (or lesser) miles in the other alternative(s) are considered. Not all costs that entered into the base line calculation will be incurred by the additional miles of the alternative. Yards, sidings, stations, bridges, and other structures occur only occasionally and, for only a mile or so of distance, may not occur at all. Thus, only a percentage of the base cost per mile would apply. 
That percentage is 28.9 for differentials of less than 1 mile, 34.8 for distances of 1–10 miles, and 35.0 for distances over 10 miles. If the differential distance affects the wages of train and enginemen, those percentages become, respectively, 34.1, 43.9, and 50.5.

II.D Train Resistance

Train resistance is the resistance to forward movement of a train traveling at constant speed on tangent level track in still air. The locomotive must exert a tractive force or effort sufficient to overcome train and other resistances (defined later). The greater the train resistance, the more energy must be expended to move the train, giving a direct relation between train resistance and operating costs. The industry has generally placed reliance on an equation developed by W. J. Davis, Jr., which, with appropriate coefficients derived from field tests, provides flexibility in accounting for speed, car weight, and car dimensions. Based primarily on earlier experiments by Dr. Edward C. Schmidt, it has had an accuracy suitable for most railroad purposes up to 50 miles per hour (mph). The Davis equation has the form

Ru = 1.3 + 29/w + bV + CAV²/(wn),

where Ru is the unit resistance in pounds per ton of train or car weight; n is the number of car axles, usually four; w is the weight in tons per axle; V is the speed in miles per hour; A is the cross-sectional area of the car or locomotive in square feet (80–90 for cars, 80–120 for locomotives); b is a coefficient reflecting flange friction and shock, sway, and concussion, 0.045 for freight cars, 0.03 for passenger cars and locomotives; and C is the drag coefficient, which varies from 0.0005 for freight cars to 0.0024 for locomotives (0.0017 if streamlined) and 0.00034 for passenger cars. The value of C depends on the shape of the front end and the length and smoothness of the vehicle's exterior; A and C are sometimes combined in one coefficient. Total car and train resistance is the product of the unit resistance and the weight of the car or train.
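The Davis formula, in the commonly published form with the coefficients listed above, can be sketched as a small calculator. The example car (4 axles, 25 tons/axle, 40 mph, 85 sq ft frontal area) is hypothetical; verify coefficients against the AREA manual before any design use.

```python
# Sketch of the original Davis unit-resistance formula,
# Ru = 1.3 + 29/w + b*V + C*A*V^2/(w*n), using the text's coefficients.
# The example car is hypothetical.
def davis_unit_resistance(w, n, V, A, b, C):
    """Unit train resistance Ru in lb/ton.
    w: weight per axle (tons); n: axles; V: speed (mph);
    A: frontal area (sq ft); b: flange/shock coefficient; C: drag coefficient."""
    return 1.3 + 29.0 / w + b * V + C * A * V**2 / (w * n)

# A loaded freight car: 4 axles, 25 tons/axle, 40 mph, 85 sq ft frontal area.
Ru = davis_unit_resistance(w=25, n=4, V=40, A=85, b=0.045, C=0.0005)
total = Ru * 25 * 4      # total car resistance = unit resistance x car weight (lb)
print(round(Ru, 2), round(total, 1))   # 4.94 lb/ton -- within the 4-10 lb/ton range cited below
```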
It is significant that unit resistance decreases with increased car weight. The Davis equation is included in the American Railway Engineering Association's “Manual for Railway Engineering (Fixed Properties).” To account for the characteristics of more recent car types and designs, the AREA manual also shows the equation

Ru = 0.6 + 20/w + 0.01V + KV²/(wn),

where Ru, n, w, and V are as before; W = wn is the total weight in tons; and K is the air resistance or drag coefficient, which is 0.07 for conventional equipment, 0.0935 for container cars, and 0.16 for trailers on flatcars. An attempt to make the Davis equation even broader in application is the adjusted Davis equation found in Hay's “Railroad Engineering” in the form Radj = K′RD, where RD is the regular Davis value and K′ is an adjustment factor, which is 1.00 for pre-1950 equipment, 0.85 for conventional post-1950 equipment, 0.95 for loaded container cars, 1.05 for trailers on flatcars, 1.20 for empty covered automobile rack cars, 1.30 for loaded auto rack cars, and 1.90 for empty auto rack cars. An average value for train resistance is 4–10 lb/ton; 6 lb/ton is a frequently used value. Eight pounds per ton has been used in Section II.F. For streamlined high-speed equipment, primarily passenger cars, a revision of the air resistance term in the Davis equation has been developed by Dr. A. I. Totten. Values based on front-end configuration and car length and smoothness are given in Hay's “Railroad Engineering.” Studies are being conducted by the AAR, the University of Illinois, individual railroads, and others to update the Davis equation in terms of current track and equipment. Track stiffness is found to decrease resistance, but rail surface irregularities, such as corrugations, cause an increase. Much importance is being attached to air resistance.

II.E Other Resistance

Resistance is also encountered from several other sources.
II.E.1 Locomotive Resistance

In addition to overcoming the resistance of a train, a locomotive must overcome its own resistance. An electric or diesel-electric locomotive can be considered as one or more additional cars and the Davis equation applied with appropriate coefficients for size, weight, and axle arrangement. For steam locomotives, the total resistance in pounds is usually taken as 20 times the weight on drivers in tons.

II.E.2 Starting Resistance

Additional tractive force is needed at starting to overcome inertia and warm the car bearings. For cars with roller bearings, the resistance is roughly equivalent to the Davis value at V = 0, but in cold weather or with friction bearings a higher value is in order, usually 18 lb/ton. When a train has been standing for a long period in subzero temperatures, a resistance of 30–50 lb/ton can be encountered. On such occasions, train tonnages may be reduced.

II.E.3 Wind Resistance

The Davis equation applies to still-air situations. A head wind velocity can be added to the velocity in the air resistance term or subtracted for a following or tail wind. In actual operation, both train and wind change direction. Side winds tend to force the wheel flanges away from the wind direction against the opposite rail. The worst condition is an opposing quartering wind, which exerts both a retarding and a flange pressure force. The AREA presents a state-of-the-art formula in which L = 8.25a sin β KAV², where L is the wind resistance in pounds; β is the angle between the train and wind direction; A is the projected frontal area in square feet; V is the train velocity in miles per hour; and a = Ww/WL, with Ww being the wind velocity and WL being the train velocity in miles per hour. The usual practice in areas of prevailing winds is to reduce the train tonnage. Curvature, grades, and acceleration have enough significance to warrant separate comment.
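The running and starting cases of Section II.E can be combined in a small sketch. The 6 lb/ton running value and the 18 lb/ton friction-bearing starting value come from the text; the 8000-ton train is a hypothetical example.

```python
# Sketch combining running vs. starting resistance on tangent level
# track. Unit values (6 lb/ton running, 18 lb/ton cold-weather start)
# follow the text; the specific train is hypothetical.
def tractive_force_needed(tons, starting=False,
                          unit_running=6.0, unit_starting=18.0):
    """Pounds of tractive force to move (or start) a train
    on tangent level track."""
    unit = unit_starting if starting else unit_running
    return tons * unit

running = tractive_force_needed(8000)                    # 48,000 lb at speed
start_cold = tractive_force_needed(8000, starting=True)  # 144,000 lb to start
```

The factor-of-three gap between starting and running force is why tonnage is sometimes reduced after a long subzero standstill.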
II.F Curves and Curve Resistance

Trains encounter additional resistance in going around curves. This is due primarily to flange friction against the gauge corner of the outside rail by the outside lead wheel of each truck and to slippage across the head of the rail by all wheels. The inner wheels may also press against the inside rail head. Values of 0.50–1.00 lb/ton per degree of curve have been used. The AREA recommends 0.80 lb. Thus, an 8000-ton train on a 2° curve would experience 8000 × 0.80 × 2 or 12,800 lb of curve resistance in addition to the train resistance previously described. The effects of curve lubrication are discussed in Section V.E.1. For a given divergence between two tangents (see curve geometry in Section VII.J), the cost varies inversely as the length of curve and directly with the central angle between the tangents. Curve resistance acts on the entire train and, like train resistance, is evaluated in cost per 1000 GTM. An average value for train resistance of 8 lb/ton is equivalent to the resistance of a 10° curve (10 × 0.8 = 8.0). The equivalent curve-mile would be a 10° curve 1 mile long, having a central angle of 528°. The summation of the degrees of central angle over a line segment divided by 528 gives the equivalent curve-miles for that segment. But only the portion of costs that is specifically due to curvature should be considered. The costs of yard operation, interlocking plants, or traffic solicitation have little to do with curvature. By evaluating curve-related costs alone, it is found that only 27.24% of total costs per 1000 GTM can be attributed to curve resistance. Curve costs for a route with 2600° of central angle would be (2600/528) × cost per 1000 GTM × 0.2724. As shown in the next section, curve resistance can be expressed in terms of an equivalent gradient and combined with grade resistance in one quantity.
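The curve-cost arithmetic above can be worked through directly. The 0.80 lb/ton per degree, the 528° per equivalent curve-mile, the 27.24% attribution, and the $10.40 per 1000 GTM figure all come from the text; the 2600° route is the text's own example.

```python
# Worked version of the curve computations in Section II.F.
def curve_resistance(tons, degrees, lb_per_ton_deg=0.80):
    """Curve resistance in pounds for a train on a curve of the given degree."""
    return tons * lb_per_ton_deg * degrees

def equivalent_curve_miles(total_central_angle_deg):
    """Equivalent curve-miles: summed central angle divided by 528 degrees."""
    return total_central_angle_deg / 528.0

def curve_cost(total_central_angle_deg, cost_per_1000gtm, share=0.2724):
    """Curve-related cost: only 27.24% of 1000 GTM cost is curve-attributable."""
    return equivalent_curve_miles(total_central_angle_deg) * cost_per_1000gtm * share

assert curve_resistance(8000, 2) == 12800.0        # the text's 2-degree example
cost = curve_cost(2600, 10.40)                     # 2600 deg at the 1978 cost figure
```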
II.G Grade Resistance

By using only the component of the car's weight that tends to move it down an incline, grade resistance is found to have a value of 20 lb/ton per percent of grade (Fig. 1). An 8000-ton train on a 0.40% grade would experience a total grade resistance of 8000 × 20 × 0.40 or 64,000 lb. This must be added to all other resistance when determining the total amount to be overcome by the locomotive. The effect on train size and operation is discussed in Section III.F.

II.H Rise and Fall

The ascents and descents in a rail line are sources of resistance and cost. If an average value for train resistance of 10 lb/ton is assumed, it becomes the same as the resistance offered by a 0.50% grade. Such a grade rises 26.4 ft/mile, so 26.4 ft of rise and a corresponding fall represent one rise-and-fall mile. Rise and fall, as indicated in Fig. 2, can be taken as half the sum of rise and fall in both directions. The additional rise-and-fall equivalent miles can be added to the distance in determining total 1000 GTM. But not all 1000 GTM costs are attributable to rise and fall. The percentage depends on the severity of the grades, which are customarily grouped into three categories. Class A grades require no change in locomotive throttle setting or brake application and can be ignored as a cost item. Class B grades require some slight throttle increase in going over a crest but no brake application. The costs of such grades are 2.63% of total 1000 GTM costs. Class C grades, requiring the use of throttle and brakes, involve 8.10% of 1000 GTM costs plus a 3.00% factor if the rise and fall is on a ruling or helper grade (Section III.F).

II.I Undulating Profiles

Rise and fall may appear in a profile as many closely spaced sags and crests. Such an undulating profile poses problems in train operation. When a long train is stretched over a series of sags and crests, part of the train is moving downgrade and part is moving upgrade.
In a sag, the rear portion tends to run into the head portion. A sudden change in throttle position or a quick brake application can cause a break in two or even a derailment. The locating engineer should strive to avoid such situations. Operating personnel must learn to adapt to those that remain in the profile.

II.J Compensation for Curvature

When a curve coincides with a grade, the combined resistance can be sufficient to reduce train speed or cause the train to stall. The resistive effects of curvature can be combined with grade resistance by developing an equivalent grade for curve resistance. If the 0.80 lb/ton per degree is divided by the resistance of a 1% grade, that is, 20 lb/ton, the equivalent resistive grade of a 1° curve is 0.04%. The combined resistance of a 1.20% grade on a 2° curve would be 2 × 0.04 + 1.20 or 1.28%. To maintain an effective 1.20% grade throughout, the grade can be compensated for curvature; that is, the actual profile grade is reduced by 0.08% to a 1.12% grade throughout the length of the curve, then continues at the initial 1.20% rate. There will be some increase in the length of the grade to reach a given summit. The AREA has recommended a full 0.04% per degree compensation, but some railroads have reduced the gradient by only 0.035 or even 0.030% per degree when the curve is so short that the entire train is not on the curve at the same time.

Chapter: The Euclidean Heritage. Geometry and Its Applications (Second Edition), 2006. Walter Meyer.

The Theory of Parallel Lines in a Plane

Lines in a plane that do not meet are called parallel. Are there any such things? It is one thing to make a definition, for example, that a hoofed animal with one twisting horn is a unicorn, but it is another thing to find an example of what you have defined. Let's put the question a little more precisely.
Suppose we have a line L and a point P off of L. Many lines pass through P and lie in the plane of P and L. How many never cross L? Our first approach to this problem is to show how we can construct one such parallel to L through P. This means the answer to the question cannot be zero as long as we subscribe to our axiom set. Our construction is based on dropping and erecting perpendiculars, as described in the previous section.

Construction of a Parallel to Line L through Point P

1. Drop a perpendicular from P to L. Call this perpendicular line M.

2. Erect a perpendicular to M at P. Call this line N.

THEOREM 2.9 Given a line L and point P off it, if we construct line N as just described, then N is parallel to L (Figure 2.23).

PROOF We will give an indirect proof, in which we assume that the lines are not parallel and show that this leads to a conclusion that contradicts a previous theorem. This contradiction forces us to conclude that the lines are parallel. Suppose the lines cross at a point R. Then triangle PQR (where Q is the point where M meets L) has an exterior angle α measuring 90°, and one of its remote interior angles, β, also measures 90°. This contradicts the exterior angle inequality (Theorem 2.2). (Figure 2.23 shows a difficulty in illustrating indirect proofs. We made the assumption that the lines cross, later shown to be false, but we want to illustrate the assumed crossing in order to help visualize the proof. The picture we draw is bound to be unbelievable.) So now we know that there is at least one parallel to L through P. Could there be more than one? Our intuition tells us that the answer is no. Consequently, it is quite natural to try to prove this. Euclid and his predecessors, working more than 2000 years ago, were unable to find such a proof. As a result, Euclid assumed as an axiom what he couldn't prove. His axiom about parallels was equivalent to our more modern version.
AXIOM 9: Euclid's Parallel Axiom Given a point P off a line L, there is at most one line through P parallel to L.

Geometers who followed in the footsteps of Euclid were never happy assuming this assertion, because they hoped it would be possible to prove it as a theorem from the other axioms. One reason we might be uneasy with this axiom is that we prefer not to assume something with which we have little experience. For example, in Figure 2.23 imagine placing a line N′ through P which makes an angle of 89.999999999999999° with M. This would be barely distinguishable from the perpendicular line N. Even though you might believe that N′ will eventually cross line L, to see if it really does you might have to follow the constructed line for miles. This is not the kind of experiment we do very often. Even if you did carry out such an experiment, you might get it wrong; you could draw the line (or stretch the string or follow the light ray) inaccurately and your answer would be useless. Because geometers were uneasy with assuming Euclid's parallel axiom and were by and large unable to substantiate it with accurate experiments, they tried very hard to prove the assertion from the other axioms so that its status could be changed from axiom to theorem. This endeavor went on for more than 2000 years — surely one of the longest-running stories in mathematics and science. Finally, in the nineteenth century, geometers were able to prove that no such proof could ever be devised. (A more detailed account of this fascinating story is found in the next chapter and in books on the foundations of geometry.1) Following Euclid, in this section and the next we assume Euclid's parallel axiom. We turn now to the question of how we can determine whether two lines are parallel. Given two lines L and M (Figure 2.24), we call a third line a transversal if it cuts both lines. At each crossing point we have four angles. Two of them are said to be interior since they lie between the lines L and M.
In Figure 2.24, α3 and α4 are interior at the crossing with M and β1 and β2 are interior at the crossing with L. The other angles are not interior. Interior angles, one at L and the other at M, are called alternate interior angles if they lie on opposite (alternate) sides of the transversal. In Figure 2.24, α3 and β2 have the alternate interior relationship. Likewise α4 and β1 are alternate interior angles. Two angles made by a transversal across lines L and M are said to be corresponding angles if (a) one is made by L and the transversal and the other by M and the transversal, (b) the angles are on the same side of the transversal, and (c) one is between L and M and the other is not. For example, α1 and β1 are corresponding. We have labeled the angles in Figure 2.24 so that angles with the same subscript are corresponding.

THEOREM 2.10 (a) If two lines L and M (Figure 2.25) are parallel and cut by a transversal, then alternate interior angles (for example, α and β) are congruent. (b) If two lines are cut by a transversal and alternate interior angles are congruent, then the lines are parallel.

PROOF First we prove part (a), where the lines are given to be parallel. Let C be the midpoint of the transversal segment AE, where A is its crossing with L and E its crossing with M. From C, drop a perpendicular to line L, meeting it at B. Now extend this perpendicular back until it reaches M at D. (Can you explain why there will be a crossing of the perpendicular with M?) What angle does BD make with M? Although we have labeled this a right angle in Figure 2.25, this needs proof. We know by Theorem 2.9 that if we were to draw a perpendicular to BD at D we would obtain a parallel to L. But this has to be line M, since M is parallel to L and Euclid's parallel axiom says there cannot be two parallels to L through D. Because M is perpendicular to BD, ∠BDE is a right angle. We can now assert that triangles ABC and EDC are congruent by AAS.
(The right angles at D and B are congruent, the vertical angles at C are congruent, and CE = CA by construction.) Thus, α = β, since these are corresponding parts of the congruent triangles. This concludes the proof of part (a). We leave the proof of part (b) to the reader as an exercise. Notice (Figure 2.24) that when two angles are corresponding, then an angle that is vertical to the one which is not interior is alternate interior to the other. This suggests that Theorem 2.10 has consequences for corresponding angles. Here are some questions you should think about: If two lines are parallel and cut by a transversal, what can you say about corresponding angles? If two lines are cut by a transversal, what information about a pair of corresponding angles would allow you to conclude that the lines are parallel? In many examples and applications of geometry, we encounter four-sided figures where the opposite sides are parallel. Such figures are called parallelograms. The most important theorem about parallelograms is that their "opposite parts" are congruent. Here is a more precise statement and the proof. THEOREM 2.11 If ABCD is a parallelogram (Figure 2.26), then: (a) Opposite sides are congruent. (b) Opposite angles are congruent. PROOF For simplicity we will just show m∠A = m∠C. (The same type of proof can be adapted to show m∠B = m∠D.) Likewise, we only show BC = AD. The standard technique for proving parts congruent in geometric figures is to show that they are corresponding parts of congruent triangles. The parallelogram has no triangles to start with, so we introduce some by drawing a diagonal BD. We show that triangles ABD and CDB are congruent by ASA: m∠1 = m∠2 since these are alternate interior angles of the parallel lines AB and DC; m∠3 = m∠4 since these are alternate interior angles of the parallel lines AD and BC; and, finally, BD is common to both triangles. Now ASA gives us the congruence we want.
∠A and ∠C are corresponding parts and therefore have the same measure. Likewise, AD and CB are corresponding parts, so AD = BC. THEOREM 2.12 A quadrilateral ABCD (Figure 2.26) where 1. no side intersects another side except at a vertex and 2. opposite sides are congruent will have its opposite sides parallel (i.e., it will be a parallelogram). PROOF Triangle ABD is congruent to triangle CDB by SSS (with the correspondence A → C, B → D, D → B), so m∠1 = m∠2. These are alternate interior angles of lines AB and DC, cut by transversal BD. By part (b) of Theorem 2.10, the congruence of these angles implies that AB is parallel to DC. In a similar way we show m∠3 = m∠4, so AD is parallel to BC. There is something about this proof we have left as a challenge for you to think about. Why do we need condition 1 of the hypothesis? And what does it mean for a quadrilateral to have a side that intersects another side? (Draw an example, or look at Exercise 17 of Section 1.3.) We do not use condition 1 in the proof, so why not delete it? Or could it be that the theorem does not hold for self-intersecting quadrilaterals? If so, we should have used condition 1, and our proof is not rigorous. We come, finally, to one of the most central theorems of Euclidean geometry, one that says nothing at all about parallelism in its statement but that cannot be proved without somehow using Euclid's parallel axiom. In this theorem, the term angle sum means the sum of the measures of the three angles of the triangle. THEOREM 2.13 The angle sum of any triangle ABC is 180° (Figure 2.27). PROOF Place a line L through B, parallel to AC. The idea of the proof is to show that adding the measures of the angles of the triangle is the same as adding the measures of the angles at B: m∠1 + m∠2 + m∠3. By Theorem 2.10, m∠1 = m∠A and m∠3 = m∠C, while ∠2 is just ∠B. Thus m∠A + m∠B + m∠C = m∠1 + m∠2 + m∠3 = 180°. We leave the proof of the following useful theorem as an exercise. It follows partly from the previous theorem and by consideration of isosceles triangles.
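Theorem 2.13 lends itself to a quick numerical sanity check: for any non-degenerate triangle, the three angle measures computed from coordinates should total 180°. The following sketch (not part of the text; the coordinates are arbitrary) does this with the dot-product formula for the angle at a vertex.

```python
import math
import random

def angle_deg(p, q, r):
    """Measure, in degrees, of the angle at vertex q of triangle pqr."""
    ax, ay = p[0] - q[0], p[1] - q[1]
    bx, by = r[0] - q[0], r[1] - q[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

random.seed(1)  # any seed; a random triple is non-degenerate in practice
A, B, C = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
angle_sum = angle_deg(B, A, C) + angle_deg(A, B, C) + angle_deg(A, C, B)
print(round(angle_sum, 6))  # 180.0, up to round-off, for any non-degenerate triangle
```

This is only a check of the Euclidean formula, of course, not a proof; the theorem itself depends on the parallel axiom as the text explains.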
THEOREM 2.14 (a) If P, Q, and R lie on the circumference of a circle and the center C lies on PR (in this case we say ∠PQR is inscribed in a semicircle), then m∠PQR = 90°. (b) If P and R are fixed points, then the locus of all points Q where m∠PQR = 90° consists of all points on the circle whose diameter is PR, but not including P and R. APPLICATION: The Circumference of the Earth Eratosthenes (280–195 B.C.), one of the leading mathematicians of antiquity, was able to apply Theorem 2.10 to estimate the circumference of the earth. Figure 2.28 shows his method. This figure shows a circular cross section of the earth through a longitude circle that happens to contain two cities, Alexandria (A) and Syene (S). Eratosthenes began with the fact that the distance from Alexandria to Syene is 5000 stadia, which is about 500 miles. He allegedly made this measurement of the circular arc AS by following a camel caravan from one city to the other and counting the steps taken by one particular camel. Then he needed to know what fraction of the whole way around the earth that 500 miles is. As we now describe, Eratosthenes found that arc AS is 1/50 of the whole circumference, so he got his answer by multiplying 500 miles by 50 to obtain 25,000 miles, remarkably close to the modern estimate. That fraction Eratosthenes needed is related to the measure θ of the central angle as follows: (2.5) (arc AS)/(circumference) = θ/360. Then it remained to estimate θ. Of course, this couldn't be done directly since we cannot get to the center of the earth to make the measurement. To work around this, Eratosthenes started with the fact that on a certain day of the year, at a certain time of day, the sun shone directly down a vertical well in Syene. This was determined by noticing that no part of the water at the bottom was in shadow. Since the well was vertical, it also meant that the beam of light illuminating the bottom would, if extended, pass through C, the center of the earth. At Alexandria, there was a vertical pole.
Because it was vertical, if we could somehow extend its line, this line would also pass through the center of the earth. Eratosthenes then made a few approximations. He thought of the sun as a point and the light beams from the sun to the well and the pole as lines. Because these lines cross at the sun, they are not parallel, but Eratosthenes wanted to think of them as parallel anyhow. The justification for this is that the sun is so far away that the directions of these lines are almost the same. On the assumption of parallelism, we have that θ = ϕ [part (a) of Theorem 2.10]. After making this assumption, all Eratosthenes had to do was to measure ϕ at the pole in Alexandria — an easy task that yielded the value 7.2° — and plug that value in as θ in the fraction θ/360 he is looking for. Thus, he obtained (2.6) θ/360 = 7.2/360 = 1/50, and therefore (2.7) circumference ≈ 50 × 500 miles = 25,000 miles. This is amazingly close to the modern estimate of 24,860 miles. [Book, 2006: Geometry and Its Applications (Second Edition), Walter Meyer]

Chapter: Effective conductivity of fibrous composites with cracks on interface. 2020, Applied Analysis of Composite Media, Piotr Drygaś, ... Wojciech Nawalaniec. Abstract: Consider a unidirectional fiber-reinforced composite with cracks/voids on the interface of components. The location and size of debonding regions on the boundary of fibers have a strong effect on the transverse effective conductivity. The previous mathematical models were based on the investigations of local fields. The effective properties were estimated by smearing out the perfect and imperfect contact on the boundary, i.e., by considering a uniform imperfect boundary condition. We propose a mathematical model where perfect and imperfect contact regions are randomly distributed on the interface. First, such a random distribution is arbitrarily fixed in a section perpendicular to the fibers, with one crack per fiber. Next, it is assumed that this distribution is statistically homogeneous along the fibers.
An analytical formula for the transverse effective conductivity is derived when the representative section contains dilute non-overlapping circular inclusions. It is based on the exact solution to the mixed R-linear problem for a disk. The explicit influence of cracks on the conductivity is expressed in terms of the central angle which spans the cracks. Using the obtained formula we introduce debonding coefficients which characterize the macroscopic flux in composites. [Book, 2020: Applied Analysis of Composite Media, Piotr Drygaś, ... Wojciech Nawalaniec]

Chapter: Particles at Fluids Interfaces and Membranes. 2001, Studies in Interface Science, Peter A. Kralchevsky, Kuniaki Nagayama. 9.1 ORIGIN OF THE "CAPILLARY CHARGE" IN THE CASE OF SPHERICAL INTERFACE Spherical interfaces and membranes can be observed frequently in nature, especially in various emulsion and biological systems [1-3]. As a rule, the droplets in an emulsion are polydisperse in size, and consequently, the liquid films intervening between two attached emulsion drops have, in general, a spherical shape. It is worthwhile noting that some emulsions exist in the form of globular liquid films, which can be of W1/O/W2 or O1/W/O2 type (O = oil, W = water), see e.g. Ref. If small colloidal particles are bound to such spherical interfaces (thin films, liposomes, membranes, etc.) they may experience the action of lateral capillary forces. The spherical geometry provides some specific conditions, which differ from those with planar interfaces or plane-parallel thin films. For example, in the case of a closed spherical thin film it is important that the volume of the liquid layer is finite.
In addition, the capillary force between two diametrically opposed particles, confined in a spherical film, is zero irrespective of the range of the interaction determined by the characteristic capillary length q⁻¹. As already discussed, the particles attached to an interface (thin film, membrane) interact through the overlap of the perturbations in the interfacial shape created by them. This is true also when the non-disturbed interface is spherical; in this case any deviation from the spherical shape has to be considered as an interfacial perturbation, which gives rise to the particle "capillary charge", see Section 7.1.3 above. The effect of gravity is negligible in the case of spherical interfaces (otherwise the latter would be deformed), and consequently, the particle weight is not expected to cause any significant interfacial deformation. Then a question arises: what can be the origin of the interfacial perturbations in this case? Let us consider an example depicted in Fig. 9.1a: a solid spherical particle attached to the surface of a spherical emulsion drop of radius R0. Such a configuration is typical for the Pickering emulsions, which are stabilized by the adsorption of solid particles and have considerable practical importance [6-10]. The depth of immersion of the particle into the drop phase, and the radius of the three-phase contact line, rc, are determined by the value of the contact angle α (Fig. 9.1a). The pressure within the drop, PI, is larger than the outside pressure PII because of the curvature of the drop surface. The force pushing the particle outside the drop (along the z-axis) is (9.1) Fout = πrc²PI; on the other hand, the force pushing the particle inside the drop is due to the outer pressure and the drop surface tension σ resolved along the z-axis (Fig. 9.1a): (9.2) Fin = πrc²PII + 2πrcσ sin θc. Here θc is a central angle: sin θc = rc/R0. At equilibrium one must have Fin = Fout; then combining Eqs.
(9.1) and (9.2) one obtains the Laplace equation PI – PII = 2σ/R0, which is identically satisfied for a spherical interface. Thus we arrive at the conclusion that the force balance Fin = Fout is fulfilled for a spherical interface. The same conclusion can be reached in a different way. The configuration of a spherical particle attached to an emulsion drop must have rotational symmetry. It is known that for an axisymmetric surface intersecting the axis of revolution the Laplace equation, Eq. (2.24), has a single solution: the sphere (gravity deformation negligible). If a second particle is attached to the drop surface it can acquire the same configuration as that in Fig. 9.1a; only the radius of the spherical surface will slightly increase due to the volume of the drop phase displaced by the second particle. In other words, the force balance Fin = Fout is fulfilled for each separate particle and the drop surface remains spherical. Moreover, if there is no deviation from the spherical shape, then a lateral capillary force between the particles cannot appear. Hence, if aggregation of particles attached to the surface of such an emulsion drop is observed, it should be attributed to another kind of force. After the last 'negative' example, let us consider another example, in which both deformation and lateral capillary forces do appear. Pouligny and co-authors [12-14] have studied the sequence of phenomena which occur when a solid latex microsphere is brought in contact with an isolated giant spherical phospholipid vesicle. They observed a spontaneous attachment (adhesion) of latex particles to the vesicle, which is accompanied by complete or partial wetting (wrapping) of the particle by lipid bilayer(s). In fact, the membrane of such a vesicle can be composed of two or more lipid bilayers. As an example, in Fig. 9.1b we present a configuration of a membrane consisting of two lipid bilayers; the particle is captured between the two bilayers.
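The conclusion that the force balance Fin = Fout reduces to the Laplace relation PI − PII = 2σ/R0 can be illustrated numerically with the standard cap geometry: the net pressure force acting over the contact disc of radius rc against the vertical component of the surface-tension pull along the contact line. The following is only a sketch; the numerical values are hypothetical and not taken from the text.

```python
import math

# Hypothetical values (not from the text): drop radius R0, surface tension
# sigma, and contact-line radius rc.
R0 = 1.0e-3      # drop radius [m]
sigma = 0.03     # surface tension [N/m]
rc = 2.0e-4      # contact-line radius [m]

theta_c = math.asin(rc / R0)        # central angle, with sin(theta_c) = rc / R0
dP = 2.0 * sigma / R0               # Laplace pressure jump, P_I - P_II

# Net pressure force over the contact disc vs. the z-component of the
# surface-tension pull along the contact line:
F_pressure = math.pi * rc**2 * dP
F_tension = 2.0 * math.pi * rc * sigma * math.sin(theta_c)

# The two agree identically whenever dP = 2*sigma/R0, i.e. the balance
# Fin = Fout closes for the undeformed spherical drop.
print(math.isclose(F_pressure, F_tension, rel_tol=1e-12))  # True
```

Algebraically the check is trivial (both forces equal 2πrc²σ/R0), which is exactly the point made in the text: a spherical drop with an attached particle is already in equilibrium, so no interfacial deformation, and hence no lateral capillary force, arises from this configuration alone.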
The observations show that two such captured particles experience a long-range attractive force. There are experimental indications that in a vicinity of the particle the two lipid bilayers are detached (Fig. 9.1b) and a gap filled with water is formed between them. The latter configuration resembles that depicted in Fig. 7.1f, and consequently, the observed long-range attraction could be attributed to the capillary immersion force. Similar configurations can appear also around particles which are confined in the spherical film intervening between two attached emulsion droplets (Fig. 9.2), or in the globular emulsion films like those studied in Ref. In these cases the interfacial deformations are related to the confinement of the particles within the film. Looking for an example in biology, we could note that the cytoskeleton of a living cell is a framework composed of interconnected microtubules and filaments, which resembles a "tensegrity" architectural system composed of long struts joined with cables, see Refs. [16,17]. Moreover, inside the cell a gossamer network of contractile microfilaments pulls the cell's membrane toward the nucleus in the core. At the points where the microfilaments are attached to the membrane, concave "dimples" will be formed, see Fig. 9.3a. On the other hand, at the points where microtubules (the "struts") touch the membrane, the latter will acquire a "pimple"-like shape, see Fig. 9.3b. Being deformations in the cell membrane, these "dimples" and "pimples" will experience lateral capillary forces, both attractive and repulsive, which can be employed to create a more adequate mechanical model of a living cell and, hopefully, to explain the regular "geodesic forms" which appear in some biological structures. Another example is a lipid bilayer (vesicle) containing incorporated membrane proteins, around which some local variation in the bilayer thickness can be created.
The latter is due to the mismatch in the thickness of the hydrophobic zones of the protein and the bilayer. The overlap of such deformations can give rise to a membrane-mediated protein-protein interaction. A peculiarity of this system, which is considered in Chapter 10 below, is that the hydrocarbon core of the lipid bilayer exhibits some elastic properties and cannot be treated as a simple fluid [19,20]. Coming back to simpler systems, in which lateral capillary forces can be operative, we should mention a configuration of two particles (Fig. 9.4b) which are confined in a liquid film wetting a bigger spherical solid particle. The problem of the capillary forces experienced by two such particles has been solved in Ref. The developed theoretical approach, which is applicable (with possible modifications) also to the other systems mentioned above, is described in the rest of the present chapter.
https://artofproblemsolving.com/wiki/index.php/Inequality?srsltid=AfmBOorB01g_q-xR-4wosHu3lZ84G6FC4k9wj_kttpbhxDmQ0eTqj3Ak
Inequality - AoPS Wiki

Inequality

The subject of mathematical inequalities is tied closely with optimization methods. While most of the subject of inequalities is often left out of the ordinary educational track, they are common in mathematics Olympiads.

Contents
1 Overview
2 Solving Inequalities
2.1 Linear Inequalities
2.2 Polynomial Inequalities
2.3 Rational Inequalities
3 Complete Inequalities
4 List of Theorems
4.1 Introductory
4.2 Advanced
5 Problems
5.1 Introductory
5.2 Intermediate
5.3 Olympiad
6 Resources
6.1 Books
6.1.1 Intermediate
6.1.2 Olympiad
6.2 Articles
6.2.1 Olympiad
6.3 Classes
6.3.1 Olympiad
7 See also

Overview

Inequalities are arguably a branch of elementary algebra, and relate slightly to number theory. They deal with relations of variables denoted by four signs: $>$, $<$, $\ge$, $\le$. For two numbers $a$ and $b$: $a > b$ if $a$ is greater than $b$, that is, $a - b$ is positive.
$a < b$ if $a$ is smaller than $b$, that is, $a - b$ is negative. $a \ge b$ if $a$ is greater than or equal to $b$, that is, $a - b$ is nonnegative. $a \le b$ if $a$ is less than or equal to $b$, that is, $a - b$ is nonpositive. Note that $a > b$ if and only if $b < a$, and vice versa. The same applies to the latter two signs: $a \ge b$ if and only if $b \le a$, and vice versa. Some properties of inequalities are: If $a > b$, then $a + c > b + c$, where $c$ is any real number. If $a > b$, then $ac > bc$, where $c > 0$. If $a > b$, then $ac < bc$, where $c < 0$.

Solving Inequalities

In general, when solving inequalities, the same quantities can be added or subtracted without changing the inequality sign, much like equations. However, when multiplying, dividing, or taking square roots, we have to watch the sign. For instance, notice that although $(-3)^2 > 2^2$, we have $-3 < 2$. In particular, when multiplying or dividing by negative quantities, we have to flip the sign. Complications can arise when the value multiplied can have varying signs depending on the variable. We also have to be careful about the boundaries of the solutions. In the example $x > 3$, the value $x = 3$ does not satisfy the inequality because the inequality is strict. However, in the example $x \ge 3$, the value $x = 3$ satisfies the inequality because the inequality is nonstrict. Solutions can be written in interval notation. Closed bounds use square brackets, while open bounds (and bounds at infinity) use parentheses. For instance, $x \in [3, 6)$ means $3 \le x < 6$.

Linear Inequalities

Linear inequalities can be solved much like linear equations to get implicit restrictions upon a variable. However, when multiplying/dividing both sides by negative numbers, we have to flip the sign.

Polynomial Inequalities

The first part of solving polynomial inequalities is much like solving polynomial equations -- bringing all the terms to one side and finding the roots. Afterward, we have to consider bounds. We're comparing the sign of the polynomial with different inputs, so we could imagine a rough graph of the polynomial and how it passes through zeroes (since passing through zeroes could change the sign).
Then we can find the appropriate bounds of the inequality.

Rational Inequalities

A more complex example is $\frac{2x-3}{x+5} \ge 0$. Here is a common mistake: multiplying both sides by $x + 5$ as one of the last steps, while keeping the inequality sign in the same direction. The problem here is that we don't know whether the quantity $x + 5$ is negative or not; we can't assume that it is positive for all real $x$. Thus, we may have to reverse the direction of the inequality sign if we are multiplying by a negative number. But we don't know that the quantity is negative either. A correct solution would be to move everything to the left side of the inequality, and form a common denominator. Then, it will be simple to find the solutions to the inequality by considering the sign (negativeness or positiveness) of the fraction as $x$ varies. We will start with an intuitive solution, and then a rule can be built for solving general fractional inequalities. To make things easier, we test small integers. $x = 1$ makes a good starting point, but does not solve the inequality. Nor does $x = 0$. Therefore, these two aren't solutions. Then we begin to test numbers such as $2$, $3$, and so on. All of these work. In fact, it's not difficult to see that the fraction will remain positive as $x$ gets larger and larger. But just where does the numerator $2x - 3$, which causes a negative fraction at $x = 0$ and $x = 1$, begin to cause a positive fraction? We can't just assume that $x = 2$ is the switching point; this solution is not simply limited to integers. The numerator and denominator are big hints. Specifically, we examine that when $2x - 3 = 0$ (the numerator), then the fraction is $0$, and begins to be positive for all higher values of $x$. Solving the equation reveals that $x = \frac{3}{2}$ is the turning point. After more of this type of work, we realize that $x = -5$ brings about division by $0$, so it certainly isn't a solution. However, it also tells us that any value of $x$ that is less than $-5$ brings about a fraction that has a negative numerator and denominator, resulting in a positive fraction and thus satisfying the inequality.
No value between $-5$ and $\frac{3}{2}$ (except $\frac{3}{2}$ itself) seems to be a solution. Therefore, we conclude that the solutions are the intervals $(-\infty, -5) \cup [\frac{3}{2}, +\infty)$. For the sake of better notation, define the "x-intercept" of a fractional inequality to be those values of $x$ that cause the numerator and/or the denominator to be $0$. To develop a method for quicker solutions of fractional inequalities, we can simply consider the "x-intercepts" of the numerator and denominator. We graph them on the number line. Then, in every region of the number line, we test one point to see if the whole region is part of the solution. For example, in the example problem above, we see that we only had to test one value such as $x = 1$ in the region $(-5, \frac{3}{2})$, as well as one value in each of the regions $(-\infty, -5)$ and $[\frac{3}{2}, +\infty)$; then we see which regions are part of the solution set. This does indeed give the complete solution set. One must be careful about the boundaries of the solutions. In the example problem, the value $x = \frac{3}{2}$ was a solution only because the inequality was nonstrict. Also, the value $x = -5$ was not a solution because it would bring about division by $0$. Similarly, any "x-intercept" of the numerator is a solution if and only if the inequality is nonstrict, and every "x-intercept" of the denominator is never a solution because we cannot divide by $0$.

Complete Inequalities

An inequality that is true for all real numbers or for all positive numbers (or even for all complex numbers) is sometimes called a complete inequality. An example for real numbers is the so-called Trivial Inequality, which states that for any real $x$, $x^2 \ge 0$. Most inequalities of this type are only for positive numbers, and this type of inequality often has extremely clever problems and applications.

List of Theorems

Here are some of the more useful inequality theorems, as well as general inequality topics.
Introductory

Arithmetic Mean-Geometric Mean Inequality
Cauchy-Schwarz Inequality
Titu's Lemma
Chebyshev's Inequality
Geometric inequalities
Jensen's Inequality
Nesbitt's Inequality
Rearrangement Inequality
Power mean inequality
Triangle Inequality
Trivial inequality
Schur's Inequality

Advanced

Aczel's Inequality
Callebaut's Inequality
Carleman's Inequality
Hölder's inequality
Radon's Inequality
Homogenization
Isoperimetric inequalities
Maclaurin's Inequality
Muirhead's Inequality
Minkowski Inequality
Newton's Inequality
Ptolemy's Inequality

Problems

Introductory

Practice Problems on Alcumus: Inequalities (Prealgebra), Solving Linear Inequalities (Algebra), Quadratic Inequalities (Algebra), Basic Rational Function Equations and Inequalities (Intermediate Algebra)

A tennis player computes her win ratio by dividing the number of matches she has won by the total number of matches she has played. At the start of a weekend, her win ratio is exactly $.500$. During the weekend, she plays four matches, winning three and losing one. At the end of the weekend, her win ratio is greater than $.503$. What's the largest number of matches she could've won before the weekend began? (1992 AIME Problems/Problem 3)

Intermediate

Practice Problems on Alcumus: Quadratic Inequalities (Algebra), Advanced Rational Function Equations and Inequalities (Intermediate Algebra), General Inequality Skills (Intermediate Algebra), Advanced Inequalities (Intermediate Algebra)

Given that , and show that . (weblog_entry.php?t=172070 Source)

Olympiad

See also: Category:Olympiad Inequality Problems

Let $a$, $b$, $c$ be positive real numbers. Prove that $\frac{a}{\sqrt{a^2+8bc}} + \frac{b}{\sqrt{b^2+8ca}} + \frac{c}{\sqrt{c^2+8ab}} \ge 1$. (2001 IMO Problems/Problem 2)

Resources

Books

Intermediate

Introduction to Inequalities
Geometric Inequalities

Olympiad

Advanced Olympiad Inequalities: Algebraic & Geometric Olympiad Inequalities by Alijadallah Belabess.
The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities by J. Michael Steele.
Problem Solving Strategies by Arthur Engel contains significant material on inequalities.
Inequalities by G. H. Hardy, J. E. Littlewood, G. Pólya.

Articles

Olympiad

Inequalities by MIT Professor Kiran Kedlaya.
Inequalities by IMO gold medalist Thomas Mildorf.

Classes

Olympiad

The Worldwide Online Olympiad Training Program is designed to help students learn to tackle mathematical Olympiad problems in topics such as inequalities.

See also

Mathematics competitions
Math books

Categories: Algebra, Inequalities
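The "x-intercept"/test-point method from the Rational Inequalities section above can be sketched in a few lines. The inequality used here is an assumption chosen to match the boundary points $-5$ and $\frac{3}{2}$ in the worked example (the original expression did not survive extraction): $\frac{2x-3}{x+5} \ge 0$.

```python
from fractions import Fraction

def satisfies(x):
    """Illustrative inequality (assumed): (2x - 3) / (x + 5) >= 0."""
    return (2 * x - 3) / (x + 5) >= 0

# "x-intercepts": the numerator is zero at 3/2, the denominator at -5.
# They split the number line into three regions; one test point decides each.
regions = [("(-inf, -5)",  Fraction(-6)),
           ("(-5, 3/2)",   Fraction(0)),
           ("(3/2, +inf)", Fraction(2))]
for name, t in regions:
    print(name, satisfies(t))   # True, False, True

# Boundaries: 3/2 zeroes the numerator and is kept (the inequality is
# nonstrict); -5 zeroes the denominator and is always excluded.
print(satisfies(Fraction(3, 2)))  # True
```

Exact rationals (`Fraction`) avoid any floating-point sign errors near the boundary points; the printed pattern reproduces the solution set $(-\infty, -5) \cup [\frac{3}{2}, +\infty)$.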
https://bmcpsychiatry.biomedcentral.com/articles/10.1186/s12888-025-07202-7
BMC Psychiatry

Research, Open access, Published: Predictors of extrapyramidal side effects among patients taking antipsychotic medication at Mekelle psychiatry units, Northern Ethiopia, 2023: unmatched case-control study

Welu Abadi Gebru, Gebregziabher Kidanemariam Asfaw, Kenfe Tesfay Berhe, Tesfaye Derbie Begashaw, Hiwot Gebrewahid Reta & Hagos Tsegabrhan Gebresilassie

BMC Psychiatry volume 25, Article number: 837 (2025)

Abstract

Background

Schizophrenia is one of the most disruptive of neuropsychiatric disorders, affecting around 1% of the world's population. Antipsychotic medications have been the backbone of schizophrenia treatment for the past 50 years. Extrapyramidal side effects of antipsychotic medication are a major problem associated with various factors. However, there is a dearth of evidence about the predicting factors for extrapyramidal side effects.

Objective

To determine the predictors of extrapyramidal side effects among all patients taking antipsychotic medication at Mekelle Psychiatry units, Northern Ethiopia, 2023.

Methodology

A case-control study design was employed with a total of 201 study subjects (67 cases and 134 controls). A systematic random sampling technique was employed to select the required study subjects. Extrapyramidal side effects were measured by the Simpson-Angus Scale, the Abnormal Involuntary Movement Scale (AIMS), and the Barnes Akathisia Rating Scale (BARS). The data were analyzed using Statistical Package for Social Sciences (SPSS) version 22. Bivariate and multiple logistic regression analyses were performed to determine the association between the independent and dependent variables. A significant independent predictor was declared at a 95% confidence interval and a P-value of less than 0.05.
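The adjusted odds ratios (AORs) and 95% confidence intervals reported in the Results are the standard exponentiated outputs of a multiple logistic regression: AOR = exp(β) with CI = exp(β ± 1.96·SE) for a fitted coefficient β and standard error SE. As a sketch (the coefficient and standard error below are hypothetical, chosen only so the output lands near the mental-illness-history estimate in the Results; they are not taken from the paper's model output):

```python
import math

def aor_with_ci(beta, se, z=1.96):
    """Adjusted odds ratio and 95% CI from a logistic-regression
    coefficient (log-odds scale) and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical fitted values, not from the paper:
aor, lo, hi = aor_with_ci(beta=1.843, se=0.58)
print(f"AOR = {aor:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
# AOR = 6.32, 95% CI: 2.03-19.68
```

Note the asymmetry of the interval around the point estimate: the interval is symmetric on the log-odds scale and becomes skewed after exponentiation, which is why the reported CIs (e.g. 2.026–19.692) stretch much further above the AOR than below it.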
Result

Among the study subjects, the factors significantly associated with EPS were: being female (AOR = 0.140, 95% CI: 0.042–0.465, p = 0.001), being single (AOR = 3.084, 95% CI: 0.569–16.727, p = 0.006), perceived stigma (AOR = 0.165, 95% CI: 0.038–0.708, p = 0.015), having a mental illness history (AOR = 6.316, 95% CI: 2.026–19.692, p = 0.001), combination of first-generation antipsychotic drugs (AOR = 0.095, 95% CI: 0.010–0.877, p = 0.038), khat chewing practice/behavior (AOR = 4.033, 95% CI: 1.120–14.531, p = 0.033), and a history of alcohol use with current drinking (AOR = 6.213, 95% CI: 1.375–28.079, p = 0.018).

Conclusion and recommendation

Our study revealed that being female, being single, stigma, combination of first-generation antipsychotic drugs, having a mental illness history, khat chewing practice/behavior, and alcohol intake in the last 3 months were significant factors for extrapyramidal side effects. Psychiatric professionals should routinely assess the predictors of extrapyramidal side effects; in particular, attention to combinations of first-generation antipsychotic drugs, substance use alongside antipsychotic drugs, and the management of comorbid diagnoses is strongly recommended.

Peer Review reports

Introduction

Psychosis represents one of the most profound forms of mental illness, characterized by significant disruptions in behavior, disorganized thinking, impaired understanding, and a loss of insight. Individuals may experience positive symptoms (such as hallucinations and delusions), negative symptoms (like emotional flatness and social withdrawal), and cognitive impairments, including difficulties with working memory, slowed information processing, and challenges in understanding social cues. Schizophrenia is one of the most debilitating neuropsychiatric conditions, impacting around 1% of the global population.
Schizophrenia is a prevalent mental health disorder marked by a complex and often severely disruptive set of symptoms that affect cognition, emotions, perception, and overall behavior. Schizophrenia ranks among the top ten leading causes of long-term disability worldwide, impacting approximately 1% of the global population. Antipsychotic (AP) medications—while essential for managing schizophrenia—often lead to serious and common extrapyramidal side effects (EPSE). These movement-related adverse reactions can emerge just a few days after beginning treatment. Despite this, AP drugs have remained the cornerstone of schizophrenia therapy for over fifty years. Moreover, acute extrapyramidal symptoms (EPS) represent a complex clinical phenomenon, encompassing a range of syndromes such as Parkinsonism, akathisia, acute dystonia, and dyskinesia. While the advent of clozapine and other second-generation antipsychotics has led to a reduction in the frequency of these side effects, EPS remains a persistent and significant concern in the treatment of schizophrenia. Historically, first-generation antipsychotics, introduced in the mid-20th century, demonstrated inconsistent efficacy in managing schizophrenia symptoms and were frequently associated with extrapyramidal side effects (EPS), including acute dystonia, akathisia, Parkinsonism, and tardive dyskinesia. Furthermore, positron emission tomography (PET) studies have indicated that effective symptom control typically requires 60–70% antagonism of dopamine D2 receptors, whereas exceeding this threshold—particularly reaching 75–80% blockade—is strongly linked to the onset of acute EPS. Currently, five major extrapyramidal syndromes are recognized: Parkinsonism, akathisia, acute dystonia, tardive dyskinesia, and the potentially life-threatening neuroleptic malignant syndrome [6, 7]. Dystonia is marked by involuntary, intermittent, or sustained muscle contractions, often resulting in abnormal postures or movements.
While it can develop after prolonged use of antipsychotic medications, it may also emerge shortly after treatment initiation. Notably, over half of acute dystonia cases occur within the first 48 h of antipsychotic use, with approximately 90% arising within the first four days. This condition commonly affects cranial, pharyngeal, cervical, and axial muscles, leading to symptoms such as oculogyric crisis, jaw stiffness, tongue protrusion, torticollis, laryngeal and pharyngeal spasms, dysarthria, dysphagia, and in severe cases, respiratory difficulty, cyanosis, or opisthotonus. Akathisia is a common and distressing adverse effect associated with antipsychotic medications. Individuals affected by this condition typically exhibit intense restlessness and an uncontrollable urge to move, often accompanied by a subjective sense of inner tension, anxiety, and unease. Clinically, it is characterized by increased motor activity that includes repetitive, purposeless, and stereotyped movements, which can significantly impair quality of life and treatment adherence. The prevalence of akathisia among extrapyramidal side effects ranges widely from 5 to 36.8%. It affects approximately 10–20% of patients treated with second-generation (atypical) antipsychotics, which is notably lower compared to the 20–52% incidence observed with first-generation (typical) antipsychotic medications. Akathisia may persist throughout the course of antipsychotic treatment but typically resolves upon discontinuation of the medication. Drug-induced Parkinsonism is characterized by the classic triad of symptoms: bradykinesia, muscle rigidity, and tremor, with postural tremor being more prevalent than resting tremor.
Among patients receiving antipsychotic treatment, the prevalence of Parkinsonism is estimated to be around 15%. Tardive dyskinesia is characterized by involuntary, choreoathetoid movements affecting the orofacial area, limbs, trunk, and respiratory muscles. This condition can develop in any patient undergoing antipsychotic treatment, typically emerging after months or even years of continuous medication use. Unlike other extrapyramidal symptoms, tardive dyskinesia may persist despite discontinuation of antipsychotics and, in some cases, can be irreversible. Among adults using typical antipsychotic drugs long-term, tardive dyskinesia develops at a rate of about 5% per year, with the cumulative risk rising to 25–30% annually in elderly patients. However, the incidence of tardive dyskinesia is notably lower in those treated with atypical antipsychotics, indicating a reduced risk compared to typical agents. Neuroleptic malignant syndrome is a potentially fatal condition that arises in patients who exhibit extreme sensitivity to the extrapyramidal side effects of antipsychotic medications. Patients experiencing extrapyramidal side effects (EPS) suffer considerable adverse impacts on their health-related quality of life. These include poorly controlled mental illness, suboptimal treatment outcomes, impaired daily functioning, increased disability and morbidity, higher rates of hospitalization, as well as significant social and financial burdens. EPS also leads to greater utilization of medical resources, diminished overall quality of life, and negatively affects patients' responses to antipsychotic treatment, which can result in uncontrolled illness, increased risk of complications, and higher mortality rates.
Although various studies have identified numerous predictors of EPS among patients on antipsychotic medications—such as female gender, older age, potent dopamine D2 receptor blockade, use of first-generation antipsychotics, polypharmacy, prolonged treatment duration, concurrent use of anticholinergic or antiparkinsonian drugs, previous history of EPS, poor insight, lack of family support, longer illness duration, stigma, substance use, alcohol consumption, smoking, and genetic variability—there remains a significant gap in the comprehensive assessment and management of these side effects. Specifically, in Ethiopia, there is a notable lack of research focusing on the predictors of EPS among patients receiving antipsychotic therapy, underscoring the urgent need for studies that can provide critical insights to improve clinical care and patient outcomes. Therefore, this study is helpful to mental health professionals for routine assessment and for designing effective prevention and intervention methods for extrapyramidal side effects; it aims to determine the predictors of extrapyramidal side effects among patients taking antipsychotic medications, and provides baseline information for further research. Extrapyramidal side effects (EPS) are serious adverse reactions that result from excessive antagonism of dopamine D2 receptors by antipsychotic medications, particularly in the substantia nigra and striatum regions of the brain. EPS significantly affect drug efficacy, treatment adherence, and the social functioning of individuals with schizophrenia. Although the use of second-generation antipsychotics (SGAs) has been associated with a lower incidence of EPS, these side effects remain a major clinical concern. EPS continue to contribute to reduced quality of life, increased stigma, poor medication compliance, and higher relapse rates among patients.
A study conducted in China explored the determinants of antipsychotic-induced EPS among patients with schizophrenia in real-world clinical settings. The study included 679 individuals diagnosed with schizophrenia, of whom 204 developed EPS while 475 did not. The findings revealed that 126 patients (18.41%) experienced drug-induced Parkinsonism, 33 patients (4.8%) developed akathisia, and 23 patients (3.3%) showed signs suggestive of tardive dyskinesia (TD). Similarly, a cross-sectional study conducted among institutionalized patients with psychotic disorders in Central Estonia reported that 31.3% of the participants experienced neuroleptic-induced akathisia, 23.2% developed neuroleptic-induced Parkinsonism, and 32.3% presented with neuroleptic-induced tardive dyskinesia. A study from Britain revealed that between 50% and 70% of individuals with schizophrenia experience at least one serious adverse effect related to antipsychotic therapy. Among these serious adverse effects, the annual incidence of Parkinsonism ranged from 37 to 44%, while akathisia and tardive dyskinesia were reported in 26–35% and 8–10% of cases, respectively. Beyond extrapyramidal symptoms, individuals with schizophrenia commonly encounter additional adverse effects linked to antipsychotic use. These include weight gain, increased drowsiness, difficulty sleeping, sexual dysfunction, dry mouth, constipation, urinary issues, and episodes of dizziness. Findings from a study in Germany identified several risk factors for the development of extrapyramidal symptoms (EPS). These included the selection of certain second-generation antipsychotics—clozapine posed the lowest risk, while risperidone was linked to the highest—as well as higher drug dosages, a previous history of EPS, and the presence of comorbid illnesses. Moreover, all cases of tardive dyskinesia (TD) occurred in older patients compared to the overall study group.
A study conducted in Nigeria reported that the prevalence of tardive dyskinesia (TD) was 14.5% among female patients and 7% among male patients. However, other research found no significant difference in TD prevalence between sexes. Among the various types of extrapyramidal side effects, the prevalence of akathisia ranges from 5 to 36.8%. Akathisia is observed in about 10–20% of patients receiving newer-generation antipsychotics, which is notably lower than the 20–52% prevalence reported with typical antipsychotic medications. Akathisia may continue throughout the course of antipsychotic treatment but typically resolves once the medication is discontinued. The prevalence of Parkinsonism among patients receiving antipsychotic medications is estimated at around 15%. Parkinsonism is generally regarded as a reversible condition that typically resolves within four months. However, in some instances, it may persist for six to eighteen months, and approximately 15% of cases of antipsychotic-induced Parkinsonism are reported to become persistent. In Ethiopia, the prevalence of antipsychotic-induced movement disorders has been reported as 46.4% for neuroleptic-induced Parkinsonism, 28.6% for neuroleptic-induced akathisia, and 11.9% for neuroleptic-induced tardive dyskinesia. A study conducted in Jimma assessed the prevalence and associated factors of tardive dyskinesia (TD) among psychiatric patients receiving first-generation antipsychotics. The study found that TD prevalence was 14.6%, with a range between 10.76% and 18.4%. Several factors showed significant associations with the development of TD. Patients older than 45 years were more than four times as likely to develop extrapyramidal side effects compared to those younger than 30 years (AOR 4.5; 95% CI: 9.7–20.4). Current cigarette smoking—defined as having smoked at least once in the past month—was linked to a 1.4-fold higher risk of TD compared to non-smokers during that period (AOR 1.4; 95% CI: 2.6–7.8).
Additionally, patients receiving chlorpromazine-equivalent doses greater than 400 mg/day were 6.5 times more likely to develop TD compared to those on doses between 50 and less than 100 mg/day (AOR 6.5; 95% CI: 2.6–26.8). All of these factors were found to have a statistically significant association with the occurrence of TD induced by first-generation antipsychotics. Several factors have been significantly linked to an increased risk of extrapyramidal symptoms (EPS) in patients receiving antipsychotic medications. These include demographic characteristics such as female gender and older age; pharmacological factors like the strong D2 receptor antagonism of certain antipsychotics, the use of first-generation agents, and antipsychotic polytherapy; as well as longer duration of treatment. In addition, medical conditions such as brain atrophy, diabetes, and substance use disorders, along with genetic variations, have been associated with a higher likelihood of developing EPS. Factors significantly associated with the development of tardive dyskinesia (TD) include older age, female sex, the presence of brain damage, higher cumulative doses of neuroleptic medications, longer duration of antipsychotic exposure, and the occurrence of drug-induced Parkinsonism during the early stages of neuroleptic therapy. Additionally, a primary psychiatric diagnosis of an affective disorder and the use of substances such as alcohol have been linked to an increased risk of TD [8, 17, 28]. Extrapyramidal motor symptoms significantly affect the effectiveness of antipsychotic treatment, patient adherence to medication, and the social functioning of individuals with schizophrenia. Although the broader use of second-generation antipsychotics (SGAs) has been linked to a lower incidence of EPS, these symptoms remain a major clinical concern. EPS continue to contribute to reduced quality of life, increased stigma, poor adherence to antipsychotic therapy, and a higher risk of relapse.
Extrapyramidal motor symptoms have a great influence on patients' compliance with antipsychotic medications, leading to treatment failure. Hence, extrapyramidal side effects need to be properly diagnosed and appropriately treated so that compliance and the efficacy of the medications increase. Extrapyramidal side effects have negative consequences for health-related quality of life, and contribute to uncontrolled mental illness, poor treatment outcomes, impaired functioning in daily life, disability, morbidity, increased hospitalization and a poor attitude towards antipsychotic drugs. The most serious side effects of FGAs are neurological and largely restricted to the extrapyramidal motor system. FGAs remain the most commonly prescribed medications in many parts of the world, especially developing countries, as they are considerably less expensive than newer antipsychotic drugs. Assessment and management of antipsychotic medication side-effects are considered essential to prevent negative physical health outcomes, to improve tolerability and to promote medication adherence [19, 31]. However, previous work suggests that clinician knowledge and skill in the management of antipsychotic medication side-effects remain poorly developed. To this effect, patients taking antipsychotic medications should be monitored regularly for adverse effects and managed accordingly [32, 33]. This study therefore aims to determine the predictors of extrapyramidal side effects among patients taking antipsychotic medications in Tigray, Ethiopia. Recent research has shown that extrapyramidal side effects among patients taking antipsychotic drugs are a global challenge. EPS has a great impact on drug efficacy, drug compliance and the social functioning of patients with schizophrenia.
Factors that have been significantly associated with EPS among patients taking antipsychotic drugs span socio-demographic, clinical, substance use-related, antipsychotic-use and psychosocial domains: female gender, older age, the high D2 receptor antagonism of some antipsychotics, use of first-generation antipsychotics, antipsychotic poly-therapy, longer duration of drug treatment, perceived stigma, social support, insight, illnesses such as brain atrophy and diabetes, substance addiction, and genetic diversity are all related to a higher incidence of EPS. Influential studies have rightfully advocated for the study of higher-order cognitive impairments in schizophrenia, considering these impairments a central pathway for understanding core elements of the illness' underlying pathophysiology. The frequency of motor abnormalities among patients with schizophrenia ranges between 50 and 65%, compared with 5% in healthy controls. Abnormal motor functioning in patients with schizophrenia has been noted since the earliest systematic clinical characterizations of the illness. With the advent of typical antipsychotics, motor dysfunction in schizophrenia has increasingly been associated with their extrapyramidal side effects. However, in addition to this early characterization of motor dysfunction, evidence suggests that antipsychotic medications may exacerbate the emergence of spontaneous motor disorders rather than being their single underlying cause. Recent findings on the association between akathisia and socio-demographic factors are inconsistent. Some studies reported no significant difference in age between the akathisia and non-akathisia groups [32, 34, 35]. The relationship between akathisia and sex has also been inadequately investigated. Some studies reported a higher prevalence of akathisia among females and among younger patients of both sexes [27, 31, 37].
Most epidemiological studies have not reported any sex differences in vulnerability to akathisia [33, 38, 39]. Recent research found that patients who received antipsychotic poly-therapy were at significantly higher risk of akathisia after controlling for the influence of age, gender, level of education, level of psychotic symptoms, substance use comorbidities, and current administration of antidepressants, anticholinergic drugs, benzodiazepines and daily antipsychotic dose. The combination of second-generation antipsychotics was associated with a risk of akathisia compared with second-generation antipsychotics used in mono-therapy. Another study, on the other hand, stated that "akathisia tends to prevail in men". A UK journal article concluded that there was no significant gender difference in the development of drug-induced Parkinsonism, but there are reports that the male-to-female ratio is 1:2 [16, 41]. According to a study in China, compared with the non-EPS group, patients in the EPS group were older and had a longer duration since first being prescribed antipsychotics. The EPS group had a higher frequency of atypical antipsychotic poly-therapy, of combined typical and atypical antipsychotic therapy, and of combined treatment with mood stabilizers; antipsychotics with a high D2 receptor antagonistic effect and illness duration were risk factors for EPS. Antipsychotic drugs enhance recovery by controlling symptoms, improving quality of life, regaining basic life functioning, and preventing relapse among patients taking them. Drug compliance in patients with schizophrenia is predicted by the patients' attitudes towards medications. A negative attitude towards antipsychotic medication is common in clinical practice, with a prevalence ranging from 7.5% to 46.7%. Up to 75% of those with a negative attitude are non-adherent to antipsychotic drugs, which results in relapse.
The prevalence of relapse due to non-adherence varies from 50 to 92% globally. The overall prevalence of antipsychotic polypharmacy was 28.2% in an institution-based cross-sectional study of 423 subjects conducted to identify factors associated with antipsychotic polypharmacy among schizophrenia outpatients; extrapyramidal side effects, repeated psychiatric hospitalization, longer duration of treatment, and medication non-adherence were significantly associated with antipsychotic polypharmacy. The risk factors associated with akathisia are poorly understood; it is noted more with high-potency antipsychotics, possibly due to the use of higher doses, and middle-aged women are at greatest risk [26, 44]. Akathisia accounts for 50% of extrapyramidal symptoms and is one of the most common movement disorders caused by antipsychotic drugs. Factors significantly associated with drug-induced Parkinsonism are high doses, high-potency drug use, older age, female sex, hereditary susceptibility and concurrent tardive dyskinesia [25, 44]. Factors significantly associated with the occurrence of tardive dyskinesia include older age, female sex, brain damage, dementia, mood disorders, longer duration of antipsychotic therapy, use of anticholinergic and antiparkinsonian drugs, and a history of previous extrapyramidal symptoms. Risk factors for the development of dystonia include primarily the duration of antipsychotic use and high antipsychotic doses, younger age, male gender, mental retardation, a positive family history of dystonia, a previous dystonic reaction, and recent cocaine and alcohol abuse. Medication non-adherence is one of the biggest problems, increasing re-hospitalization and prolonging psychotic symptoms, and EPS is among the most challenging aspects of treatment.
Medication non-adherence can cause high rates of relapse within 5 years of recovery from the first episode. Lack of adherence to medication treatment is associated with worsening of symptoms, poor prognosis, high costs and unnecessary adjustments to medical prescriptions. Factors influencing non-adherence may be broadly categorized into factors related to the treatment, patient-related factors, health care, and socio-economic circumstances. Different studies in Ethiopia show that non-adherence to antipsychotic medication varied from 26.5 to 47.9% [37, 43, 45, 46]. The prevalence of non-adherence was 41.0% among schizophrenia patients. Living in rural areas (adjusted odds ratio [AOR] = 2.07; 95% confidence interval [CI]: 1.31, 3.28), current substance use (AOR = 1.67; 95% CI: 1.09, 2.56), long duration of treatment (AOR = 2.07; 95% CI: 1.22, 3.50) and polypharmacy (AOR = 2.13; 95% CI: 1.34, 3.40) were found to be significantly associated with non-adherence. Factors associated with medication non-adherence among patients with schizophrenia include medication side effects, poverty, lack of family support, duration of illness, stigma, substance use, alcohol consumption, and smoking. Thus, non-adherence remains a challenge for patients with psychiatric disorders and their health care providers, contributing to substantial worsening of the disease, frequent relapse, increased mortality, re-hospitalization, and increased health care costs. Insight in mental health is the degree of the patient's awareness and understanding of their attributions, feelings, behavior, and disturbing symptoms, including self-understanding as well as the potential causes of the psychiatric presentation. Insight is viewed as a multidimensional variable covering the realization of the need for treatment, the ability to re-label unusual mental events as pathological, and the attribution of appropriate causes for mental illness [48, 49].
Unawareness of having a mental illness among patients with schizophrenia is viewed as an independent phenomenon rather than a secondary manifestation of schizophrenia symptoms. A WHO international pilot study showed that 98% of patients with schizophrenia had a lack of insight, and a systematic study revealed that between 50% and 80% of patients with schizophrenia were characterized by poor insight. The consequences of lack of insight include failure to recognize the need for treatment, leading to medication non-adherence, with impacts on patients, families and communities, and on the course of illness. Insight in patients with schizophrenia has been significantly associated with depressive symptoms, positive and negative syndromes of schizophrenia, unemployment, relapse and re-hospitalization. A study done in Ethiopia found that positive symptoms, negative symptoms, shorter (≤ 5 years) duration of illness, first-generation antipsychotics, sedation and extrapyramidal side effects were negatively associated with attitude towards antipsychotic medication treatment, whereas insight into illness was positively associated with attitude towards antipsychotic medications. Another study done in Ethiopia found that age at first onset of illness, duration of treatment and depressive symptoms were inversely associated with mean insight score, whereas unemployment, positive and negative syndrome scores, previous hospitalization and having two or more episodes were positively associated with mean insight score. A pooled analysis in a Canadian study revealed a small, positive effect size suggesting increased EPS in substance-abusing patients. Cocaine was associated with the largest effect size estimate. Dual-diagnosis patients were more frequently male than single-diagnosis patients.
A study done in Ethiopia found that khat, alcohol, and a history of substance use were factors significantly associated with extrapyramidal side effects among patients taking antipsychotic drugs. Conceptual framework According to reports of previous studies, several predicting factors have been found responsible for the development of extrapyramidal side effects in patients taking antipsychotic medications, including socio-demographic factors, clinical factors, substance use-related factors, antipsychotic use, and psychosocial factors. A conceptual framework was adopted from a review of different literatures [3, 42, 52], as illustrated in (Fig. 1). Objectives General objective To determine the predictors of extrapyramidal side effects among all patients taking antipsychotic medication at Mekelle psychiatry units, Northern Ethiopia, 2023. Specific objectives To assess predictors of dystonia and Parkinsonism, Northern Ethiopia, 2023. To assess predictors of tardive dyskinesia, Northern Ethiopia, 2023. To assess predictors of akathisia, Northern Ethiopia, 2023. Methodology Study area and period This study was conducted in Mekelle town psychiatric clinics (Ayder Comprehensive Specialized Hospital and Mekelle General Hospital), located in Tigray regional state, 783 km north of Addis Ababa, the capital city of Ethiopia. Mekelle has one Comprehensive Specialized Hospital (Ayder), three other government hospitals (Mekelle, Quiha and North Command General Hospitals) and 8 health centers. Ayder Comprehensive Specialized Hospital serves more than 9 million people who come from Afar, Tigray, and Amhara, and has about 500 inpatient beds. It runs all specialized and non-specialized hospital services, including emergency, outpatient, and inpatient services for all age groups, special facility services and substance rehabilitation centers.
All these services are also run by Mekelle General Hospital, a teaching hospital, except rehabilitation. These hospitals are also used as research centers for the College of Health Sciences, Mekelle University. Psychiatric services are given by psychiatrists, general practitioners, MSc holders in mental health, BSc holders in psychiatric nursing, and clinical psychologists. More than 27 health professionals work in the psychiatry clinics; among them are 2 psychiatrists and 3 clinical psychologists. The current flow of psychiatric patients averaged 550 and 200 patients per month in Ayder Comprehensive Specialized Hospital and Mekelle General Hospital respectively. Ayder Comprehensive Specialized Hospital had 24 beds for inpatient services, but according to monthly reports Mekelle General Hospital has no inpatient service. The study was conducted from March to April 2023. Study design An institution-based unmatched case-control study was conducted. Population Source population All psychotic patients diagnosed by psychiatry professionals and taking antipsychotic medications in Mekelle town psychiatric clinics, Tigray, Northern Ethiopia. Study population Case All sampled psychotic patients diagnosed by psychiatry professionals, taking antipsychotic medications, with extrapyramidal side effects in Mekelle town psychiatric clinics, Northern Ethiopia, 2023. Control All sampled psychotic patients diagnosed by psychiatry professionals, taking antipsychotic medications, without extrapyramidal side effects in Mekelle town psychiatric clinics, Northern Ethiopia, 2023.
Eligibility criteria Inclusion criteria Case Clients above 18 years old who were diagnosed by a psychiatry professional as having developed EPSE while taking antipsychotic medications, assessed through face-to-face interviews, chart records and objective assessment, and who had attended more than one month of follow-up at the psychiatry clinic during the data collection period were included in the study. Control Clients diagnosed by psychiatry professionals as taking antipsychotic medications without developing extrapyramidal side effects, assessed through face-to-face interviews, chart records and objective assessment. Exclusion criteria Participants under 18 years of age were excluded from the study population, as they are unable to consent to participate. Those with severe illness, those unable to hear or speak, and individuals not volunteering to participate were ineligible for inclusion. Sample size and sampling technique Sample size The sample size was calculated using the OpenEpi software proportion formula, taking from a previous study (Taye H, Awoke T, et al., 2014) the finding that one of the risk factors for extrapyramidal side effects was a longer duration of illness (AOR = 2.45), with p = 40%. Other assumptions made during the sample size calculation were a 5% level of significance (α) and a 95% confidence interval (Zα/2 = 1.96). Based on these assumptions, the sample size was calculated as follows: two-sided confidence level = 95%, power = 80%, ratio of cases to controls = 1:2, and odds ratio = 2.45 (Table 1). Sampling technique A systematic random sampling method was employed, and study participants were proportionally allocated to both psychiatric clinics. Since the ratio of cases to controls was 1:2, for every case 2 controls were selected.
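As a rough cross-check, the unmatched case-control sample size under these assumptions (two-sided 95% confidence, 80% power, 1:2 case-to-control ratio, OR = 2.45, 40% exposure among controls) can be reproduced with the Kelsey formula, one of the formulas OpenEpi reports for this design. This is an illustrative sketch, not the authors' calculation; the function name is ours.

```python
import math

def case_control_sample_size(p0, odds_ratio, controls_per_case=2,
                             z_alpha=1.96, z_beta=0.84):
    """Unmatched case-control sample size (Kelsey formula).

    p0 is the exposure prevalence among controls; z_beta = 0.84 corresponds
    to 80% power. Returns (number of cases, number of controls).
    """
    r = controls_per_case
    # Exposure prevalence among cases implied by the odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p1 + r * p0) / (r + 1)   # pooled exposure prevalence
    q_bar = 1 - p_bar
    n_cases = ((z_alpha + z_beta) ** 2 * p_bar * q_bar * (r + 1)
               / (r * (p1 - p0) ** 2))
    n_cases = math.ceil(n_cases)
    return n_cases, r * n_cases

cases, controls = case_control_sample_size(p0=0.40, odds_ratio=2.45)
# With these inputs: 61 cases and 122 controls, i.e. 183 participants in total.
```

A total of 183 matches the n/N ratio (183/750) quoted in the proportional-allocation step of this study.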
Proportional allocation was used to calculate the sample size from each hospital (Fig. 1): ni = (n/N) × Ni, where K is the number of hospitals, ni is the sample size of the ith hospital, Ni is the population size of the ith hospital, n = Σni is the total sample size and N = ΣNi is the total population size; nACSH = (183/750) × 550 = 148 and nMekelle Hospital = (183/750) × 200 = 53. Study variables Dependent variable Extrapyramidal side effects. Independent variables Socio-demographic factors [age, sex, ethnicity, educational status, occupation, income, marital status and residence]. Clinical factors [family history of mental illness, duration of illness, duration of treatment, onset of illness, type of antipsychotic, physical illness, type of illness, route of antipsychotics, number of admissions, uncontrolled illness, multidrug use, adherence to antipsychotics, number of episodes and overdosage]. Substance-related factors [alcohol use, cigarette use, khat use, cocaine, cannabis and caffeine]. Psychosocial factors [social support, insight and stigma]. Operational definitions Extrapyramidal side effects are drug-induced movement disorders that include acute and tardive symptoms. These symptoms include dystonia, akathisia, Parkinsonism, bradykinesia, tremor, and tardive dyskinesia. Dystonia is characterized by intermittent or sustained muscle action; movements vary from fleeting disturbance to maintained abnormal postures. Tardive dyskinesia: the core sign of tardive dyskinesia is orofacial dyskinesia, or the buccolinguomasticatory triad, which consists of involuntary choreatic movements of the face, lips, tongue or jaw; choreiform purposeless movements of the trunk and/or limbs are also included. Bradykinesia/akinesia is a drug-induced extrapyramidal syndrome characterized by reduced facial expression, decreased spontaneity, apathy, loss of expressive gestures, and flattening of vocal inflection; notably, these are also among the negative symptoms of schizophrenic illness.
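The allocation arithmetic ni = (n/N) × Ni can be sketched as below, using the monthly patient flows given in the text (550 at Ayder, 200 at Mekelle General). The function name is illustrative, and simple rounding may differ by a unit or two from the site counts quoted above.

```python
def proportional_allocation(total_n, site_populations):
    """Allocate total_n across sites in proportion to size: n_i = (total_n / N) * N_i."""
    N = sum(site_populations.values())
    return {site: round(total_n * Ni / N) for site, Ni in site_populations.items()}

# Monthly patient flows from the text; N = 550 + 200 = 750.
alloc = proportional_allocation(183, {"Ayder": 550, "Mekelle General": 200})
```

Each site's share is its fraction of the combined patient flow times the total sample, so the allocations always sum to (approximately) the total sample size.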
Akathisia: a sensation of inner restlessness, irritability, agitation, and violent outbursts. Parkinsonism: the core features of drug-induced Parkinsonism include bradykinesia, rigidity, and tremor, with a gradual onset over days or, more usually, weeks. Patients were considered to have Parkinsonism if they scored at least 1 (mild) on two Simpson-Angus Scale (SAS) items or 2 (moderate) on one of the items. The SAS is a 10-item rating scale that has been used widely for the assessment of neuroleptic-induced Parkinsonism (NIP) in clinical practice. Total scores range from 0 to 40: a score below 3 is normal; 6 indicates a level of disorder for which treatment should be reconsidered; 12 requires attention; and 18 almost certainly requires modification of pharmacotherapy. The Barnes Akathisia Rating Scale (BARS) captures the subjective characteristics of acute akathisia: (1) perception of a foreign but inner compulsion to move; (2) lack of control over motor behavior; (3) a feeling of inhibition of purposeful actions; and (4) a subjectively close or inseparable relationship between inner restlessness and restless movement. Each item is rated on a 0–3 scale (0 = no distress, 1 = mild, 2 = moderate, 3 = severe), for a maximum of 12. Patients were considered to have met the criteria for akathisia if they scored at least 2 (mild) on the BARS global item. Abnormal Involuntary Movement Scale (AIMS): patients were diagnosed with suspected tardive dyskinesia (TD) when they scored 2 (mild) on at least two AIMS items or 3 (moderate) on one AIMS item (the definitive diagnosis of TD requires more than two consecutive AIMS assessments). The AIMS rates the severity of abnormal movements from 0 to 4: 0 = none, 1 = minimal (may be extreme normal), 2 = mild, 3 = moderate, and 4 = severe. It is a valuable tool for clinicians monitoring the effects of long-term treatment with neuroleptic medications and for researchers studying the effects of these drugs.
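The case-finding cut-offs described above translate into simple predicates. A minimal sketch assuming integer item scores; the function names are ours, not from the study instrument:

```python
def meets_akathisia_criteria(bars_global_score):
    """Akathisia if the BARS global item scores at least 2 (mild)."""
    return bars_global_score >= 2

def suspected_tardive_dyskinesia(aims_item_scores):
    """Suspected TD if at least two AIMS items score 2 (mild) or higher,
    or any single item scores 3 (moderate) or higher."""
    return (sum(1 for s in aims_item_scores if s >= 2) >= 2
            or any(s >= 3 for s in aims_item_scores))
```

For example, `suspected_tardive_dyskinesia([2, 2, 0, 1])` and `suspected_tardive_dyskinesia([0, 0, 3, 0])` are both true, while `[1, 1, 2, 0]` is not; per the text, a definitive TD diagnosis would still require more than two consecutive AIMS assessments.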
The abnormal involuntary movement scale test is given every three to six months to monitor the patient for the development of tardive dyskinesia.

Social support: on the Oslo-3 scale, a score of 3–8 was considered poor support, 9–11 moderate support, and 12–14 strong support.

Substance use: current use is use of at least one specific substance for non-medical purposes within the last three months (alcohol, khat, tobacco, other substances). Ever use of a substance is use of at least one specific substance for non-medical purposes at least once in a lifetime (alcohol, khat, tobacco, other substances).

Past and current mental illness history: previously and currently diagnosed with mental illness, and whether treated in the past or currently on treatment.

Medication non-adherence: a patient on antipsychotic medications who scored ≥ 2 on the MMAS (Morisky Medication Adherence Scale) was considered non-adherent to antipsychotic medications.

Data collection tools and procedures

A structured interviewer-administered questionnaire with nine parts was used: (1) socio-demographic information; (2) bio-psychosocial factors: substance use and psychiatric disorder; (3) Simpson-Angus Scale (SAS); (4) Barnes Akathisia Rating Scale (BARS); (5) Abnormal Involuntary Movement Scale (AIMS); (6) the four-item Morisky scale, a commonly used adherence measure; and the Perceived Stigma Questionnaire. The chart was also reviewed to check psychiatric and other medical diagnoses. A face-to-face interview and chart review were conducted using the structured questionnaire. Data were collected by six psychiatry professionals who had previous data collection experience. The principal investigators supervised the data collection process.
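The cut-offs in the operational definitions above (the Oslo-3 support bands and the MMAS ≥ 2 non-adherence threshold) translate directly into classification helpers. A minimal sketch, with function names of our own choosing:

```python
def oslo3_category(total_score):
    """Classify an Oslo-3 total (range 3-14) using the paper's bands:
    3-8 poor, 9-11 moderate, 12-14 strong social support."""
    if not 3 <= total_score <= 14:
        raise ValueError("Oslo-3 totals range from 3 to 14")
    if total_score <= 8:
        return "poor"
    if total_score <= 11:
        return "moderate"
    return "strong"

def is_nonadherent(mmas4_score):
    """Four-item Morisky scale: a score of 2 or more is classified
    as non-adherence to antipsychotic medication (per the paper)."""
    return mmas4_score >= 2

print(oslo3_category(8), oslo3_category(9), oslo3_category(12))  # poor moderate strong
print(is_nonadherent(1), is_nonadherent(2))  # False True
```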
Medication non-adherence was measured by the four-item Morisky scale, a commonly used, valid, and reliable instrument that assesses patients' forgetfulness about medicine intake, carelessness about taking medications, stopping the medication when feeling better, and stopping the medication when feeling worse.

The Oslo 3-item social support scale (OSSS-3) was used to measure the strength of social support. Scores range from 3 to 14. This scale has been used in several studies, confirming its feasibility and predictive validity concerning psychological distress in Ethiopian settings. A score between 3 and 8 is classified as poor support, between 9 and 11 as moderate support, and between 12 and 14 as strong support. These three items are considered the best predictors of mental health, covering different fields of social support.

Data quality control

Data collectors were trained for two days by the principal investigator on the study instrument, the consent form, how to maintain ethical principles, and the data collection procedure. A pre-test was conducted on a sample of 5% (8 controls and 4 cases) of the total study population at Quiha General Hospital two weeks before data collection, and a common understanding was reached among the data collectors to avoid inter-rater variability. The pre-test questionnaires were not included in the analysis of the main study. Data collection was completed within 30 working days. Regular supervision by the principal investigator was carried out. During data collection, the questionnaire was checked daily for completeness by data collectors, supervisors, and then by the investigator, and incomplete data were discarded. The questionnaire was translated from English into the Tigrigna language by an expert and back-translated into English to ensure consistency. The back-translated version was compared with the original English version to resolve inconsistencies.
Data processing and analysis

Data were coded and entered into Epi-data manager version 4.6.0.0 and exported to SPSS version 22 for analysis. Cross-tabulation was done and descriptive statistics were computed, presented using percentages, text, and tables. A binary logistic regression model was used to test the association between independent and dependent variables. The strength of statistical association was measured by adjusted odds ratios (AOR) with 95% confidence intervals (CI). Variables with p-values ≤ 0.25 in the bivariate analysis were entered into the multivariable logistic regression analysis to adjust for possible confounders, and a p-value less than 0.05 was declared statistically significant. Multicollinearity among the independent variables was checked using variance inflation factors (VIF = 1). Model fitness was checked using the Hosmer-Lemeshow goodness-of-fit test.

Results

Socio-demographic characteristics

While it is not always possible to achieve a 100% response rate, we applied strategies to help increase it: the questionnaire was easy to understand, with clear instructions and questions; we avoided technical jargon or complex language that might confuse participants; we followed up with individuals who had not responded; and we kept the survey as short and burden-free as possible to reduce the time and effort required from participants. As a result, a total of 201 clients taking antipsychotic medications were included in the study, with a 100% response rate. Of these, 134 were controls and 67 were cases. The mean age of the participants was 35.41 years (range 18–75, standard deviation (SD) = 12.385). Seventy-three (54.5%) controls and 53 (79.1%) cases were female. Forty-three (32.0%) controls and 23 (34.3%) cases were 18–27 years old. Fifty-two (38.8%) controls and 41 (61.1%) cases were single.
Eighty-three (61.9%) controls and 47 (70.1%) cases were Orthodox by religion, and one hundred six (79.1%) controls and 51 (76.1%) cases were Tigray by ethnicity. Forty-six (34.3%) controls and 38 (56.7%) cases were unemployed. One hundred twelve (83.5%) controls and 57 (85.0%) cases lived with family. The mean monthly income of participants was 1526.17, with a minimum of 0 and a maximum of 9000 (SD = 1985.137). Regarding educational status, 110 controls and 52 cases had attended elementary school or above; of these, 37 (27.6%) controls and 22 (32.8%) cases had learned up to grade 8. Fifty-two (38.8%) controls and 13 (19.4%) cases had a job, while 46 (34.3%) controls and 38 (56.7%) cases were jobless. Ninety-seven (72.4%) controls and 37 (55.2%) cases were living in urban areas (Table 2).

Bio-psychosocial related factors

Regarding perceived stigma, 67 (50.0%) controls and 46 (68.7%) cases reported feelings of inferiority toward other people, and 69 (51.5%) controls and 44 (65.7%) cases reported feelings of avoidance by other people (Table 3). Regarding medication adherence, 84 (62.7%) controls and 54 (80.6%) cases reported stopping their pills when they felt better, and 85 (63.4%) controls and 37 (55.2%) cases reported forgetting to take their pills (Table 3). This study revealed that 45 (33.6%) controls and 29 (43.3%) cases had poor social support, 31 (23.1%) controls and 8 (11.9%) cases had moderate social support, and 26 (19.4%) controls and 18 (26.9%) cases found it easy to get practical help from friends or family (Table 3).
Clinical-related characteristics

Regarding lifetime substance use, 49 (36.6%) controls and 45 (67.1%) cases had a habit of chewing khat, 60 (44.8%) controls and 44 (65.7%) cases had a habit of drinking alcohol, 23 (17.2%) controls and 16 (23.9%) cases had a habit of smoking cigarettes, and 14 (10.4%) controls and 12 (17.9%) cases had used cannabis. Regarding substance use during the last three months, 28 (20.9%) controls and 23 (34.3%) cases had chewed khat, 30 (22.4%) controls and 34 (50.7%) cases had drunk alcohol, 13 (9.7%) controls and 12 (17.9%) cases had smoked cigarettes, and 11 (8.2%) controls and 5 (7.5%) cases had used cannabis (Table 4).

Regarding comorbid and/or treated physical illness, 36 (26.9%) controls and 37 (55.2%) cases had a comorbid physical illness: 8 (6.0%) controls and 5 (7.5%) cases had diabetes mellitus; 10 (7.5%) controls and 13 (19.4%) cases had hypertension; 8 (6.0%) controls and 10 (15.0%) cases had TB; and 12 (9.0%) controls and 5 (7.5%) cases had asthma comorbid with their mental disorder (Table 4).

Regarding past mental illness history, 50 (37.3%) controls and 51 (76.1%) cases had a mental illness history similar to the current diagnosis. Regarding family mental illness, 84 (62.7%) controls and 58 (86.6%) cases had a family history of mental illness; 19 (14.2%) controls and 21 (31.3%) cases had a family history of major depressive disorder. The median age at onset of illness was 27 years, with a minimum of 10 and a maximum of 77 (25th percentile = 21, 50th percentile = 27, 75th percentile = 37). Regarding duration of illness, 71 (53.0%) controls and 53 (79.1%) cases had been ill for more than 5 years, and regarding duration of treatment, 46 (34.3%) controls and 44 (65.7%) cases had been on treatment for 5 years or more.
Regarding psychotropic medication, 59 (44.0%) controls and 31 (46.3%) cases were taking haloperidol. Fifty (37.3%) controls and 25 (37.3%) cases were not controlled by psychotropic medications. Regarding admission, 81 (60.4%) control respondents and 36 (54.0%) case respondents had no admission history. Regarding the route of antipsychotic medications, 109 (81.3%) controls and 41 (61.2%) cases received them by the oral route (Table 4).

Factors associated with extrapyramidal side effects among patients taking antipsychotic drugs

In the bivariate logistic regression analysis, female sex, being single, stigma, drug adherence, physical illnesses such as hypertension and tuberculosis, occupation (jobless), living area (rural), route of medication, duration of treatment, comorbid mental illness such as major depressive disorder, medication effectiveness, past mental illness history, duration of illness, family history of mental illness, type of antipsychotic drugs (first-generation antipsychotic drugs and combination antipsychotic therapy), khat use, alcohol drinking, tobacco, and cannabis were found to be significant predictors of EPS. After this crude analysis, multivariable logistic regression was performed to predict extrapyramidal side effects among study participants, adjusted for all candidate determinants pooled from the bivariate analysis. Females were 0.140 times as likely as males to have extrapyramidal side effects (AOR = 0.140, 95% CI: 0.042–0.465, p = 0.001), and single participants were 3.084 times more likely to have extrapyramidal side effects than married/divorced participants (AOR = 3.084, 95% CI: 0.569–16.727). On the other hand, educational status, age, and occupation were not significant factors in this study.
This study showed that patients with a past mental illness history were 6.3 times more likely to have extrapyramidal side effects than those without such a history (AOR = 6.316, 95% CI: 2.026–19.692, p = 0.001), and participants without perceived stigma because of mental illness were 0.165 times as likely to have extrapyramidal side effects as participants with perceived stigma (AOR = 0.165, 95% CI: 0.038–0.708, p = 0.015). Participants taking first-generation antipsychotic drugs were 0.095 times as likely to have extrapyramidal side effects as patients taking a combination of first-generation and second-generation antipsychotic medications (AOR = 0.095, 95% CI: 0.010–0.877, p = 0.038). On the other hand, physical illness and comorbid diagnoses, namely major depressive disorder and bipolar disorder comorbid with schizophrenia, were not significant factors in this study. Regarding lifetime substance use, participants with a habit of chewing khat were 4.0 times more likely to have extrapyramidal side effects than participants without such a habit (AOR = 4.033, 95% CI: 1.120–14.531, p = 0.033), and participants with a habit of drinking alcohol were 6.2 times more likely to have extrapyramidal side effects than those without (AOR = 6.213, 95% CI: 1.375–28.079, p = 0.018) (Table 5). Comorbid mental illness includes major depressive disorder with psychotic features. Past mental illness includes GAD, MDD, PTSD, social phobia, SAD comorbid with GAD, OCD, GAD comorbid with MDD, and comorbidity with schizophrenia. Other psychotic disorders include schizophreniform disorder, brief psychotic disorder, and substance-induced psychotic disorder.
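The odds ratios reported in this case-control design come from 2×2 exposure tables; the bivariate screening step (p ≤ 0.25) uses crude odds ratios of this form. The sketch below uses the standard Wald confidence interval on the log odds ratio and is an illustration, not the study's SPSS output; the example counts are hypothetical.

```python
import math

def crude_or(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 case-control table."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)                          # cross-product ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical exposure counts: 30/37 among 67 cases, 28/106 among 134 controls
or_, (lo, hi) = crude_or(30, 37, 28, 106)
```

The adjusted odds ratios (AORs) in Table 5 differ from these crude values because the multivariable model conditions on the other candidate predictors, but the CI construction on the log-odds scale is analogous.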
Discussion

This study identified factors predicting the development of EPSE among patients taking antipsychotic medications attending Ayder Comprehensive Specialized Hospital and Mekelle Hospital, Northern Ethiopia. Being single, female sex, a history of past mental illness, taking first-generation antipsychotic drugs or combination antipsychotic therapy, and khat use and alcohol drinking were among the main predictors associated with the development of EPS.

In this study, marital status showed an association with the development of EPSE: married patients were less likely to develop EPSE than single, divorced, and widowed patients. This could be due to a lack of social support. This finding was in line with other studies done in China and in Gondar and Jimma, Ethiopia.

This study showed that participants taking a combination of first-generation antipsychotic drugs (such as haloperidol with chlorpromazine, fluphenazine decanoate, or thioridazine) were at significantly higher risk of extrapyramidal side effects (EPSE). A similar finding was documented where combinations of antipsychotics with high D2 receptor antagonism, low D2 receptor antagonism, and second-generation antipsychotic drugs were used. Moreover, studies conducted in China and Germany documented findings similar to the present study [23, 40]. This may be attributed to polypharmacy with first-generation antipsychotic medications, particularly the use of multiple high-potency agents at high doses, which increases the risk of extrapyramidal side effects.

Having a history of past mental illnesses such as schizophrenia, major depressive disorder with psychotic features, brief psychotic disorder, and bipolar disorder with psychotic features was not significantly associated with the development of extrapyramidal side effects (EPSE). This finding is consistent with studies conducted in China and in Ethiopia, specifically in Addis Ababa and Jimma [8, 17].
In addition, similar studies have documented that a family history of mental illness is a significant predictor of extrapyramidal side effects (EPSE). However, our findings did not show such an association. This result is consistent with studies conducted in China and Ethiopia, particularly in Addis Ababa and Jimma [8, 17].

Participants who used alcohol and khat were more likely to experience extrapyramidal side effects (EPSE) than those who did not. Possible explanations include the negative impact of these substances on an individual's internal state, which may contribute to increased cognitive disturbances and withdrawal symptoms. Beyond physical effects, substance use is also associated with long-term social and clinical consequences. Individuals who use substances often report more severe extrapyramidal symptoms than abstinent patients and are at greater risk of developing tardive dyskinesia. These substances may increase the risk of EPSE through their interaction with antipsychotic medications. This finding is consistent with other studies conducted in Ethiopia, in which individuals who drank alcohol and used khat were at increased risk of extrapyramidal side effects. The finding implies that alcohol drinking and khat use increase the risk of extrapyramidal side effects. This could be because most participants were aged 18–28 years or older, putting them at risk of substance use, and because follow-up was not regular.

This study also revealed that perceived stigma and attitude toward treatment were other factors identified as predictors of extrapyramidal side effects. This finding was in line with studies done in Ethiopia [2, 3, 42, 48,49,50]. Patients who did not perceive stigma were less likely to develop extrapyramidal side effects (EPSE) than those who perceived stigma.
This finding suggests that perceived stigma may increase the risk of EPSE among patients taking antipsychotic medications. One possible explanation is that individuals who feel stigmatized might be less likely to communicate openly with healthcare providers, leading to poor adherence or unmonitored overuse of medications, such as taking higher doses than prescribed, which may increase the risk of developing EPSE. This explanation is supported by other studies examining the relationship between perceived stigma, medication adherence, and antipsychotic side effects [48,49,50].

In this study, age was not significantly associated with extrapyramidal side effects, but in a study done in Jimma, Ethiopia, patients in the age category ≥ 45 years were more than four times as likely to develop extrapyramidal side effects as those in the age category < 30 years (AOR 4.5, 95% CI: 9.7, 20.4). On the contrary, different results were reported by studies done in America, China, and Ethiopia (Addis Ababa and Gondar) [8, 17, 25]. This might be due to differences in study design, sample size, age group, study area, and participants' ages. In addition to age, occupation (jobless) was not significantly associated with extrapyramidal side effects; this finding was consistent with other studies done in America, China, and the Philippines [14, 17], but contrasted with studies done in Addis Ababa and Jimma, Ethiopia.

Regarding substances, this study revealed that smoking cigarettes was not significantly associated with extrapyramidal side effects, but smoking cigarettes had a statistically significant association with first-generation antipsychotic-induced extrapyramidal side effects in Jimma, Ethiopia, and was also significantly associated in findings from Washington DC (America), Shanghai, Britain, and Addis Ababa, Ethiopia [21, 55].
Comorbid chronic medical illness (such as hypertension, TB, DM, HIV, and cancer) alongside a psychiatric disorder was not significantly associated with extrapyramidal side effects in this study, but was associated in findings from China and America [17, 56]. This difference might be due to the exclusion of participants unable to communicate, who may have had comorbid physical illness, and to the sample size. This study also revealed that homelessness or lack of family support was not a significant factor for extrapyramidal side effects. In contrast, studies done in Nigeria and in Addis Ababa, Mekelle, and Jimma, Ethiopia [49, 50, 55] found social support to be a main predictor of extrapyramidal side effects among patients taking antipsychotic drugs, with those reporting lower levels of social support reporting higher extrapyramidal side effect scores. The study also found that neither duration of illness nor duration since antipsychotics were first prescribed was a risk factor for EPS; that is, a longer duration of illness or of antipsychotic treatment did not significantly increase the risk of extrapyramidal side effects compared with a shorter duration. This contrasts with findings from China, the Philippines, and Addis Ababa and Jimma, Ethiopia [14, 17, 25].

Limitations of the study

Despite providing valuable baseline data, some limitations were encountered. The causal association between antipsychotic medication and side effects was not adequately supported or assessed by laboratory findings. Data on duration of illness were abstracted from patient charts. Variables such as khat chewing and other substance use are by nature sensitive issues, and social desirability bias is unavoidable.
In this study, only adult psychiatry patients were included, so it is difficult to generalize to all psychiatry patients, because those who were unable to communicate, children and adolescent psychiatry patients, and those without the decision-making capacity to consent were not included. Those excluded participants might have a high burden of extrapyramidal side effects.

Conclusion and recommendation

Conclusion

Among patients taking antipsychotic drugs, the modifiable factors associated with the occurrence of EPS include being single, being female, stigma, type of antipsychotic drugs (first-generation antipsychotic drugs and combination antipsychotic therapy), khat use, and alcohol drinking. In the multivariable model, these variables independently increased the risk of EPS. EPS prevention efforts should be targeted toward these risk factors among patients taking antipsychotic drugs.

Recommendation

Based on the study findings, the following recommendations are forwarded.

To psychiatry professionals: Psychiatric professionals should routinely assess patients' risk of extrapyramidal side effects and should record a diagnosis of extrapyramidal side effects when a client has them, so that every professional focuses on their treatment alongside the medication. Designing a treatment guideline, increasing the availability of drugs with minimal side effects, and psycho-education on associated factors (e.g. khat use, alcohol consumption) are essential. Psychiatric professionals are also recommended to assess previous extrapyramidal side effects and comorbid side effects. A combination of first-generation antipsychotic medications is one of the most common predictors of extrapyramidal side effects; professionals should be cautious when prescribing such combinations and should focus on managing and preventing extrapyramidal side effects in each patient.
Educate the families/caregivers of patients with previous extrapyramidal side effects about the management of extrapyramidal side effects so that patients are closely followed up.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Change history

17 September 2025: In the original publication, an additional affiliation has been added to Kenfe Tesfay Berhe. The article has been updated to rectify the error.

Abbreviations

ACSH: Ayder Comprehensive Specialized Hospital
AIMS: Abnormal Involuntary Movement Scale
AOR: Adjusted Odds Ratio
BARS: Barnes Akathisia Rating Scale
CI: Confidence Interval
COR: Crude Odds Ratio
DM: Diabetes Mellitus
EPS: Extrapyramidal Side Effects
FGA: First-Generation Antipsychotic
HIV/AIDS: Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome
HRERC: Health Research Ethics Review Committee
MDD: Major Depressive Disorder
MMAS: Morisky Medication Adherence Scale
OCD: Obsessive-Compulsive Disorder
OSSS-3: Oslo 3-item Social Support Scale
PTSD: Post-Traumatic Stress Disorder
SAD: Social Anxiety Disorder
SAS: Simpson-Angus Scale
SGA: Second-Generation Antipsychotic
SPSS: Statistical Package for Social Science
STI: Sexually Transmitted Infection
TB: Tuberculosis
UK: United Kingdom
UBACC: University of California, San Diego Brief Assessment of Capacity to Consent
WHO: World Health Organization

References

1. Kirgaval RS, Revanakar S, Srirangapattna C. Prevalence of extrapyramidal side effects in patients on antipsychotic drugs at a tertiary care center. J Psychiatry. 2017;20(5):1–5.
2. Abboud R, Noronha C, Diwadkar VA. Motor system dysfunction in the schizophrenia diathesis: neural systems to neurotransmitters. Eur Psychiatry. 2017;44:125–33.
3. Girma S, Abdisa E, Fikadu T.
Prevalence of antipsychotic drug non-adherence and associated factors among patients with schizophrenia attending Amanuel Mental Specialized Hospital, Addis Ababa, Ethiopia: institution-based cross-sectional study. Health Sci J. 2017;11(4):521.
4. Tareke M, Tesfaye S, Amare D, Belete T, Abate A. Antipsychotic medication non-adherence among schizophrenia patients in central Ethiopia. South Afr J Psychiatry. 2018;24(1):123–30.
5. Peluso MJ, Lewis SW, Barnes TR, Jones PB. Extrapyramidal motor side-effects of first- and second-generation antipsychotic drugs. Br J Psychiatry. 2012;200(5):387–92.
6. Mas S, Gassó P, Ritter M, Malagelada C, Bernardo M, Lafuente A. Pharmacogenetic predictor of extrapyramidal symptoms induced by antipsychotics: multilocus interaction in the mTOR pathway. Eur Neuropsychopharmacol. 2015;25(1):51–9.
7. Shirzadi AA, Ghaemi SN. Side effects of atypical antipsychotics: extrapyramidal symptoms and the metabolic syndrome. Harv Rev Psychiatry. 2006;14(3):152–64.
8. Mentzel TQ, Lieverse R, Bloemen O, Viechtbauer W, van Harten PN. High incidence and prevalence of drug-related movement disorders in young patients with psychotic disorders. J Clin Psychopharmacol. 2017;37(2):231–8.
9. Kane JM, Fleischhacker WW, Hansen L, Perlis R, Pikalov A III, Assuncao-Talbott S. Akathisia: an updated review focusing on second-generation antipsychotics. J Clin Psychiatry. 2009;70(5):627.
10. Miller CH, Fleischhacker WW. Managing antipsychotic-induced acute and chronic akathisia. Drug Saf. 2000;22(1):73–81.
11. Thanvi B, Treadwell S. Drug induced parkinsonism: a common cause of parkinsonism in older people. Postgrad Med J. 2009;85(1004):322–6.
12. Ertugrul A, Demir B. Clozapine-induced tardive dyskinesia: a case report.
Prog Neuropsychopharmacol Biol Psychiatry. 2005;29(4):633–5.
13. Haddad PM, Dursun SM. Neurological complications of psychiatric drugs: clinical features and management. Hum Psychopharmacol Clin Exp. 2008;23(S1):S15–26.
14. Go CL, Rosales RL, Caraos RJ, Fernandez HH. The current prevalence and factors associated with tardive dyskinesia among Filipino schizophrenic patients. Parkinsonism Relat Disord. 2009;15(9):655–9.
15. Ye M, Tang W, Liu L, Zhang F, Liu J, Chen Y, et al. Prevalence of tardive dyskinesia in chronic male inpatients with schizophrenia on long-term clozapine versus typical antipsychotics. Int Clin Psychopharmacol. 2014;29(6):318–21.
16. Divac N, Prostran M, Jakovcevski I, Cerovac N. Second-generation antipsychotics and extrapyramidal adverse effects. BioMed Res Int. 2014;2014.
17. Weng J, Zhang Y, Li H, Shen Y, Yu W. Study on risk factors of extrapyramidal symptoms induced by antipsychotics and its correlation with symptoms of schizophrenia. Gen Psychiatry. 2019;32(1).
18. Kanner AM. Management of psychiatric and neurological comorbidities in epilepsy. Nat Rev Neurol. 2016;12(2):106.
19. Li H, Yao C, Shi J, Yang F, Qi S, Wang L, et al. Comparative study of the efficacy and safety between blonanserin and risperidone for the treatment of schizophrenia in Chinese patients: a double-blind, parallel-group multicenter randomized trial. J Psychiatr Res. 2015;69:102–9.
20. Cloud LJ, Zutshi D, Factor SA. Tardive dyskinesia: therapeutic options for an increasingly common disorder. Neurotherapeutics. 2014;11(1):166–76.
21. Goldberg JF, Ernst CL. Managing the side effects of psychotropic medications. American Psychiatric Pub; 2018.
22. Russo EB, Tyler VM.
Handbook of psychotropic herbs: a scientific analysis of herbal remedies for psychiatric conditions. Routledge; 2015.
23. Salem H, Nagpal C, Pigott T, Lucio Teixeira A. Revisiting antipsychotic-induced akathisia: current issues and prospective challenges. Curr Neuropharmacol. 2017;15(5):789–98.
24. López-Sendón J, Mena MA, de Yébenes G. Drug-induced parkinsonism. Exp Opin Drug Saf. 2013;12(4):487–96.
25. Taye H, Awoke T, Ebrahim J. Antipsychotic medication induced movement disorders: the case of Amanuel Specialized Mental Hospital, Addis Ababa, Ethiopia. Am J Psychiatry Neurosci. 2014;2(5):76–82.
26. Abdeta T, Tolessa D, Tsega W. Prevalence and associated factors of tardive dyskinesia among psychiatric patients on first-generation antipsychotics at Jimma University Specialized Hospital, psychiatric clinic, Ethiopia: institution-based cross-sectional study. J Psychiatry Psychiatric Disord. 2019;3(4):179–90.
27. Wubeshet YS, Mohammed OS, Desse TA. Prevalence and management practice of first generation antipsychotics induced side effects among schizophrenic patients at Amanuel Mental Specialized Hospital, central Ethiopia: cross-sectional study. BMC Psychiatry. 2019;19(1):32.
28. Stroup TS, Gray N. Management of common adverse effects of antipsychotic medications. World Psychiatry. 2018;17(3):341–56.
29. Bachmann CJ, Lempp T, Glaeske G, Hoffmann F. Antipsychotic prescription in children and adolescents: an analysis of data from a German statutory health insurance company from 2005 to 2012. Dtsch Arztebl Int. 2014;111(3):25.
30. DeBattista C. Basic & clinical pharmacology. New York: McGraw-Hill; 2018.
31. Morrison P, Meehan T, Stomski NJ.
Australian case managers' views about the impact of antipsychotic medication on mental health consumers. Int J Ment Health Nurs. 2015;24(6):547–53.
32. Parksepp M, Ljubajev Ü, Täht K, Janno S. Prevalence of neuroleptic-induced movement disorders: an 8-year follow-up study in chronic schizophrenia inpatients. Nord J Psychiatry. 2016;70(7):498–502.
33. Holder S, Edmunds A, Morgan S. Psychotic and bipolar disorders: antipsychotic drugs. FP Essent. 2017;455:23–9.
34. Ayano G. First generation antipsychotics: pharmacokinetics, pharmacodynamics, therapeutic effects and side effects: a review. RRJChem. 2016;5(3):53–63.
35. Zhao YJ, Lin L, Teng M, Khoo AL, Soh LB, Furukawa TA, et al. Long-term antipsychotic treatment in schizophrenia: systematic review and network meta-analysis of randomised controlled trials. BJPsych Open. 2016;2(1):59–66.
36. John M. Eisenberg Center for Clinical Decisions and Communications Science. First-generation versus second-generation antipsychotics in adults: comparative effectiveness. Agency for Healthcare Research and Quality (US); 2013. Comparative Effectiveness Review Summary Guides for Clinicians [Internet].
37. Eticha T, Teklu A, Ali D, Solomon G, Alemayehu A. Factors associated with medication adherence among patients with schizophrenia in Mekelle, Northern Ethiopia. PLoS ONE. 2015;10(3):e0120560.
38. Huang Y, Pan L, Teng F, Wang G, Li C, Jin L. A cross-sectional study on the characteristics of tardive dyskinesia in patients with chronic schizophrenia. Shanghai Arch Psychiatry. 2017;29(5):295.
39. De Hert M, Sermon J, Geerts P, Vansteelandt K, Peuskens J, Detraux J. The use of continuous treatment versus placebo or intermittent treatment strategies in stabilized patients with schizophrenia: a systematic review and meta-analysis of randomized controlled trials with first- and second-generation antipsychotics. CNS Drugs.
2015;29(8):637–58. Article PubMed Google Scholar 40. Berna F, Misdrahi D, Boyer L, Aouizerate B, Brunel L, Capdevielle D, et al. Akathisia: prevalence and risk factors in a community-dwelling sample of patients with schizophrenia. Results from the FACE-SZ dataset. Schizophr Res. 2015;169(1–3):255–61. Article PubMed CAS Google Scholar 41. Pazvantoğlu O, Şimşek ÖF, Aydemir Ö, Sarisoy G, Böke Ö, Üçok A. Factor structure of the subjective Well-being under neuroleptic treatment Scale-short form in schizophrenic outpatients: five factors or only one? Nord J Psychiatry. 2014;68(4):259–65. Article PubMed Google Scholar 42. Kassew T, Demilew D, Birhanu A, Wonde M, Liyew B, Shumet S. Attitude towards Antipsychotic Medications in Patients Diagnosed with Schizophrenia: A Cross-Sectional Study at Amanuel Mental Specialized Hospital, Addis Ababa, Ethiopia. Schizophrenia research and treatment. 2019;2019. 43. Tesfay K, Girma E, Negash A, Tesfaye M, Dehning S. Medication non-adherence among adult psychiatric out-patients in Jimma university specialized hospital, Southwest Ethiopia. Ethiop J Health Sci. 2013;23(3):227–36. PubMed PubMed Central Google Scholar 44. Kane JM, Barnes TR, Correll CU, Sachs G, Buckley P, Eudicone J, et al. Evaluation of akathisia in patients with schizophrenia, schizoaffective disorder, or bipolar I disorder: a post hoc analysis of pooled data from short-and long-term Aripiprazole trials. J Psychopharmacol. 2010;24(7):1019–29. Article PubMed CAS Google Scholar 45. Demoz Z, Legesse B, Teklay G, Demeke B, Eyob T, Shewamene Z, et al. Medication adherence and its determinants among psychiatric patients in an Ethiopian referral hospital. Patient Prefer Adherence. 2014;8:1329. PubMed PubMed Central Google Scholar 46. Endale Gurmu A, Abdela E, Allele B, Cheru E, Amogne B. Rate of nonadherence to antipsychotic medications and factors leading to nonadherence among psychiatric patients in Gondar University Hospital, Northwest Ethiopia. Advances in Psychiatry. 
2014;2014. 47. Teferra S, Hanlon C, Beyero T, Jacobsson L, Shibre T. Perspectives on reasons for non-adherence to medication in persons with schizophrenia in ethiopia: a qualitative study of patients, caregivers and health workers. BMC Psychiatry. 2013;13(1):168. Article PubMed PubMed Central Google Scholar 48. Bitter I, Fehér L, Tényi T, Czobor P. Treatment adherence and insight in schizophrenia. Psychiatria Hungarica: Magyar Pszichiatriai Tarsasag Tudomanyos Folyoirata. 2015;30(1):18–26. Google Scholar 49. Lysaker PH, Vohs J, Hillis JD, Kukla M, Popolo R, Salvatore G, et al. Poor insight into schizophrenia: contributing factors, consequences and emerging treatment approaches. Expert Rev Neurother. 2013;13(7):785–93. Article PubMed CAS Google Scholar 50. Tariku M, Demilew D, Fanta T, Mekonnen M, Abebaw Angaw D. Insight and Associated Factors among Patients with Schizophrenia in Mental Specialized Hospital, Ethiopia, 2018. Psychiatry Journal. 2019;2019. 51. Gómez-Arnau. Ramírez J. Pharmacological and non-pharmacological correlates of acute akathisia in first-episode psychosis. 2016. 52. PAC A. Basic and clinical pharmacology. 2015. 53. Sykes DA, Moore H, Stott L, Holliday N, Javitch JA, Lane JR, et al. Extrapyramidal side effects of antipsychotics are linked to their association kinetics at dopamine D 2 receptors. Nat Commun. 2017;8(1):1–11. Article Google Scholar 54. Simpson G. Simpson-Angus extrapyramidal side effects scale [EPS]. In: Rush AJ Jr, First MB, Blacker D (eds) Task force for the handbook of psychiatric measures handbook of psychiatric measures. Washington, DC: American Psychiatric Association; 2000. pp. 163–164. Google Scholar 55. Poyurovsky M. Acute antipsychotic-induced akathisia revisited. Br J Psychiatry. 2010;196(2):89–91. Article PubMed Google Scholar 56. Woerner MG, Kane JM, Lieberman JA, Alvir J, Bergmann KJ, Borenstein M et al. The prevalence of tardive dyskinesia. J Clin Psychopharmacol. 1991;11(1):34–42. 
Acknowledgements
I would like to thank Mekelle University, Ayder Comprehensive Specialized Hospital, the Tigray Region Health Bureau and the respective hospitals, the study participants, and the data collectors for their contributions.
Funding
No specific funding was received to conduct this study, but Aksum University supported the duplication of data collection tools.
Author information
Authors and Affiliations
1. Department of Psychiatry, College of Health Science, Aksum University, Aksum, Ethiopia: Welu Abadi Gebru & Tesfaye Derbie Begashaw
2. Department of Midwifery, College of Health Science, Aksum University, Aksum, Ethiopia: Gebregziabher Kidanemariam Asfaw & Hiwot Gebrewahid Reta
3. Department of Psychiatry, College of Health Science, Mekelle University, Mekelle, Ethiopia: Kenfe Tesfay Berhe & Hagos Tsegabrhan Gebresilassie
4. Research Centre for Public Health, Equity and Human Flourishing, Torrens University Australia, Adelaide, Australia: Kenfe Tesfay Berhe
Contributions
WA conceived and designed the study, analyzed the data, and wrote the manuscript. GK, KT, TD, HG, and HT were involved in data analysis, drafting of the manuscript, and advising on the whole research paper, and also contributed to the interpretation of the data and to manuscript preparation. All authors have read and approved the final version of the manuscript.
Corresponding author
Correspondence to Welu Abadi Gebru.
Ethics declarations
Ethical approval and consent to participate
All procedures performed in the study were in accordance with the ethical standards of the institutional and/or national research committee and with the Helsinki Declaration of 1964. Before the study began, ethical clearance was obtained from the Institutional Health Research Ethics Review Committee (IHRERC) of the College of Health and Medical Sciences of Mekelle University, with reference number IHRERC/1643/2020. The college sent a letter of cooperation to the public hospitals, and written, signed informed consent was obtained from the heads of the institutions before data collection started. Informed, voluntary, written, and signed consent declaring agreement to participate in the study was obtained from all participants; for minor participants, consent was obtained from their parents or guardians. Information from individual participants was kept confidential, their identities were not disclosed, and no information was disseminated without the respondents' permission. A private room was prepared for interviews, and participants who reported extrapyramidal side effects were immediately linked to the psychiatric outpatient department for further evaluation and management. Interviewers were trained to link participants found to be in physically risky conditions and/or in immediate need of counseling to psychologists and psychiatrists. During data collection, COVID-19 prevention measures such as wearing face masks, maintaining physical distance, and using hand sanitizer were practiced, as was standard for health professionals in the health care setting.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access: This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Cite this article
Gebru, W.A., Asfaw, G.K., Berhe, K.T. et al. Predictors of extrapyramidal side effects among patients taking antipsychotic medication at Mekelle psychiatry units, Northern Ethiopia, 2023: unmatched case-control study. BMC Psychiatry 25, 837 (2025).
Keywords: Antipsychotic; Extrapyramidal side effects; Schizophrenia
https://www.quora.com/How-does-the-modulo-operator-work-and-why-does-it-return-negative-numbers-after-a-certain-value
How does the modulo operator (%) work and why does it return negative numbers after a certain value?

John Thoits · Apr 9

You must be using C or a related language. The basic use of the modulo operator is to calculate the remainder in integer division. Integer division always returns an integer result, so the modulo operator can be used to get back the fractional component of the division. There are two modes that can be used, depending on how you want to deal with negative numbers, but whichever mode you are using should affect both division and the modulo operator, so that if a/b = c and a%b = d, then c*b + d = a. For positive numbers, c*b will always be <= a, and so d will always be >= 0. For negative numbers you have a choice between a mode where c*b <= a, so that -3/2 = -2 (and you can see that -2*2 <= -3), or a mode where c*b >= a, so that -3/2 = -1.
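The two quotient conventions John describes can be tried out directly. Here is a small sketch (not from the answer; the helper names are invented) contrasting Python's native floored division with a C-style truncated version emulated by hand:

```python
# Sketch: the two division/remainder conventions.
# Python's // and % floor toward -infinity; C's / and % truncate toward zero.

def div_mod_floored(a, b):
    # Python's native convention: quotient floors, remainder takes b's sign
    return a // b, a % b

def div_mod_truncated(a, b):
    # C-style convention: quotient truncates toward zero,
    # remainder takes a's sign
    q = int(a / b)            # int() drops the fraction, truncating toward zero
    return q, a - q * b

for a, b in [(7, 2), (-3, 2)]:
    qf, rf = div_mod_floored(a, b)
    qt, rt = div_mod_truncated(a, b)
    # both modes satisfy the invariant q*b + r == a
    assert qf * b + rf == a and qt * b + rt == a
    print(a, b, "floored:", (qf, rf), "truncated:", (qt, rt))
```

For -3 and 2 this reproduces both modes from the answer: floored gives quotient -2 with remainder 1, truncated gives quotient -1 with remainder -1.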
In the first mode, the modulo operator will always be >= 0 for negative values of a, because it has to bring c*b "up" to the value of a in the expression c*b + d; but in the second mode the modulo operator will always be <= 0 for negative values of a, because now it's bringing c*b "down" to the value of a. So, if you are running into the case of the modulo operator returning negative values when you reach a "high enough" number, then I think you have run into a second problem: signed integers on your CPU are finite values, and will overflow (or "roll over") into negative values past the maximum representable positive integer. When this happens the CPU will set a register to indicate the condition occurred, and you might be able to configure your compiler to raise an exception when this happens, but almost always, in maybe 99.9% of all software built by compilers, this is ignored and you simply get an incorrect negative value, because the result of your previous math exceeds the upper limit of the representable integer value for the size of your integer, which I would guess is either small, maybe 16 bits, or if it's 32 bits then you must be using multiplication to overflow the storage. So, to answer your question: you are probably incrementing a 16-bit signed integer in a tight loop until it exceeds 32767 and overflows into a value of -32768, which then produces a negative result for your modulo operator, which is being used in the second mode that I described above.

Brian Overland · 7y · Related: How does the modulo operation work with negative numbers and why?
Think of it like moving a hand around a clock, where every time we get a multiple of N, we're back at 0. So, take mod 3 (in C and Python, it's n % 3)… Starting at N=0 and going forward, it's 0,1,2,0,1,2,0,1,2… forever. So:

1 % 3 = 1
2 % 3 = 2
3 % 3 = 0
4 % 3 = 1
5 % 3 = 2

etc. But you can start at ANY multiple of 3, including negative numbers. So:

-6 % 3 = 0
-5 % 3 = 1
-4 % 3 = 2
-3 % 3 = 0
-2 % 3 = 1
-1 % 3 = 2
0 % 3 = 0

Basically, what you do is: find the highest multiple of N equal to or lower than the target number, X. Then ask, how much higher is X than that? So… what is -11 % 5? The lowest multiple of 5 equal to or lower than -11 is -15. Therefore the answer is 4, because -11 is 4 higher than -15. What if N is negative? The answer is that this changes the direction of the "hand on the clock." So, if X % N is M, then X % -N is -(N-M), unless M is 0, in which case the result is 0. For example, 1007 % 13 is 6. Therefore, 1007 % -13 must be -7, because it's the "hand of the clock" moving back in the opposite direction, from 0, -1, -2, … to -12. Let's take another: 11133 % 17 = 15. Therefore, we can predict that 11133 % -17 = -2. Again, it's the "hand of the clock" moving back in the opposite direction: 0, -1, -2. If you want a "rule" for % N, where N is negative, here it is: find the lowest multiple of N that is equal to or higher than the target number, X.
Then, what is the negative amount required to go down to X? It's just the mirror of the rule for positive numbers.
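Brian's clock rule matches Python's own `%` operator, which floors, so his worked examples and his X % -N rule can be checked directly; a quick sketch (the `mirror` helper is invented for the check):

```python
# Python's % is the floored ("clock") modulo described above,
# so the worked examples can be verified as-is.
assert -11 % 5 == 4       # lowest multiple of 5 <= -11 is -15; -11 is 4 above it
assert 1007 % 13 == 6
assert 1007 % -13 == -7   # hand moves the other way: -(13 - 6)
assert 11133 % 17 == 15
assert 11133 % -17 == -2  # -(17 - 15)

# The general rule: if M = X % N and M != 0, then X % -N == -(N - M)
def mirror(x, n):
    m = x % n
    return -(n - m) if m else 0

# spot-check the rule across a range of values
assert all(x % -n == mirror(x, n) for x in range(-50, 50) for n in range(1, 12))
```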
Peter Vanroose, Ph.D. in Mathematics, KU Leuven · 7y · Related: How does the modulo operation work with negative numbers and why?
Before answering the question, first a little bit of background. Modulo operations essentially work on number "classes" instead of on numbers. A number class is a group of numbers such that (1) every number falls in exactly one class, and (2) there are only a finite number of classes. Every class can be represented by any element of the class, at choice. Since a number cannot belong to two classes, this representation is unambiguous. So although modulo operations work on classes, they are written out as if they work on individual numbers. In your mind, however, when dealing with modulo operations, you should always replace a number with its class! For example, the "numbers modulo 7" actually consist of the 7 classes of integer numbers such that two numbers belong to the same class if their remainder (after division by 7) is the same. Those 7 classes thus are:

{ 0, 7, 14, 21, 28, …, -7, -14, -21, … } => represented by e.g. 0
{ 1, 8, 15, 22, 29, …, -6, -13, -20, … } => represented by e.g. 1
{ 2, 9, 16, 23, 30, …, -5, -12, -19, … } => represented by e.g. 2
{ 3, 10, 17, 24, 31, …, -4, -11, -18, … } => represented by e.g. 3
{ 4, 11, 18, 25, 32, …, -3, -10, -17, … } => represented by e.g. 4
{ 5, 12, 19, 26, 33, …, -2, -9, -16, … } => represented by e.g. 5
{ 6, 13, 20, 27, 34, …, -1, -8, -15, -22, -29, -36, … } => represented by e.g.
6.

Now for the question: modulo operations on numbers, by definition, work identically on any numbers of the same class. Thus, for example, the operation * (multiplication) modulo 7, on the numbers -3 and 12, works identically as on the numbers 4 and 5, since -3 and 4 are in the same class, and so are 5 and 12. So what is -3 * 12 modulo 7? Formally, it's the class containing the integer product of any member of the class of -3 with any member of the class of 12. So it's the class of e.g. 4*5 = 20, which is the class {6, 13, 20, 27, 34, …, -1, -8, -15, …}, which can be represented by 6. But it's (luckily) also the class of -3 * 12 = -36.

Final remark: "modulo operations", when formulated on individual numbers and not on classes, are only well-defined if their result falls in the same class, irrespective of which member of the class was taken to perform the operation. Luckily, multiplication, addition and subtraction satisfy this condition. Exponentiation does not (at least, not for the exponent). Division only does when the base for the modulo (class) definition is a prime number.

Phil Scovis, I play guitar. And sing in the car. · 3y · Related: How do I get the modulo of a very large number efficiently?

I'm going to sidestep your question slightly, because I suspect that you are doing a code challenge, and a more helpful answer would be to the question: how do I avoid getting a very large number in the first place, when the result will only be used in a modulo operation? For example, consider the problem: "Find 1000! mod 1009". A straightforward approach is to first multiply the numbers from 1 to 1000, and then to ask Quora what the hell to do when an overflow occurs. Yet, you know the final answer is less than 1009. There should be a better way. There is a better way.
The trick is to apply the modulo operation at every step. This takes advantage of the fact that (a*b) % c == ((a%c) * (b%c)) % c. So, your factorial calculation looks like this:

    int result = 1;
    for (int i = 1; i <= 1000; i++) {
        result = (result * i) % 1009;
    }
    return result;

At no point do you deal with anything that doesn't fit comfortably into a 32-bit integer. No need to be doing modulo on any very large numbers. Now, I don't know what kind of problem you're working on; probably something involving either cryptography or combinatorics. But in any case, it will probably come down to addition, multiplication, division, and powers. I've covered multiplication above. Addition is similar: find the modulo after every addition step instead of waiting to the end. For exponentiation, I highly recommend adding the "exponentiation-by-squaring" algorithm to your code arsenal, modifying it to do modulo operations as you go. This modification goes by the name "modular exponentiation" or "PowMod", so you can look that up too. Division is a little tricky; you need to Google the algorithm for "modular inverse", and go from there. Whether or not you're doing a code challenge, good luck on your application of modulo to very large numbers. I hope I was some help. For homework: prove that (a*b) % c == ((a%c) * (b%c)) % c.
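Phil's C loop transcribed to Python, with an invented function name; note that Python's built-in `pow(base, exp, mod)` already provides the modular exponentiation he mentions:

```python
# Sketch of the reduce-at-every-step approach: intermediate values
# never exceed m * n, so nothing large is ever computed.
def factorial_mod(n, m):
    result = 1
    for i in range(1, n + 1):
        result = (result * i) % m
    return result

# The identity that justifies reducing early (the "homework" fact):
a, b, c = 123456789, 987654321, 1009
assert (a * b) % c == ((a % c) * (b % c)) % c

# Modular exponentiation ("PowMod") is built in:
assert pow(7, 1000, 1009) == 7 ** 1000 % 1009

print(factorial_mod(1000, 1009))
```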
Rob Gofekyetsev, University of California, Los Angeles · 9y · Related: How does the modulo operation work with negative numbers and why?

Remember what the modulo operator means: a mod b = r, where a = b*x + r and r < b. Usually we have the convention that r >= 0 as well, though some programming languages like R allow the mod to be negative. The idea is that you get as close as possible but no greater than the target dividend, and then use the remainder to go the rest of the way. The mod is just the remainder. So why does it work that way with negatives? Well, it's very similar to the case for positive numbers. For example, 10 mod 3 = 1 since 10 = 3*3 + 1. Similarly, -10 mod 3 = 2 since -10 = 3*(-4) + 2. This idea is independent of whether the dividend is negative or positive. Exercise for you to ponder: if a mod b = c, what is (-a) mod b? Hint: draw a number line and use the fact that b > c.

Dan L. Oom, Former Ex-Pert (1992–present) · Apr 14 · Related: Can you explain how to use an if statement with the remainder (modulo) operator (%)?

In C we would write things like:

    printf( x % 2 ? "You win!\n" : "You lose!\n" );

Newbies may find C idiom hard to get used to.

Catherine Celice, former Developmental Math and Statistics Lecturer at Wayne State University (1997–2008) · 3y · Related: How does a modulo operator work? If modulo produces a remainder of a division, then why does 2 % 3 == 2 and not 0.66, since 2 / 3 == 0.66?

Modulo only deals with the set of integers (whole numbers and their opposites); 0.66 is not an integer. When we talk about remainders we are talking about the whole-number part left over when we divide; we don't use fractions (or decimals). Many things really cannot be divided into anything but integers. For example, if you have 103 children and 5 buses, how many children will appear on each bus? 103 divided by 5 = 20 with a remainder of 3, not 20.66. It doesn't make sense to have 20.66 children on a bus, but it does make sense to have 20 children on each bus with 3 children left over.

Sanford Roman, Physicist, engineer, experimentalist · 7y · Related: How do I find the modulo of a negative number?
mod(a,b) will give the remainder of a/b. You can think of this as subtracting b from a enough times to get into the range [0, b-1]. If b > 0, then the remainder will be in the range [0, b-1], meaning positive or zero. If b < 0, the remainder will be in the range [b+1, 0], meaning negative or zero. If a is negative and b is positive, you ADD b to a enough times to reach zero or a positive number, i.e. in the range [0, b-1]. If a is negative and b is negative, you subtract b from a enough times to reach a number in the range [b+1, 0]. So mod(21, 4) gives 1. And mod(-21, 4) gives +3 (from -21 -> -17 -> -13 -> -9 -> -5 -> -1 -> +3). And mod(-21, -4) gives -1 (from -21 -> -17 -> -13 -> -9 -> -5 -> -1). At least this is how Excel and Python handle it. Other implementations could be different.

Michael Veksler, 30+ years programming · updated 6y · Related: Why is the modulo operator (%) used in hashing? What characteristics make it "ideal" in calculating the location of values in a hash table?

Usually the modulo operator is used as the last step in selecting the bucket. The modulo operator is far from ideal, but it is good enough.
It guarantees that the resulting hash bucket is in range: the result of key % num_buckets is always in the range 0..(num_buckets-1). If the hash function is good, then the modulo operator gives a reasonable distribution of values in the range. The modulo operator is bad when: (1) the hash function is weak, in which case the modulo may make it worse. For example, gcc 7.1 had std::hash(x) == x for integers, so if there are 311 buckets, then 0, 311, 622, 933, … will all enter bucket 0. (2) Unless the bucket count is a power of 2, modulo distorts the probabilities. For example, if the hash function produces 32-bit values and there are 2^31+1 buckets, then bucket 0 is twice as likely to get elements as any other bucket. It's much better to have the hash function generate values in the correct range without using modulo, but that is too complicated most of the time.

EDIT: Extending on Cameron's comment, '%' in many languages is remainder and not simply modulo, which shows that the question has a false premise. The distinction between modulo and remainder makes no difference for non-negative integers, such as for unsigned types, but if the left-hand side of '%' is negative, then the remainder may also be negative. This is so in Java, JavaScript, C, and C++. However, Python and Perl are immune to this issue, since for them % is modulo.

Ellis Cave, 48 U.S. Patents · Apr 9 · Related: Can you explain how to use an if statement with the remainder (modulo) operator (%)?
Using the J programming language. What integers from 1 to 20 have a remainder of 1 when dividing by 5:

(#~1=5&|) >:i.20
1 6 11 16

How it works:

>:i.20  NB. Generates a list of integers from 1 to 20
5&|     NB. | is J's modulo verb, so this takes modulo 5 on each of the integers 1 to 20.
1=5&|   NB. Creates a binary mark vector (ones & zeros) marking all integers in the list with a remainder of 1.
#~      NB. Extracts & lists all integers from the original list if they were marked in the mark vector generated in step 3. The #~ is the if statement.

<<<>>>

Try a different divisor & remainder. What integers from 1 to 20 have a remainder of 2 when dividing by 3:

(#~2=3&|)i.20
2 5 8 11 14 17

Francis King · Principal engineer and programmer · Author has 1.5K answers and 732K answer views · 1y

Related: Why does the mod() function return 0 instead of throwing an exception or returning a negative remainder when one operand is zero?
"Why does the mod() function return 0 instead of throwing an exception or returning a negative remainder when one operand is zero?"

[francis@francis-endeavour ~]$ python3
Python 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 % 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer modulo by zero

So, no, not in general. Which language are you talking about? There are literally thousands of them. If your mod() returns zero rather than an error, that's because the person who wrote the code decided to do it that way. They didn't have to.

Jack Brennen · Works at Google (company) · Author has 2.8K answers and 20.1M answer views · 8y

Related: What is -1% of 256 where % is a modulo operator?

The way that assignment of integers to an unsigned integer type works in the C language is actually fairly straightforward and simple, and better yet, it's predictable.
If you assign an integer M into a variable of an unsigned integer type, and if the value of M cannot be represented in that integer type, then it will pick the unique value N that can be represented in the target unsigned integer type, such that the difference between N and M is an exact multiple of the number of values the target type can hold.

In this case, the target type (unsigned char) can hold 256 values. So when you assign any integer outside of [0…255] into an unsigned char, it picks the unique value within that range such that the difference between your number M and the assigned number N is divisible by 256. In this case, M is -1, and the only number N in the range [0…255] such that N-M is an exact multiple of 256 is the number N=255. When N=255, the difference N-M is 256, and so that's the value it uses.

Note that the above is a purely mathematical way to describe the behavior. Another way to describe it is to represent the number M in two's complement binary notation, and then truncate all of the high-order bits that don't fit in the target type. However, if you don't know what two's complement binary notation is, that doesn't help much.

Also note that the way that it's described above preserves the behavior of addition, subtraction, and multiplication. So if you have two integers A and B, without regard to any particular range, then these are all true:

(unsigned char)(A+B) is equal to (unsigned char)(A) + (unsigned char)(B)
(unsigned char)(A-B) is equal to (unsigned char)(A) - (unsigned char)(B)
(unsigned char)(A*B) is equal to (unsigned char)(A) * (unsigned char)(B)

Division (and modulo) behavior is not preserved.

Hilmar Zonneveld · Translator (1985–present) · Author has 58.5K answers and 19.3M answer views · 3y

Related: How does a modulo operator work? If modulo produces a remainder of a division, then why does 2 % 3 == 2 and not 66 since 2 / 3 == 0.66?

2 % 3 is the remainder of the division, not the quotient.
If you divide 2 by 3 (integer division), you get a result of zero, and the remainder is two.

Here is another example, with larger numbers… perhaps that will make it clearer. If you divide 15 by 2, you get a result of seven, and a remainder of one. Therefore (using Python):

print(15 // 2)  # Result: 7; the quotient
print(15 % 2)   # Result: 1; the remainder

Or you can get both results at once:

print(divmod(15, 2))  # Result: (7, 1)
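The floored-modulo behavior described in the answers above (Python, Excel) versus C-style truncated remainder can be checked directly in Python; `math.fmod` gives the truncated variant:

```python
import math

# Floored modulo (Python's %): the result takes the sign of the divisor,
# matching the [0, b-1] and [b+1, 0] ranges described above.
for a, b in [(21, 4), (-21, 4), (-21, -4), (21, -4)]:
    print(f"{a} % {b} = {a % b}")

# C-style truncated remainder takes the sign of the dividend instead;
# math.fmod exposes that behavior in Python.
print(math.fmod(-21, 4))   # -1.0, like C's -21 % 4
```

This is why the "mod(-21, 4)" example above gives +3 in Python/Excel but -1 in C, Java, and JavaScript.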
190673
http://homepage.physics.uiowa.edu/~rmerlino/6Fall09/29006_FirstLaw_HeatEngines.pdf
29:006 Problems on the 1st Law of thermodynamics and heat engines

Principles

1) The 1st Law of thermodynamics: The change in the internal energy of a system is equal to the heat that it absorbs (Qin) minus the work that it does (Wout):

change in internal energy = Qin – Wout

2) Heat engines:

(a) Heat engines operate in a cycle in which the system is always returned to its original condition; therefore the change in internal energy for the cycle is ZERO. The engine absorbs an amount of heat Qin, performs an amount of work Wout, and discards an amount of heat Qout. The energy balance for this engine is then:

Qin = Wout + Qout

(b) Engine efficiency:

efficiency = Wout / Qin

This can be expressed either as a fraction or a percentage.

[Figure: schematic of a heat engine, with heat Qin entering, work Wout extracted, and heat Qout discarded.]

Heat, work, and internal energy are typically expressed either in Joules (J), calories (cal), or British Thermal Units (BTU). (Answers are on the next page, but try to do the problems first!)

1. 1000 BTUs of heat are absorbed by a gas while the gas expands and performs 700 BTUs of work. What is the change in the internal energy of the gas in this process?

2. An expanding gas that is in contact with a heat source performs 3000 BTU of work while its internal energy decreases by 1000 BTU. How much heat did the gas absorb from the source?

3. 4000 J of work is done on a gas while it is being compressed. If the internal energy of the gas increased by 2500 J, how much heat flowed into or out of the gas in this process?

4. An engine operating in a cycle absorbs 5000 cal of heat from a heat source and performs 2000 cal of work. How much heat was discarded in this cycle and what is the efficiency of this engine?

5. An engine operating in a cycle absorbs 15,000 BTU of heat and discards 10,000 BTU to a cold reservoir. How much work is done by this engine and what is its efficiency?

SOLUTIONS

Some useful hints: In applying the first law you must keep track of the signs of the quantities involved.
• If the change in internal energy (IE) is positive, then the IE increased; if the change in IE is negative, then IE decreased.
• If heat enters (is absorbed by) a system, then Qin is positive; if heat leaves the system, then Qin is negative.
• If the system does work, then Wout is positive; if work is done on the system, then Wout is negative.

1. Change in IE = Qin – Wout = 1000 BTU – 700 BTU = 300 BTU → IE increases by 300 BTU

2. Change in IE = Qin – Wout → –1000 BTU = Qin – 3000 BTU → Qin = –1000 BTU + 3000 BTU = +2000 BTU → the gas absorbed 2000 BTU

3. Here, Wout = –4000 J because work is done ON the system.
Change in IE = Qin – Wout → +2500 J = Qin – (–4000 J) = Qin + 4000 J → Qin = 2500 J – 4000 J = –1500 J → 1500 J of heat flowed OUT OF the gas

4. Qin = Wout + Qout → 5000 cal = 2000 cal + Qout → Qout = 3000 cal
efficiency = Wout / Qin = 2000 cal / 5000 cal = 0.4, or 40%

5. Qin = Wout + Qout → 15,000 BTU = Wout + 10,000 BTU → Wout = 5000 BTU
efficiency = Wout / Qin = 5000 BTU / 15,000 BTU = 0.33, or 33%
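The sign conventions and formulas used in these solutions can be encoded in a few lines; a minimal Python sketch (the function names are ours, not from the worksheet):

```python
def delta_internal_energy(q_in, w_out):
    """First law: change in internal energy = Qin - Wout.
    q_in > 0 means heat absorbed; w_out > 0 means work done BY the system."""
    return q_in - w_out

def engine_efficiency(w_out, q_in):
    """Efficiency of a cyclic heat engine: Wout / Qin."""
    return w_out / q_in

# Problem 1: absorbs 1000 BTU, does 700 BTU of work -> IE rises by 300 BTU
assert delta_internal_energy(1000, 700) == 300
# Problem 3: 4000 J of work done ON the gas means w_out = -4000 J,
# and the solution found Qin = -1500 J (heat flowed out)
assert delta_internal_energy(-1500, -4000) == 2500
# Problem 4: 2000 cal of work from 5000 cal absorbed -> 40% efficient
assert engine_efficiency(2000, 5000) == 0.4
```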
190674
https://www.freemathhelp.com/forum/threads/sin-cos-periodic-2-pi-im-getting-confused-about-how-sin-x-sinx-x-2-pi-why-is-that.115175/
Sin/Cos periodic 2pi: I'm getting confused about how sin(x)=sinx(x + (2pi)) why is that?

Thread starter: Ryan$

Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #1

Hi guys, I'm studying sin/cos functions and I'm getting confused about how sin(x)=sinx(x + (2pi)) why is that? I know that function sin is periodic at ever 2pi, but what does that mean?! however the argument of sin is not the same (x != x+(2pi)) we are getting the same function/answer? may please anyone illustrate for me what does "periodic" mean in math? for instance I can assume that a specific function is periodic at every 3pi so I can say f(x)=f(x+3pi)? if so .. why?! how two functions having different argument are same function?!!! the point behind the term "periodic" isn't understandable at all for me.. any help please?! thanks.

HallsofIvy · Elite Member · Joined: Jan 27, 2012 · Messages: 7,763 · #2

You say "I know that function sin is periodic at ever 2pi". That is exactly what "periodic" means: sin(x + 2pi) = sin(x). The function value repeats "periodically", at regular intervals.

You also say "however the argument of sin is not the same(x != x+(2pi)) we are getting the same function/answer?". The fact that x ≠ y does NOT necessarily mean that f(x) cannot be equal to f(y). Look at the very simple, constant function "f(x) = 3 for all x". That is a perfectly valid, though very simple, function. Look at y = x^2. f(-3) = 9 and f(3) = 9. The fact that the two "x" values are different doesn't mean the function values must be different.

Finally, exactly what definition of "sine" and "cosine" were you given?
The definitions in terms of right triangles won't work since an angle in a right triangle must be positive and can't be larger than pi/2 radians, so "x + 2pi" wouldn't even make sense. Most common is the "circle definition" (and some texts use the phrase "circular functions" rather than "trigonometric functions"): Draw the unit circle on a coordinate system, center at the origin, radius 1. Starting from the point (1, 0), measure a distance "t" around the circumference of the circle. The (x, y) coordinates of the end point give sine and cosine: cos(t) is the x coordinate, sin(t) is the y coordinate.

For example, since the circle has radius 1, it has circumference 2pi(1) = 2pi. pi/2 is 1/4 of that. If we start at (1, 0) and measure distance pi/2 around the circumference, we go 1/4 of the way, ending at (0, 1). So cos(pi/2) = 0, sin(pi/2) = 1. If instead we measure distance pi, we go half way around the circle, from (1, 0) to (-1, 0). So cos(pi) = -1, sin(pi) = 0.

A little harder is sin(pi/4) and cos(pi/4). Since that is half of pi/2, we wind up half way between (1, 0) and (0, 1). By symmetry we are on the line y = x. The equation of the unit circle is x^2 + y^2 = 1. Since y = x, we have x^2 + x^2 = 2x^2 = 1, so x^2 = 1/2 and x = sqrt(1/2) = sqrt(2)/2 (we are still in the first quadrant so x and y are both positive). That is, cos(pi/4) = sin(pi/4) = sqrt(2)/2.

But, again, the entire circle has circumference 2pi. If I measure a distance around the circle x + 2pi, I go from (1, 0) to (cos(x), sin(x)) and then on another 2pi. I have measured a distance x + 2pi, so "by definition" I must end at the point (cos(x + 2pi), sin(x + 2pi)). But that last 2pi takes me exactly once around the circle, so I come right back to (cos(x), sin(x)). So cos(x + 2pi) = cos(x), sin(x + 2pi) = sin(x).

Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #3

thanks alot!!!!
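The special values derived in the post above can be spot-checked numerically with the standard math module (a quick sketch, not part of the thread):

```python
import math

# Points from the circle definition: walking arc length t from (1, 0)
# lands at (cos(t), sin(t)).
assert math.isclose(math.cos(math.pi / 2), 0.0, abs_tol=1e-12)
assert math.isclose(math.sin(math.pi / 2), 1.0)
assert math.isclose(math.cos(math.pi), -1.0)
assert math.isclose(math.sin(math.pi), 0.0, abs_tol=1e-12)

# Halfway between (1, 0) and (0, 1): on the line y = x,
# so both coordinates equal sqrt(2)/2.
assert math.isclose(math.cos(math.pi / 4), math.sqrt(2) / 2)
assert math.isclose(math.sin(math.pi / 4), math.sqrt(2) / 2)
```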
Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #4

but once again, isn't necessary for two function like sinx and sin(x+180) to have same argument for saying that two functons are the same function?! really weird

Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #5

guys I really need help, why sinx(x)=sin(x+2pi) ? argument aren't the same so how at specific x we get the same value of two functions?!

topsquark · Senior Member · Joined: Aug 27, 2012 · Messages: 2,370 · #6

HallsofIvy had a good post. The way the function sin(x) is defined makes it periodic. If you go around the unit circle once (an angle of 2π radians) then you get the same value back. If you go around twice (4π radians) you get the same value back again. Note how the graph below repeats itself every multiple of 2π radians. -Dan

Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #7

So we can have the same function with two different argument ? like sinx=sinx(x+180) and that's why called periodic? I mean periodic implicitly mean we could have different argument with the same function's value?! thanks

Otis · Elite Member · Joined: Apr 22, 2015 · Messages: 4,592 · #8

Hello Ryan$. It looks like you're sometimes mistyping the function's name sin as sinx. Also, I think it's best to use function notation, when typing functions. I've made some other comments below, too.

Ryan$ said: … sin(x)=sinx(x + (2pi)) …

We don't really need grouping symbols around 2∙pi, but I like that you used function notation on each side. However, typing sinx(x + 2∙pi) could be interpreted as either sin(x[x + 2∙pi]) or sin(x)∙(x + 2∙pi), while what you mean is:

sin(x) = sin(x + 2∙pi)

Ryan$ said: … sinx and sin(x+180) …

This time, you used function notation only on the right-hand side.
Also, it's good form to type a degree symbol next to values measured in degrees, especially in a thread containing both degree measures and radian measures. What you mean is:

sin(x) and sin(x + 180º)

Ryan$ said: … sinx(x)=sin(x+2pi) …

Good, you used function notation throughout (and no extra grouping symbols around 2∙pi), but you mean the name sin, not sinx:

sin(x) = sin(x + 2∙pi)

Ryan$ said: … sinx=sinx(x+180) …

Hopefully, you now understand three things to be fixed here. You mean:

sin(x) = sin(x + 180º)

HallsofIvy · Elite Member · Joined: Jan 27, 2012 · Messages: 7,763 · #9

You seem to be completely misunderstanding what a "function" is! You say "isn't [it] necessary for two function like sinx and sin(x+180) to have same argument for saying that two functions are the same function?!" No, the function here is "sine" (abbreviated "sin"). A function, f, takes one number as its argument, x (the "argument" of the function), and returns another number, f(x) (the "value" of the function at that argument). f(x) and f(x + 2π) are the same function, f, evaluated at two different arguments. Whether those values are the same or not depends on exactly what the function, f, is.

Otis · Elite Member · Joined: Apr 22, 2015 · Messages: 4,592 · #10

Ryan$ said: … we can have the same function [output] with two different [arguments]? like sin(x)=sin(x+180º) …

Yes, and we don't need periodicity for that. Many functions output the same value for different inputs. Here's an example:

f(x) = x^4 + 2x^3 - 13x^2 - 14x + 24

f(-4) = f(-2) = f(1) = f(3)

Ryan$ said: … and that's why [the sine function is] called periodic? …

No, the reason why periodic functions are called periodic involves more than just outputting the same value for different inputs. My example function f above outputs the same value for multiple inputs, but f is not periodic.
Graphically speaking, a function is periodic when its curve over each period (interval) is the same as all others. In other words, a periodic function has exactly the same behavior within each of its periods. Here are some links to free, online trigonometry textbooks and lecture notes.

Ryan$ · Full Member · Joined: Jan 25, 2019 · Messages: 353 · #11

So to draw sinx is the same as sin(x+2pi)?!

HallsofIvy · Elite Member · Joined: Jan 27, 2012 · Messages: 7,763 · #12

Yes, the graph of y = sin(x) is identical to the graph of y = sin(x + 2π).

Otis · Elite Member · Joined: Apr 22, 2015 · Messages: 4,592 · #13

Ryan$ said: So to draw sinx is the same as sin(x+2pi)?!

Yes. The graph of sin(x+2∙pi) is the graph of sin(x) shifted 2∙pi units to the left. The period of the sine function is 2∙pi, so the behavior of sin(x+2∙pi) within [-2∙pi, 0] is exactly the same as the behavior of sin(x) in [0, 2∙pi]. If we plotted both functions on the same graph from -2∙pi to 2∙pi, we would see only one curve because the two graphs are identical (i.e., they match up perfectly).

Symbol x represents an angle. We also use symbol θ, instead of x. The animation below shows why the graphs of sine and cosine repeat their behavior every 2∙pi units (i.e., every revolution around the unit circle), using these definitions:

cos(θ) = x-coordinate of point where the terminal ray of angle θ intersects unit circle
sin(θ) = y-coordinate of point where the terminal ray of angle θ intersects unit circle

[Animation: unit-circle construction of cos and sin. LucasVB (public domain), via Wikimedia Commons]
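The identity sin(x) = sin(x + 2·pi) discussed in the thread can be checked numerically (a small sketch; floating-point tolerance is needed since 2·pi is inexact in binary):

```python
import math

# sin and cos repeat every 2*pi: check over a spread of sample angles.
for k in range(-10, 11):
    x = 0.37 * k
    assert math.isclose(math.sin(x), math.sin(x + 2 * math.pi), abs_tol=1e-12)
    assert math.isclose(math.cos(x), math.cos(x + 2 * math.pi), abs_tol=1e-12)
```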
190675
https://arxiv.org/pdf/1505.07416
Combinatorial Game Complexity: An Introduction with Poset Games

Stephen A. Fenner∗ John Rogers†

Abstract

Poset games have been the object of mathematical study for over a century, but little has been written on the computational complexity of determining important properties of these games. In this introduction we develop the fundamentals of combinatorial game theory and focus for the most part on poset games, of which Nim is perhaps the best-known example. We present the complexity results known to date, some discovered very recently.

1 Introduction

Combinatorial games have long been studied (see [5, 1], for example) but the record of results on the complexity of questions arising from these games is rather spotty. Our goal in this introduction is to present several results, some old, some new, addressing the complexity of the fundamental problem given an instance of a combinatorial game:

Determine which player has a winning strategy.

A secondary, related problem is

Find a winning strategy for one or the other player, or just find a winning first move, if there is one.

The former is a decision problem and the latter a search problem. In some cases, the search problem clearly reduces to the decision problem, i.e., having a solution for the decision problem provides a solution to the search problem. In other cases this is not at all clear, and it may depend on the class of games you are allowed to query.

∗University of South Carolina, Computer Science and Engineering Department. Technical report number CSE-TR-2015-001.
†DePaul University, School of Computing

arXiv:1505.07416v2 [cs.CC] 24 Jun 2015

We give formal definitions below, but to give an idea of the subject matter, we will discuss here the large class of games known as the poset games. One of the best known of these is Nim, an ancient game, but given its name by Charles Bouton in 1901. There are many others, among them Hackendot, Divisors, and Chomp.
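Bouton's analysis of Nim is easy to state even before the formal definitions: a Nim position is a first-player win (an ∃-game, in the terminology used below) exactly when the bitwise XOR of its stack sizes is nonzero. A quick Python sketch of this standard result (the function name is ours, not the paper's):

```python
from functools import reduce

def nim_is_exists_game(stacks):
    """Bouton's theorem: the Nim position C_{n1} + ... + C_{nk} is an
    ∃-game (first-player win) iff the XOR of the stack sizes is nonzero."""
    return reduce(lambda a, b: a ^ b, stacks, 0) != 0

assert nim_is_exists_game([1])          # single stack: take it all
assert not nim_is_exists_game([2, 2])   # symmetric position: second-player win
assert nim_is_exists_game([1, 2, 4])    # 1 ^ 2 ^ 4 = 7, nonzero
```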
Poset games not only provide good examples to illustrate general combinatorial game concepts, but they also are the subject of a flurry of recent results in game complexity, which is the primary focus of this article. The rest of this section gives some basic techniques for analyzing poset games. Section 2 lays out the foundations of the general theory of combinatorial games, including numeric and impartial games, using poset games as examples. The rest of the paper is devoted to computational complexity. Section 3 gives an upper bound on the complexity of so-called "N-free" games, showing that they are solvable in polynomial time. Section 4 gives lower bounds on the complexity of some games, showing they are hard for various complexity classes. The section culminates in two recent PSPACE-completeness results: one for impartial poset games, and the other for "black-white" poset games. Section 5 discusses some open problems.

1.1 Poset games

Definition 1.1. A partial order on a set P (hereafter called a poset) is a binary relation ≤ on P that is reflexive, transitive, and antisymmetric (i.e., x ≤ y and y ≤ x imply x = y). For any x ∈ P, define Px := {y ∈ P | x ≰ y}.

We identify a finite poset P with the corresponding poset game: Starting with P, two players (Alice and Bob, say) alternate moves, Alice moving first, where a move consists of choosing any point x in the remaining poset and removing all y such that x ≤ y, leaving Px remaining. Such a move we call playing x. The first player unable to move (because the poset is empty) loses.¹

Poset games are impartial, which means that, at any point in the play, the set of legal moves is the same for either player. There is a rich theory of impartial games, and we cover it in Section 2.5. In an impartial game, the only meaningful distinction between players is who plays first (and we have named her Alice). Since every play of a poset game has only finitely many moves, one of the two players (but clearly not both!)
must have a winning strategy. We say that a poset P is an ∃-game (or winning position) if the first player has a winning strategy, and P is a ∀-game (or losing position) if the second player has a winning strategy. In the combinatorial game theory literature, these are often called N-games ("Next player win") and P-games ("Previous player win"), respectively.

¹Games can be played on some infinite posets as well, provided every possible sequence of moves is finite. This is true if and only if the poset is a well-quasi-order (see, e.g., Kruskal).

We get the following concise inductive definition for any poset P:

P is an ∃-game iff there exists x ∈ P such that Px is a ∀-game.
P is a ∀-game iff P is not an ∃-game (iff, for all x ∈ P, Px is an ∃-game).

We call the distinction of a game being a ∀-game versus an ∃-game the outcome of the game.

There are at least two natural ways of combining two posets to produce a third.

Definition 1.2. For posets P = 〈P, ≤P〉 and Q = 〈Q, ≤Q〉,

• define P + Q (the parallel union of P and Q) to be the disjoint union of P and Q, where all points in P are incomparable with all points in Q: P + Q := 〈P ∪̇ Q, ≤〉, where ≤ := ≤P ∪̇ ≤Q.

• Define P/Q (or PQ, the series union of P over Q) to be the disjoint union of P and Q where all points in P lie above (i.e., are ≥ to) all points in Q: PQ := 〈P ∪̇ Q, ≤〉, where ≤ := ≤P ∪̇ ≤Q ∪̇ (Q × P).

Note that + is commutative and associative, and that / is associative but not commutative. Using these two operations, let's build some simple posets. Let C1 be the one-element poset. For any n ∈ N, let

1. Cn := C1/C1/· · ·/C1 (n copies) is the chain of n points (totally ordered). This is also called a NIM stack.

2. An := C1 + C1 + · · · + C1 (n copies) is the antichain of n pairwise incomparable points.

3. Vn := An/C1 is the n-antichain with a common lower bound.

[Figure 1: Some simple posets constructed from individual points via parallel and series union, e.g., C3, V5, and Λ4.]

4.
Λn := C1/An is the n-antichain with a common upper bound.

5. 3n := C1/An/C1 is the n-antichain with common upper and lower bounds.

Some examples are shown in Figure 1.

Exercise 1.3. Find a simple way, given m and n, to determine whether Am/An is an ∃-game or a ∀-game.

Exercise 1.4. Show that P/Q is an ∃-game if and only if either P is an ∃-game or Q is an ∃-game.

1.1.1 More examples

The best-known poset game is Nim, an ancient game first formally described and "solved" by C. L. Bouton in 1902. Here, the poset is a union of disjoint chains, i.e., of the form Cn1 + Cn2 + · · · + Cnk for some positive integers n1, . . . , nk. A move then consists of choosing a point in one of the chains and removing that point and everything above it.

Other families of poset games include Chomp, introduced in 1974 by D. Gale, which, in its finite form, is represented by a rectangular arrangement of squares with the leftmost square in the bottom row removed. This is a poset with two minimal elements (first square on the second row, second square on the bottom row). Every element in a row is greater than all of the elements to the left and below, so playing an element removes it and all elements to the right and above.

Hackendot, attributed to von Neumann, where the poset is a forest of upside-down trees (roots at the top). Hackendot was solved in 1980 by Úlehla.

Divisors, introduced by F. Schuh, where the poset is the set of all positive divisors (except 1) of a fixed integer n, partially ordered by divisibility. Divisors is a multidimensional generalization of Chomp: Chomp occurs as the special case where n = p^m q^n for distinct primes p, q.

1.2 Dual symmetry

Some poset games can be determined (as ∃-games or ∀-games) just by inspection. For example, suppose a poset P has some kind of dual symmetry, that is, there is an order-preserving map ϕ : P → P such that ϕ ◦ ϕ = id.

Fact 1.5.
Let P be a poset and let ϕ : P → P be such that ϕ ◦ ϕ = idP and x ≤ y ⟺ ϕ(x) ≤ ϕ(y) for all x, y ∈ P.

• If ϕ has no fixed points, then P is a ∀-game.

• If ϕ has a minimum fixed point (minimum among the set of fixed points), then P is an ∃-game.

Proof. If ϕ has no fixed points, then Bob can answer any x played by Alice by playing ϕ(x). If ϕ has a least fixed point z, then Alice plays z on her first move, leaving Pz, which is symmetric with no fixed points, and thus a ∀-game.

For example, the poset below is symmetric with a unique fixed point x, which Alice can win by playing on her first move:

[Figure: a dually symmetric poset whose unique fixed point is labeled x.]

After we introduce game equivalence, we can give a partial generalization of Fact 1.5 (Lemma 2.21 below) that has been useful in determining the outcomes of several games.

1.3 Strategy stealing

Another class of posets that are easy to determine by inspection are those with an articulation point, i.e., a point that is comparable with every other point in the poset. For example, minimum and maximum points of P are articulation points.

Fact 1.6. If a poset P contains an articulation point, then P is an ∃-game.

Proof. Let x be some articulation point of P. If x is a winning first move for Alice, then we are done. If x is a losing first move for Alice, then there must be some winning response y for Bob if Alice first plays x. But if Alice plays x, then all points ≥ x are now gone, and so we have y < x. This means that the game after Bob moves is Py, which is a ∀-game by assumption. But then, Alice could have played y instead on her first move, leaving the ∀-game Py for Bob, and thus winning.

We call this "strategy stealing" because Alice steals Bob's winning strategy. The interesting thing about this proof is how nonconstructive it is. It shows that Alice has a winning first move, but gives virtually no information about what that first move could be. All we know is that the winning first play must be ≤ x.
If x is a maximum point of P, then the proof gives no information whatsoever about Alice's winning first move. Several poset games, including Chomp, have initial posets with maximum points, so we know that they are ∃-games. But determining a winning first move for Alice in Chomp appears quite difficult, and no fast algorithm is known. This suggests that, in the case of Chomp at least, the search problem (finding a winning first move) is apparently difficult, whereas the decision problem (∃-game or ∀-game?) is trivial. The search versus decision issue is discussed further in Section 4.1, below.

Exercise 1.7. Show that the winning first moves in any poset form an antichain.

1.4 Black-white poset games

Many interesting games are not impartial because the legal moves differ for the players. In chess, for example, one player can only move white pieces and the other only black pieces. We will informally call a game "black-white" when each player is assigned a color (black or white) and can only make moves corresponding to their color.²

²A different, popular color combination is red-blue. We use black-white so that illustrations are faithfully rendered on a black-and-white printer.

Many impartial games have natural black-white versions. Here, then, is a black-white version of a poset game:

Definition 1.8. A black-white poset game consists of a finite poset P, each of whose points is colored either black or white. The same rules apply to black-white poset games as to (impartial) poset games, except that one player (Black) can only play black points and the other player (White) can
Combinatorial games that are not impartial are known as partisan . In partisan games, we must make a distinction between the two players beyond who moves first. Generically, these players are called Left and Right. There is a surprisingly robust general theory of combinatorial games, both impartial and partisan, developed in [1, 5], and we give the basics of this theory in the next section. 2 Combinatorial game theory basics In this section we give some relevant definitions and a few facts from the general theory of combinatorial games. We give enough of the theory to understand later results. Thorough treatments of this material, with lots of examples, can be found in [1, 5] as well as other sources, e.g., and the recent book by Siegel . Our terminology and notation vary a little bit from [1, 5], but the concepts are the same. When we say, “game,” we always mean what is commonly referred to as a combinatorial game , i.e., a game between two players, say, Left and Right, alternating moves with perfect information, where the first player unable to move loses (and the other wins). In their fullest generality, these games can be defined abstractly by what options each player has to move, given any position in the game. 2.1 Notation We let N denote the set {0, 1, 2, . . . , } of natural numbers. We let |X| denote the cardinality of a finite set X. We use the relation “ := ” to mean “equals by definition.” We extend the definition of an operator on games to an operator on sets of games in the customary way; for example, if ∗ is a binary operation on games, and G and H are sets of games, then G ∗ H := {g ∗ h | g ∈ G ∧ h ∈ H}, and if g is a game, then g ∗ H := {g} ∗ H, and so on. 2.2 Basic definitions Definition 2.1. A game is an ordered pair G = ( GL, G R), where GL and GR are sets of games. The elements of GL (respectively, GR) are the left options 7(respectively, right options ) of G. 
An option of G is either a left option or a right option of G. It is customary to write {GL|GR} or {ℓ1, ℓ2, . . . | r1, r2, . . .} rather than (GL, GR), where GL = {ℓ1, ℓ2, . . .} and GR = {r1, r2, . . .}. We will do the same.

For this and the following inductive definitions to make sense, we tacitly assume that the "option of" relation is well-founded, i.e., there is no infinite sequence of games g1, g2, . . . where gi+1 is an option of gi for all i.[3]

A position of a game G is any game reachable by making a finite series of moves starting with G (the moves need not alternate left-right). Formally,

Definition 2.2. A position of a game G is either G itself or a position of some option of G. We say that G is finite iff G has a finite number of positions.[4]

Starting with a game G, we imagine two players, Left and Right, alternating moves as follows: the initial position is G; given the current position P of G (also a game), the player whose turn it is chooses one of her or his options of P (left options for Left; right options for Right), and this option becomes the new game position. The first player faced with an empty set of options loses. The sequence of positions obtained this way is a play of the game G. Our well-foundedness assumption implies that every play is finite, and so there must be a winning strategy for one or the other player.

We classify games by who wins (which may depend on who moves first) when the players play optimally. This is our broadest and most basic classification. Before giving it, we first introduce the "mirror image" of a game G: define −G to be the game where all left options and right options are swapped at every position, as if the players switched places. Formally,

Definition 2.3. For any game G, define −G := {−GR | −GL}.

It is a good warm-up exercise to prove—inductively, of course—that −(−G) = G for every game G.
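Definitions 2.1–2.3 translate almost verbatim into code. A minimal Python sketch (the encoding of a game as a pair of frozensets, and the helper names, are our own assumptions, not from the text):

```python
# A game (Definition 2.1) as an ordered pair of sets of games.
# frozensets make games hashable, so games can be options of other games.

def game(left, right):
    return (frozenset(left), frozenset(right))

def neg(g):
    """Definition 2.3: -G := { -G^R | -G^L }."""
    L, R = g
    return (frozenset(neg(r) for r in R), frozenset(neg(l) for l in L))

zero = game([], [])          # the endgame 0 = {|}
one = game([zero], [])       # 1 = {0|}
star = game([zero], [zero])  # * = {0|0}

# Warm-up exercise: -(-G) = G, checked on a few small games.
for g in (zero, one, star, game([one, star], [zero])):
    assert neg(neg(g)) == g
assert neg(one) == game([], [zero])   # -1 = {|0}
```

The recursion in `neg` bottoms out because the "option of" relation is well-founded, exactly as the text assumes.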
For impartial games, e.g., poset games, the "−" operator has no effect; for black-white poset games, applying "−" is tantamount to swapping the color of each point in the poset. We can consider the following definition to be the most fundamental property of a game:

Definition 2.4. Let G be a game. We say that G ≥ 0 (or 0 ≤ G) iff there is no right option gR of G such that −gR ≥ 0. We will say G ≤ 0 to mean that −G ≥ 0.

[3] This follows from the Foundation Axiom of set theory, provided ordered pairs are implemented in some standard way, e.g., (x, y) := {{x}, {x, y}} for all sets x and y.
[4] Finite games are sometimes called short games; see .

So G ≥ 0 if and only if no right option gR of G satisfies gR ≤ 0. Symmetrically, G ≤ 0 if and only if no left option gL of G satisfies gL ≥ 0. In terms of strategies, G ≥ 0 means that G is a first-move loss for Right or a second-move win for Left. If Right has to move first in G, then Left can win. Symmetrically, G ≤ 0 means that G is a first-move loss for Left or a second-move win for Right.

The ≤ notation suggests that a partial order (or at least, a preorder) on games is lurking somewhere. This is true, and we develop it below. Definition 2.4 allows us to partition all games into four broad categories.

Definition 2.5. Let G be a game.
• G is a zero game (or a first-move loss, or P-game) iff G ≤ 0 and G ≥ 0.
• G is positive (or a win for Left, or L-game) iff G ≥ 0 and G ≰ 0.
• G is negative (or a win for Right, or R-game) iff G ≤ 0 and G ≱ 0.
• G is fuzzy (or a first-move win, or N-game) iff G ≰ 0 and G ≱ 0.

These four categories, P (for previous player win), L (for Left win), R (for Right win), and N (for next player win), partition the class of all games. The unique category to which G belongs is called the outcome of G, written o(G). For example, the simplest game is the endgame 0 := {|} with no options, which is a zero game (o(0) = P).
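Definition 2.4 and the fourfold classification of Definition 2.5 can be animated directly; here is a brief Python sketch (the pair-of-frozensets representation and the function names are our own, not from the text). Note that the mutual recursion avoids negation entirely by using the symmetric characterizations stated above.

```python
from functools import lru_cache

# A game as a pair (left options, right options), each a frozenset of games.
def game(left, right):
    return (frozenset(left), frozenset(right))

@lru_cache(maxsize=None)
def ge_zero(g):
    """G >= 0 iff no right option r of G satisfies r <= 0."""
    return not any(le_zero(r) for r in g[1])

@lru_cache(maxsize=None)
def le_zero(g):
    """G <= 0 iff no left option l of G satisfies l >= 0."""
    return not any(ge_zero(l) for l in g[0])

def outcome(g):
    """Definition 2.5: P (zero), L (positive), R (negative), or N (fuzzy)."""
    ge, le = ge_zero(g), le_zero(g)
    if ge and le:
        return "P"
    if ge:
        return "L"
    if le:
        return "R"
    return "N"

zero = game([], [])
assert outcome(zero) == "P"                  # the endgame is a zero game
assert outcome(game([zero], [])) == "L"      # {0|} is positive
assert outcome(game([], [zero])) == "R"      # {|0} is negative
assert outcome(game([zero], [zero])) == "N"  # {0|0} is fuzzy
```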
The game 1 := {0|} is positive (o(1) = L), and the game −1 := {|0} is negative (o(−1) = R), while the game ∗ := {0|0} is fuzzy (o(∗) = N).

2.3 Game arithmetic, equivalence, and ordering

Games can be added, and this is a fundamental construction on games. The sum G + H of two games G and H is the game where, on each move, a player may decide in which of the two games to play. Formally:

Definition 2.6. Let G and H be games. We define

G + H := {(GL + H) ∪ (G + HL) | (GR + H) ∪ (G + HR)}.

In Section 1 we used the + operator for the parallel union of posets. Observe that this corresponds exactly to the + operator on the corresponding games, i.e., the game corresponding to the parallel union of posets P and Q is the game-theoretic + applied to the corresponding poset games P and Q.

We write G − H as shorthand for G + (−H). One can easily show by induction that + is commutative and associative when applied to games, and the endgame 0 is the identity under +. This makes the class of all games into a commutative monoid (albeit a proper class). One can also show for all games G and H that −(G + H) = −G − H. Furthermore, if G ≥ 0 and H ≥ 0, then G + H ≥ 0. It is not the case, however, that G − G = 0 for all G, although G − G is always a zero game. These easy results are important enough that we state and prove them formally.

Lemma 2.7. For any games G and H,
1. G − G is a zero game.
2. Suppose G ≥ 0. Then H ≥ 0 implies G + H ≥ 0, and H ≰ 0 implies G + H ≰ 0.
3. Suppose G ≤ 0. Then H ≤ 0 implies G + H ≤ 0, and H ≱ 0 implies G + H ≱ 0.
4. −(G + H) = −G − H.

Proof. For (1.): Any first move in G − G is either a move in G or in −G. The second player can then simply play the equivalent move in the other game (−G or G, respectively). This is called a mirroring strategy, and it guarantees a win for the second player.
For example, if, say, Left moves first and chooses some g ∈ GL, then the game position is now g − G = g + (−G), and so Right responds with −g ∈ (−G)R, resulting in the game position g − g. An inductive argument now shows that Right wins using this strategy.

For (2.) with G ≥ 0: First, suppose H ≥ 0 and Right moves first in G + H. Then Right is moving either in G or in H. Left then chooses her winning response in whichever game Right moved in. Left can continue this strategy until she wins. For example, if Right chooses h ∈ HR, then the game position is now G + h. Since H ≥ 0, we must have h ≰ 0, and so there exists some h′ ∈ hL such that h′ ≥ 0. Left responds with h′, resulting in the position G + h′. An inductive argument again proves that Left can win, and thus G + H ≥ 0.

Now suppose H ≰ 0. Then there is some h ∈ HL such that h ≥ 0. If Left moves first in G + H, she chooses this h, leaving the position G + h for Right, who moves next. By the previous argument G + h ≥ 0, and so Left can win it, because Right is moving first. Thus G + H is a first-move win for Left, i.e., G + H ≰ 0.

(3.) is the dual of (2.) and follows by applying (2.) to the games −G and −H (and using (4.)).

For (4.): By induction (with the inductive hypothesis used for the fourth equality),

−(G + H) = −{(GL + H) ∪ (G + HL) | (GR + H) ∪ (G + HR)}
= {−((GR + H) ∪ (G + HR)) | −((GL + H) ∪ (G + HL))}
= {(−(GR + H)) ∪ (−(G + HR)) | (−(GL + H)) ∪ (−(G + HL))}
= {(−GR − H) ∪ (−G − HR) | (−GL − H) ∪ (−G − HL)}
= {((−G)L − H) ∪ (−G + (−H)L) | ((−G)R − H) ∪ (−G + (−H)R)}
= −G − H.

The outcome o(G) of a game G is certainly the first question to be asked about G, but it leaves out a lot of other important information about G. It does not determine, for example, the outcome when G is added to a fixed game X. That is, it may be that two games G and H have the same outcome, but o(G + X) ≠ o(H + X) for some game X.
Indeed, defining 2 := {1|}, one can check that o(1) = o(2) = L, but we have o(2 − 1) = L (Left wins by choosing 1 ∈ 2L when she gets the chance), whereas we know already from Lemma 2.7 that o(1 − 1) = P. Behavior under addition leads us to a finer classification of games.

Definition 2.8. Let G and H be games. We say that G and H are equivalent, written G ≈ H, iff o(G + X) = o(H + X) for all games X.[5]

It follows immediately from the definition that ≈ is an equivalence relation on games, and we call the equivalence classes game values. We let PG denote the Class[6] of all game values.[7] Letting X be the endgame 0 in the definition shows that equivalent games have the same outcome. Using the associativity of +, we also get that G ≈ H implies G + X ≈ H + X for any game X. Thus + respects equivalence and naturally lifts to a commutative and associative Operation (also denoted +) on PG.

The remaining goal of this subsection is to finish showing that 〈PG, +, ≤〉 is a partially ordered abelian Group. We have built up enough basic machinery that we can accomplish our goal in a direct, arithmetic way, without referring to players' strategies.

[5] In much of the literature, the overloaded equality symbol = is used for game equivalence. We avoid that practice here, preferring to reserve = for set-theoretic equality. There are some important game properties that are not ≈-invariant.
[6] We will start to capitalize words that describe proper classes.
[7] Since each game value itself is a proper Class, we really cannot consider it as a member of anything. A standard fix for this in set theory is to represent each game value v by the set of elements of v with minimum rank, so PG becomes the Class of all such sets.

Lemma 2.9. A game G is a zero game if and only if G + H ≈ H for all games H.

Proof. (Only if): It suffices to show that o(G + H) = o(H) for any H, for then, given any game X, we have o(G + H + X) = o(H + X) by substituting H + X for H, hence the lemma.
Now by Lemma 2.7(2.), H ≥ 0 implies G + H ≥ 0, and by Lemma 2.7(3.), H ≱ 0 implies G + H ≱ 0; together, H ≥ 0 if and only if G + H ≥ 0. A symmetric argument proves that H ≤ 0 if and only if G + H ≤ 0. Combining these statements implies o(H) = o(G + H), as desired.

(If): Set H := 0, the endgame. Then G = G + 0 ≈ 0, and so o(G) = o(0) = P.

Corollary 2.10. A game G is a zero game if and only if G ≈ 0 (where 0 is the endgame).

Proof. For the forward direction, set H := 0 in Lemma 2.9. For the reverse direction, add any H to both sides of the equivalence G ≈ 0, then use Lemma 2.9 again.

Here is our promised Preorder on games.

Definition 2.11. Let G and H be games. We write G ≤ H (or H ≥ G) to mean H − G ≥ 0 (equivalently, G − H ≤ 0). As usual, we write G < H to mean G ≤ H and H ≰ G.[8]

You can interpret G < H informally as meaning that H is a preferable position for Left compared to G, and that G is preferable for Right compared to H. For example, if Left is ever faced with moving in position G, and (let us pretend) she had the option of replacing G with H beforehand, she would always want to do so.

[8] We now have two ways of interpreting the expression "G ≥ 0": one using Definition 2.4 directly and the other using Definition 2.11 with 0 being the endgame. One readily checks that the two interpretations coincide.

Proposition 2.12. The ≤ Relation on games is reflexive and transitive.

Proof. Reflexivity follows immediately from Lemma 2.7(1.). For transitivity, suppose G, H, and J are games such that G ≤ H and H ≤ J. Then

J − G ≈ J + (−H + H) − G = (J − H) + (H − G) ≥ 0.

The first equivalence is by Lemma 2.9 and the fact that −H + H is a zero game by Lemma 2.7(1.). The final statement is by Lemma 2.7(2.), noticing that J − H and H − G are both ≥ 0. Thus G ≤ J.

Proposition 2.13. For any two games G and H, G ≈ H if and only if G − H is a zero game, if and only if G ≤ H and G ≥ H.

Proof. The second "if and only if" follows straight from the definitions.
(First only if): G ≈ H implies G − H ≈ H − H, since + is ≈-invariant. Then by Lemma 2.7(1.), o(G − H) = o(H − H) = P, i.e., G − H is a zero game.

(First if): By Lemma 2.9 and the fact that H − H is also a zero game, we have G ≈ G + (H − H) = (G − H) + H ≈ H.

The last two propositions show that the binary Relation ≤ on games is a Preorder that induces a partial Order on PG. Proposition 2.13 also gives a good working criterion for proving or disproving game equivalence—just check whether G − H is a second-player win—without having to quantify over all games.

Proposition 2.14. 〈PG, +〉 is an abelian Group, where the identity element is the ≈-equivalence class of zero games, and inverses are obtained by the negation Operator on games.

Proof. We already know that + is associative and commutative on PG and that the zero games form the identity under + (Lemma 2.9). All we have left to show is that the negation Operator on games is ≈-invariant, for then Lemma 2.7(1.) implies that it acts as the group-theoretic inverse on PG. Now suppose G ≈ H for any games G and H. Then G ≤ H and G ≥ H by Proposition 2.13, i.e., G − H ≤ 0 and G − H ≥ 0. Since by Lemma 2.7(4.), −G − (−H) = H − G, we also have −G ≤ −H and −G ≥ −H, hence −G ≈ −H by Proposition 2.13.

Finally, ≤ is translation-invariant on PG, making it a partially ordered abelian Group:

Corollary 2.15. For any games G, H, and X, if G ≤ H then G + X ≤ H + X.

Proof. We have

G ≤ H =⇒ H − G ≥ 0 =⇒ H − G + X − X ≥ 0
=⇒ (H + X) − (G + X) ≥ 0 =⇒ G + X ≤ H + X.

The first and last implications are by definition, and the other two are by Lemma 2.7.

We next look at two important subclasses of games—the numeric games and the impartial games.

2.4 Numeric games

A numeric game is one where at each position all the left options are < all the right options. Formally,

Definition 2.16. A game G is numeric iff ℓ < r for every ℓ ∈ GL and r ∈ GR, and further, every option of G is numeric.
One can show that G is numeric if and only if ℓ < G for every ℓ ∈ GL and G < r for every r ∈ GR. If H is also numeric, then either G ≤ H or H ≤ G. The + and − operations also yield numeric games when applied to numeric games.[9]

Numeric games have a peculiar property: making a move only worsens your position (for Left this means having to choose a smaller game; for Right, having to choose a larger game). Thus neither player wants to make a move—if they were given the option to skip a turn, they would always take it. For these games, an optimal play is easy to describe: Left always chooses a maximum left option (i.e., one that does the least damage), and Right always chooses a minimum right option, assuming these options exist.[10] This intuitive idea is formalized in the following theorem, which is referred to in the literature as the "dominating rule." It applies to all games, not just numeric games.

Theorem 2.17. Let G be a game. If y ≤ ℓ for some ℓ ∈ GL, then G ≈ {y, GL | GR}. Similarly, if y ≥ r for some r ∈ GR, then G ≈ {GL | GR, y}.

If y ≤ ℓ ∈ GL, then we say that y is dominated by ℓ in G. Similarly, if y ≥ r ∈ GR, then y is dominated by r in G. We obtain equivalent games by removing dominated options. A player never needs to play a dominated option; it is just as well (or better) to choose an option that dominates it.

Numeric games are called such because their values act like real numbers; for one thing, their values are totally ordered by ≤. These games are constructed in a way somewhat akin to how the real numbers are constructed from the rationals via Dedekind cuts. The left options of a game form the left cut, the right options the right cut, and the game itself represents a number strictly between the two. The differences are that the two cuts might be bounded away from each other (one or the other may even be empty), and the left cut might contain a maximum element.

[9] The property of being numeric is not invariant under ≈.
One can easily concoct two equivalent games, one of which is numeric and the other not.
[10] In general, Left can win by choosing any option ℓ ≥ 0, and Right can win by choosing any option r ≤ 0.

2.4.1 Finite numeric games

The values of finite numeric games form a subgroup of PG naturally isomorphic (in an order-preserving way) to the dyadic rational numbers under addition, according to the following "simplicity rule":

Definition 2.18. Let G be a finite numeric game. The (numerical) value of G, denoted v(G), is the unique rational number a/2^k such that
1. k is the least nonnegative integer such that there exists an integer a such that v(ℓ) < a/2^k for all ℓ ∈ GL and a/2^k < v(r) for all r ∈ GR, and
2. a is the integer with the least absolute value satisfying (1.) above.

So for example, the endgame 0 has value v(0) = 0, the game 1 has value v(1) = 1, and the game −1 has value v(−1) = −1, as the notation suggests. Intuitively, |v(G)| indicates the number of "free moves" one of the players has before losing (Left if v(G) > 0, and Right if v(G) < 0). In fact, for any two finite numeric games P and Q, one can show that v(P + Q) = v(P) + v(Q) and that v(−P) = −v(P). Also, P ≤ Q if and only if v(P) ≤ v(Q).[11] The valuation map v is not one-to-one on games, but induces a one-to-one map on values of numeric games.

To illustrate the simplicity rule, consider the game h := {0|1}. The rule says that v(h) is the simplest dyadic rational number strictly between 0 and 1, namely, 1/2. First note that Left can always win h whether or not she plays first, so h > 0. If v respects +, then we should also have h + h ≈ 1. Let us check this. First consider 1 − h:

1 − h = 1 + (−h) = {0|} + {−1|0} = {0 − h, 1 − 1 | 1 + 0} = {−h, 0 | 1} ≈ {0|1} = h

(the equivalence is by the dominating rule and −h < 0). Thus

h + h ≈ h + (1 − h) ≈ 1.

Black-white poset games are numeric. Here we identify Black with Left and White with Right.
So for example, an antichain of k black points has numeric value k, and an antichain of k white points has numeric value −k. Figure 2 shows the numeric values of two simple, two-level black-white poset games.

[11] One can define a purely game-theoretic multiplication operation on numeric games in such a way that v(PQ) = v(P)v(Q) for all P and Q. See for details.

[Figure 2 (diagrams omitted): The numerical values of two simple black-white poset games. The left has value k − 1/2 and the right has value 2^−k, for k ≥ 1.]

Exercise 2.19. Use the simplicity rule to prove the values in Figure 2.

The numerical values of arbitrary numeric games (not necessarily finite) form an ordered, real-closed field No into which the real numbers embed, but which also contains all the ordinals as well as infinitesimals. Donald Knuth dubbed No the surreal numbers, and they are formed via a transfinite construction. The dyadic rationals are those constructed at finite stages, but numbers constructed through stage ω already form a proper superset of R.

2.5 Impartial games and Sprague-Grundy theory

A game is impartial if at every position, the two players have the same options. Formally,

Definition 2.20. A game G is impartial iff GL = GR and every g ∈ GL is impartial.

Equivalently, G is impartial if and only if G = −G. This means that values of impartial games have order at most two in the group 〈PG, +〉. Examples of impartial games include 0 and ∗. Families of impartial games include Nim, Geography, Node Kayles, and poset games.[12]

There is a beautiful theory of impartial games, developed by R. P. Sprague and P. M. Grundy [28, 17], that predates the more general theory of combinatorial games described in [1, 5]. We develop the basics of this older theory here. First note that, since there are no Left/Right biases, all impartial games are either zero (P) or fuzzy (N), and we can assume that Left always moves first.
We will call impartial zero games ∀-games ("for all first moves . . . ") and impartial fuzzy games ∃-games ("there exists a first move such that . . . "). In this section only, we restrict our attention to impartial games, so when we say "game," we mean impartial game.

[12] Impartiality is not ≈-invariant.

Two (impartial) games G and H are equivalent (G ≈ H) if and only if G + H is a ∀-game, because H = −H (Sprague and Grundy defined this notion for impartial games). Applied to poset games, we get Lemma 2.21 below (a partial generalization of Fact 1.5), which has been handy in finding the outcomes of some poset games. A down set in a partial order P is a subset S ⊆ P that is closed downwards under ≤, i.e., x ∈ S and y ≤ x implies y ∈ S.

Lemma 2.21. Let P be a poset and let ϕ : P → P be such that ϕ ◦ ϕ = idP and x ≤ y ⇐⇒ ϕ(x) ≤ ϕ(y) for all x, y ∈ P. Let F := {x ∈ P | ϕ(x) = x} be the set of fixed points of ϕ, considered as an induced subposet of P. If F is a down set, then P ≈ F as games.

Proof. Let F′ be a copy of F, disjoint from P, and consider the parallel union P + F′ as a poset game. By Proposition 2.13, we only need to show that P + F′ is a ∀-game, which we do by giving a winning strategy for the second player. If the first player plays in F or F′, then the second player plays the corresponding point in F′ or F, respectively. If the first player plays some point x ∈ P \ F, then the second player responds by playing ϕ(x). Since F is a down set, this latter pair of moves does not disturb F or F′, and the resulting position in either case is seen to have the same basic form as the original game.

One can associate an ordinal number with each game, which we call the g-number[13] of the game, such that two games are equivalent if and only if they have the same g-number. The g-number of a finite game is a natural number. We will restrict ourselves to finite games.

Definition 2.22. Let A be any coinfinite subset of N.
Define mex A (the minimum excluded element from A) to be the least natural number not in A, i.e.,

mex A := min(N − A).

More generally, for i = 0, 1, 2, . . . , inductively define

mex^i A := min(N − (A ∪ {mex^0 A, . . . , mex^{i−1} A})),

the ith least natural number not in A. (So in particular, mex^0 A = mex A.)

Definition 2.23. Let G be any (finite) game. Define the g-number of G as g(G) := mex g-set(G), where g-set(G) := {g(x) | x ∈ GL} is called the g-set of G.

[13] Also called the Grundy number or the NIM number—not to be confused with the value of a numerical game.

That is, g(G) is the least natural number that is not the g-number of any option of G, and the set of g-numbers of options of G is g-set(G). For example, g-set(0) = ∅, and so g(0) = 0. Also, g-set(∗) = {g(0)} = {0}, and so g(∗) = 1.

Exercise 2.24. Prove the following for any finite poset P and any n ∈ N.
1. g(P) ≤ |P|. (Generally, g(G) ≤ |GL| for any impartial G.)
2. g(Cn) = n for all n ∈ N.
3. g(An) = n mod 2.
4. g(Vn) = (n mod 2) + 1.
What is g(Λn)? What is g(3n)?

Exercise 2.25. Describe g(Am/An) simply in terms of m and n.

Here is the connection between the g-number and the outcome of a game.

Proposition 2.26. A game G is a ∀-game if and only if g(G) = 0.

Proof idea. If g(G) ≠ 0, then there is some option x of G that Left can play such that g(x) = 0, but if g(G) = 0, then no move Left makes can keep the g-number at 0.

The central theorem of Sprague-Grundy theory—an amazing theorem with a completely nonintuitive proof—concerns the g-number of the sum of two games.

Definition 2.27. For any m, n ∈ N, define m ⊕ n to be the natural number k whose binary representation is the bitwise exclusive OR of the binary representations of m and n. We may also call k the bitwise XOR of m and n. For example, 23 ⊕ 13 = 10111 ⊕ 01101 = 11010 = 26.

Theorem 2.28 (Sprague, Grundy [28, 17]). For any finite games G and H,

g(G + H) = g(G) ⊕ g(H).

Proof.
As with most of these proofs, we use induction. Let G and H be games. If Left plays some x ∈ GL, say, then g(x) ≠ g(G), and so

g(x + H) = g(x) ⊕ g(H)   (inductive hypothesis)
≠ g(G) ⊕ g(H)   (because g(x) ≠ g(G)).

Similarly, g(G + y) ≠ g(G) ⊕ g(H) for any y ∈ HL. This means that g(G) ⊕ g(H) is not the g-number of any option of G + H. We'll be done if we can show that every natural number less than g(G) ⊕ g(H) is the g-number of some option of G + H.

Set gG := g(G) and gH := g(H), and let m := gG ⊕ gH. Fix any k < m. We find an option of G + H with g-number k. Let's assign numbers to bit positions, 0 being the least significant, 1 being the second least, and so forth. For any number ℓ ∈ N, let (ℓ)i be the ith least significant bit of ℓ (starting with i = 0). Since k < m, there exists a unique i such that (k)i = 0, (m)i = 1, and (k)j = (m)j for all j > i. Fix this i. We have (gG)i ⊕ (gH)i = (m)i = 1, and so one of gG and gH has a 1 in the ith position and the other a 0.

Suppose first that gG has a 1 in that position. Then Left can play in G to "clear" that bit. First, notice that k ⊕ gH < gG. Why? Because

(k ⊕ gH)i = (k)i ⊕ (gH)i = 0 ⊕ 0 = 0 < 1 = (gG)i,

and for all j > i,

(k ⊕ gH)j = (k)j ⊕ (gH)j = (m)j ⊕ (gH)j = (gG)j ⊕ (gH)j ⊕ (gH)j = (gG)j.

So there must exist an x ∈ GL such that g(x) = k ⊕ gH, and then by the inductive hypothesis,

g(x + H) = g(x) ⊕ gH = k ⊕ gH ⊕ gH = k.

Similarly, if (gH)i = 1 and (gG)i = 0, then there exists y ∈ HL such that g(G + y) = k.

Corollary 2.29. Two impartial games G and H are equivalent if and only if g(G) = g(H).

Proof. G and H are equivalent iff G + H is a ∀-game, iff g(G + H) = 0 (Proposition 2.26), iff g(G) ⊕ g(H) = 0 (Theorem 2.28), iff g(G) = g(H).

Since every natural number n is the g-number of the poset game Cn, this means that every game is equivalent to a single Nim stack. We can use Theorem 2.28 to solve Nim. Given a Nim game P = Cn1 + · · · + Cnk, we get g(P) = n1 ⊕ · · · ⊕ nk.
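These definitions are easy to check by brute force on small posets. A Python sketch (the poset encoding, a set of points plus a comparability predicate, is our own assumption): the inner function computes g of a position by recursing over all moves, and the final check verifies the Sprague-Grundy theorem on a small instance of Nim.

```python
from functools import lru_cache

def mex(s):
    """Least natural number not in the finite set s (Definition 2.22)."""
    n = 0
    while n in s:
        n += 1
    return n

def g_number(points, leq):
    """Grundy number of the poset game on `points` (Definition 2.23)."""
    @lru_cache(maxsize=None)
    def g(position):
        # A move plays x and removes every y >= x.
        return mex({g(frozenset(y for y in position if not leq(x, y)))
                    for x in position})
    return g(frozenset(points))

# Chains C_n (a total order on n points): g(C_n) = n (Exercise 2.24).
chain_leq = lambda x, y: x <= y
for n in range(5):
    assert g_number(range(n), chain_leq) == n

# Antichains A_n (no comparabilities): g(A_n) = n mod 2.
anti_leq = lambda x, y: x == y
for n in range(5):
    assert g_number(range(n), anti_leq) == n % 2

# Sprague-Grundy: the parallel union of C_3 and C_5 (an instance of Nim)
# has g-number 3 XOR 5 = 6.
nim_leq = lambda x, y: x[0] == y[0] and x[1] <= y[1]
nim = [(0, i) for i in range(3)] + [(1, i) for i in range(5)]
assert g_number(nim, nim_leq) == 3 ^ 5
```

This exponential-time recursion is only a sanity check of the definitions; the point of Section 3 is that much better algorithms exist for special classes of posets.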
If this number is nonzero, then let i be largest such that (g(P))i = 1. Alice can win by choosing a j such that (nj)i = 1 and playing in Cnj to reduce its length (and hence its g-number) from nj to nj ⊕ g(P). (Note that nj ⊕ g(P) < nj, since the XOR clears bit i of nj and changes no higher bit.) This makes the g-number of the whole Nim game zero.

We can use Corollary 2.29 and Lemma 2.21 to find the g-numbers of some natural, interesting posets. We give Proposition 2.30 below as an example.

For a positive integer n, let [n] := {1, 2, . . . , n}, and let 2^[n] be the powerset of [n], partially ordered by ⊆. For 0 ≤ k ≤ n, we let ([n] k) ⊆ 2^[n] be the set of all k-element subsets of [n]. Then we have the following:

Proposition 2.30. Let n > 0 be even and let 1 ≤ k < k′ ≤ n be such that k′ is odd. Let n = nj−1 · · · n1n0 and k = kj−1 · · · k1k0 be binary representations of n and k, respectively, where ni, ki ∈ {0, 1} for 0 ≤ i < j. Letting P := ([n] k) ∪ ([n] k′), we have

g(P) = 0 if ki > ni for some 0 ≤ i < j, and g(P) = 1 otherwise.

In particular, if k is even, then g(P) = (n/2 choose k/2) mod 2.

Proof. For sets A and B, we say that A respects B if either B ⊆ A or A ∩ B = ∅. Define the map ϕ : [n] → [n] so that ϕ(2i) = 2i − 1 and ϕ(2i − 1) = 2i, for all 1 ≤ i ≤ n/2. Then ϕ swaps the elements of each two-element set si := {2i − 1, 2i}. We lift the involution ϕ to an involution ϕ′ : 2^[n] → 2^[n] in the usual way: ϕ′(S) := {ϕ(x) | x ∈ S} for all S ⊆ [n]. Notice that ϕ′ preserves set cardinality, and so ϕ′ maps P onto P. Also notice that ϕ′(S) = S if and only if S respects all the si.

Let F be the set of all fixed points of ϕ′. Since k′ is odd, no S ∈ ([n] k′) can respect all the si, and thus ϕ′(S) ≠ S for all S ∈ ([n] k′). It follows immediately that F ⊆ ([n] k) is a down set, and so we have g(P) = g(F) by Lemma 2.21 and Corollary 2.29. Since F is also an antichain, we have g(F) = |F| mod 2 (Exercise 2.24(3)). Now F consists of those k-sets that respect all the si.
If k is odd, then F = ∅, whence 0 = g(F) = g(P), and we also have 1 = k0 > n0 = 0, so the proposition holds. If k is even, then by a simple combinatorial argument we have |F| = (n/2 choose k/2)—obtained by selecting exactly k/2 of the si to be included in each element of F. Therefore, we have

g(P) = g(F) = |F| mod 2 = (n/2 choose k/2) mod 2,

and the proposition follows by Lucas's theorem.

Proposition 2.30 clearly still holds if we include in P any number of odd levels of 2^[n] above the kth level (including zero).

Theorem 2.28 shows how the g-number behaves under parallel unions of posets (Definition 1.2). How does the g-number behave under series unions? Unfortunately, g(P/Q) might not depend solely on g(P) and g(Q). For example, g(V2) = g(C1) = 1, but g(C1/V2) = g(32) = 3, whereas g(C1/C1) = g(C2) = 2. However, g-set(P/Q) does depend solely on g-set(P) and g-set(Q) for any posets P and Q, and this fact forms the basis of the Deuber & Thomassé algorithm of the next section. There is one important case where g(P/Q) does depend only on g(P) and g(Q):

Fact 2.31. For any finite poset P and any k ≥ 0, g(P/Ck) = g(P) + k.

This can be shown by first showing that g(P/C1) = g(P) + 1, then using induction on k. By Fact 2.31, we get that g(3n) = 1 + g(Λn), for example.

3 Upper bounds

When asking about the computational difficulty of determining the outcome of a game, we really mean a family of similar games, represented in some way as finite inputs. In discussing game complexity, we will abuse terminology and refer to a family of games simply as a game. (The same abuse occurs in other areas of complexity, notably circuit complexity.) We will also use the same small-caps notation to refer both to a family of games and to the corresponding decision problem about the outcomes.

Perhaps the most common upper bound in the literature on the complexity of a game is membership in PSPACE.
Without pursuing it further, we will just mention that, if a game G of size n satisfies: (i) every position of G has size polynomial in n; (ii) the length of any play of G is polynomial in n; and (iii) there are polynomial-time (or even just polynomial-space) algorithms computing the "left option of" and "right option of" relations on the positions of G, then o(G) can be computed in polynomial space. These properties are shared by many, many games.

In this section we will give some better upper bounds on some classes of finite poset games, the best one being that N-free poset games are in P. We will assume that a poset is represented by its Hasse diagram, a directed acyclic graph (DAG) in which each element is represented as a node and an arc is placed from the node for element x to the node for y when x < y and there is no element z such that x < z < y. The poset is the reflexive, transitive closure of the edge relation of the DAG.

3.1 N-free games

With the Hasse diagram representation, we can apply results from graph theory to devise efficient ways to calculate Grundy numbers for certain classes of games. A good example is the class of N-free poset games. An "N" in a poset is a set of four elements {a, b, c, d} such that a < b, c < d, c < b, and the three other pairs are incomparable. When drawn as a Hasse diagram, the arcs indicating comparability form the letter "N". A poset is N-free if it contains no N as an induced subposet. We let N-Free denote the class of N-free poset games.

Valdes, Tarjan, and Lawler show that an N-free DAG can be constructed in linear time from a set of single nodes. New components are created either by applying parallel union (G + H) or by applying series union (G/H). As with posets, the parallel union is the disjoint union of G and H. The series union is a single DAG formed by giving to every element in H with out-degree 0 (the sinks in H) an arc to every element in G with in-degree 0 (the sources in G).
This gives the Hasse diagram of the series union of the corresponding posets. Their algorithm provides a sequence of + and / operations that will construct a given N-free DAG from single points.

Deuber & Thomassé show that N-Free ∈ P by applying this construction to demonstrate how to calculate the g-number of an N-free poset game based on the sequence of construction steps obtained by the VTL algorithm above. Their algorithm, which we now describe, works by keeping track of the g-sets of the posets obtained in the intermediate steps of the construction, rather than the g-numbers. There is no need to store the g-numbers, because the g-number of any poset can always be easily computed from its g-set by taking the mex. The g-number of a single node is 1. This is the base case.

Fact 3.1. Given posets P and Q, the g-set of the parallel union P + Q is

g-set(P + Q) = {g(P + Qq) : q ∈ Q} ∪ {g(Pp + Q) : p ∈ P}
= {g(P) ⊕ g(Qq) : q ∈ Q} ∪ {g(Pp) ⊕ g(Q) : p ∈ P}.

(Here Pp denotes the position resulting from playing p in P, and similarly Qq.) The second equality follows from the Sprague-Grundy theorem. This is easy to see if you consider the root of the game tree for P + Q. Each of its children results from playing either an element in P or one in Q. The left-hand set in the union contains the g-numbers of the games resulting from playing an element in Q; the right-hand set, from playing an element in P. Their union is the g-set of P + Q, so its g-number is the mex of that set.

To calculate the g-set of a series union, we will need the definition of the Grundy product of two finite sets of natural numbers:

A ⊛ B := B ∪ {mex^a B | a ∈ A}.

A ⊛ B is again a finite set of natural numbers that is easy to compute given A and B. Basically, A ⊛ B unions B with the version of A we get after re-indexing the natural numbers to go "around" B. Notice that mex(A ⊛ B) = mex^{mex A}(B). We will use this fact below.

Lemma 3.2 (Deuber & Thomassé). For any finite posets P and Q,

g-set(P/Q) = g-set(P) ⊛ g-set(Q) = g-set(Q) ∪ {mex^i(g-set(Q)) : i ∈ g-set(P)}.
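The Grundy product and the mex^i operator are only a few lines of Python (function names are our own; `grundy_product(A, B)` stands for the product of A and B just defined):

```python
def mex(s):
    """Least natural number not in the finite set s."""
    n = 0
    while n in s:
        n += 1
    return n

def mex_i(s, i):
    """mex^i: the (i+1)-st least natural number not in s (mex^0 = mex)."""
    missing = []
    n = 0
    while len(missing) <= i:
        if n not in s:
            missing.append(n)
        n += 1
    return missing[i]

def grundy_product(A, B):
    """The Grundy product: B union { mex^a(B) : a in A }."""
    return set(B) | {mex_i(B, a) for a in A}

# The re-indexing identity: mex of the product = mex^{mex A}(B).
A, B = {0, 2}, {0, 1, 3}
assert grundy_product(A, B) == {0, 1, 2, 3, 5}   # mex^0(B) = 2, mex^2(B) = 5
assert mex(grundy_product(A, B)) == mex_i(B, mex(A))

# Lemma 3.2 on the smallest example: C1/C1 = C2, whose g-set is {0, 1}.
gset_C1 = {0}
assert grundy_product(gset_C1, gset_C1) == {0, 1}
assert mex(grundy_product(gset_C1, gset_C1)) == 2   # g(C2) = 2
```

The last two assertions reproduce the singleton base case discussed below: two singletons in series form a Nim stack of size 2, with g-number 2.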
The left-hand set of the union results from playing an element in Q, which removes all of the elements in P. Using induction, we can see what happens when an element in P is played.

Proof of Lemma 3.2. The fifth equality uses the inductive hypothesis.

g-set(P/Q) = {g((P/Q)_r) : r ∈ P/Q}
 = {g((P/Q)_p) : p ∈ P} ∪ {g((P/Q)_q) : q ∈ Q}
 = {g(P_p/Q) : p ∈ P} ∪ {g(Q_q) : q ∈ Q}
 = {mex(g-set(P_p/Q)) : p ∈ P} ∪ g-set(Q)
 = {mex(g-set(P_p) ⊛ g-set(Q)) : p ∈ P} ∪ g-set(Q)
 = {mex_{mex(g-set(P_p))}(g-set(Q)) : p ∈ P} ∪ g-set(Q)
 = {mex_{g(P_p)}(g-set(Q)) : p ∈ P} ∪ g-set(Q)
 = {mex_i(g-set(Q)) : i ∈ g-set(P)} ∪ g-set(Q)
 = g-set(P) ⊛ g-set(Q)

In particular, the g-number of P/Q is greater than or equal to the sum of the g-numbers of P and Q. Notably, it's an equality if Q is C_n for some n (Fact 2.31), and the reason is that the g-set of C_n has no gaps, that is, it contains all of the values from 0 to n − 1. It's easy to see that it's true when P and Q are both singletons. Their g-numbers are both 1, and forming their series union creates a NIM stack of size 2, which has g-number 2. Another way to understand Lemma 3.2 is to consider the game tree of P/Q, and we'll look at the simple case where P is an arbitrary game with g-number k and Q is a singleton. Consider the root node r of the game tree of P/Q. One of its children represents playing the single element in Q, and that child has g-number 0. The rest of r's children represent game configurations reached by playing an element in P. By the induction hypothesis the g-number of each of these nodes will be one more than in P's game tree, where they had g-numbers 0 to k − 1, and perhaps g-numbers k + 1 and larger. So in P/Q's tree they have g-numbers 1 to k, with perhaps g-numbers k + 2 or larger.
Because the child reached by playing Q's single element has g-number 0, the first missing value in the g-set formed from these g-numbers is k + 1.

Now using Fact 3.1 and Lemma 3.2, the decomposition produced by the VTL algorithm can generate a binary tree where each internal node is labeled with a poset P and an operation (parallel union or series union), and its children are the two posets combined to form P. Starting with each leaf, where the poset is a singleton and the g-set is {0}, and moving up the tree, one can apply Fact 3.1 and Lemma 3.2 to compute the g-set of the root (and none of the g-numbers involved exceed the size of the final poset). This can all be done in time O(n^4).

3.2 Results on some classes of games with N's

General results for classes of games containing an "N" have been few. In 2003, Steven Byrnes proved a poset game periodicity theorem, which applies to, among others, Chomp-like games, which contain many "N"-configurations. Here's the theorem, essentially as stated in the paper:

Theorem 3.3. In an infinite poset game X, suppose we have two infinite chains C (c_1 < c_2 < · · · ) and D (d_1 < d_2 < · · · ), and a finite subset A, all pairwise disjoint, and assume that no element of C is less than an element of D. Let

A_{m,n} = A ∪ C ∪ D − {x ∈ X | x ≥ c_{m+1}} − {x ∈ X | x ≥ d_{n+1}}

(that is, A_{m,n} is the position that results from starting with the poset A ∪ C ∪ D, then making the two moves c_{m+1} and d_{n+1}). Let k be a nonnegative integer. Then either:

1. there are only finitely many different A_{m,n} with g-number k; or
2. we can find a positive integer p such that, for large enough n, g(A_{m,n}) = k if and only if g(A_{m+p,n+p}) = k.

Thus, as the poset A expands along the chains C and D, positions with any fixed g-number have a regular structure. A simple example of a class of games covered by the theorem is the family of two-stack Nim games, where A is empty and A_{m,n} consists of an m-chain and an n-chain.
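As an aside, the g-set machinery of Section 3.1 (Fact 3.1 and Lemma 3.2) is compact enough to sketch in code. The following is our illustration, not the paper's: posets are given as expression trees over singletons, as a VTL decomposition would produce, and the function names and tree encoding are ours.

```python
# A sketch (ours) of the Deuber-Thomassé g-set computation.  An N-free
# poset is an expression tree: '*' is a single point, ('+', l, r) a
# parallel union, ('/', l, r) a series union.

def mex(s):
    """Least natural number not in the finite set s."""
    n = 0
    while n in s:
        n += 1
    return n

def mex_i(b, i):
    """The (i+1)-st natural number missing from b, so mex_0(b) = mex(b):
    re-indexes the naturals to go 'around' b."""
    n = 0
    while True:
        if n not in b:
            if i == 0:
                return n
            i -= 1
        n += 1

def grundy_product(a, b):
    """A (x) B = B ∪ { mex_i(B) : i in A }."""
    return b | {mex_i(b, i) for i in a}

def gset(expr):
    """g-set of a poset expression, via Fact 3.1 and Lemma 3.2."""
    if expr == '*':
        return {0}                      # the only move empties the poset
    op, left, right = expr
    gl, gr = gset(left), gset(right)
    if op == '+':                       # Fact 3.1 (uses Sprague-Grundy)
        return {mex(gl) ^ x for x in gr} | {x ^ mex(gr) for x in gl}
    return grundy_product(gl, gr)       # Lemma 3.2

def g(expr):
    return mex(gset(expr))
```

On NIM-like inputs this reproduces the familiar values: a single point has g-number 1, a two-element chain has g-number 2, and a parallel union of two equal chains has g-number 0.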
The g-number 0 occurs for every A_{n,n}, so the periodicity is 1. The g-number 1 occurs for every A_{2n,2n+1} and so has periodicity 2. In fact, one can find a periodic repetition for every g-number. The surprising thing is that this is still true when you allow elements in one chain to be less than elements in the other. Another family contains Chomp, described in Section 1.1.1. We can generalize Chomp to games where the rows do not have to contain the same number of elements. Byrnes showed that for such games there is a periodicity in the g-numbers when we fix the size of all but the top two rows. As Byrnes claims, this yields a polynomial-time decision algorithm for each family generated from a fixed A, but not a uniformly polynomial-time algorithm across the families, as the time is parameterized by A.

3.2.1 Bounded-width poset games

If a poset P has width k, that is, if k is the maximum size of any antichain in P, then there are at most |P|^k positions in the game: if x_0, x_1, . . . , x_{n−1} ∈ P are the elements chosen by the players in the first n moves of the game, then the resulting position is completely determined by the minimal elements of the set {x_0, . . . , x_{n−1}}, i.e., an antichain of size ≤ k. This means that, for constant k, one can compute the g-number of P in polynomial time using dynamic programming. The exponent on the running time depends on k, however. For certain families of bounded-width posets, one can beat the time of the dynamic programming algorithm; for example, one can compute the g-number of width-2 games in linear time.

3.2.2 Parity-uniform poset games

Daniel Grier recently showed that computing arbitrary poset game outcomes is PSPACE-complete (Theorem 4.13 and its proof, below). He reduces from True Quantified Boolean Formulas (see Section 4.2). His reduction constructs posets with only three levels, i.e., posets that can be partitioned into three antichains (equivalently, the maximum size of a chain is 3).
An obvious follow-up question is whether two-level poset games remain PSPACE-complete. This question is still open, but it has been shown that a certain subclass of two-level posets is easy, namely, the "parity-uniform" posets. This result builds on and extends earlier results of Fraenkel & Scheinerman.

Definition 3.4. Let P be a two-level poset, partitioned into two sets T (top points) and B (bottom points) so that for any x, y ∈ P, if x < y then x ∈ B and y ∈ T. We can then view P as a bipartite graph, where the points of P are the vertices and with an edge drawn between each x and y iff x < y. We say that P (viewed as a bipartite graph) is parity-uniform iff: (i) all top points have the same degree parity (i.e., degrees of top points are either all even or all odd); and (ii) there is a bipartition of the bottom points such that every top point has an odd number of neighbors in at least one of the partitions (one of the partitions could also be empty). A parity-uniform poset has a simple expression for its g-number.

Theorem 3.5 (F et al.). Let P be a two-level poset, viewed as a bipartite graph with bipartition T, B as in Definition 3.4, and suppose that P is parity-uniform. Let p ∈ {0, 1} be the common degree parity of the points in T. Let b := |B| mod 2 and let t := |T| mod 2. Then g(P) = b ⊕ t(p ⊕ 2).

Theorem 3.5 is proved by induction on |P| together with a case analysis.

4 Lower bounds

In this section we give some lower bounds on game complexity. There is a vast literature on combinatorial game complexity, and we make no attempt to be thorough, but rather concentrate on poset game complexity.

4.1 A note about representations of games

The complexity of a game depends quite a bit on its representation. The choice of representation is usually straightforward, but not always. For example, how should we represent an N-free poset?
Just via its Hasse diagram, or via an expression for the poset in terms of single points and parallel union and series union operators? The results of Valdes, et al. show that one representation can be converted into the other in polynomial time, so the choice of representation is not an issue unless we want to consider complexity classes within P or more succinct representations of posets, as we will do below. There, fortunately, our hardness results apply to either representation. Even if the representation of a game is clear, the results may be counterintuitive. For example, how should we represent members of the class of all finite games? In Section 2, we defined a game as an ordered pair of its left and right options. We must then represent the options, and the options of options, and so on. In effect, to represent an arbitrary finite game explicitly, we must give its entire game tree (actually, game DAG, since different sequences of moves may end up in the same position). Under this representation, there is a straightforward algorithm to compute the outcome of any game: use dynamic programming to find the outcome of every position in the game. Since every position is encoded in the string representing the game, this algorithm runs in polynomial time. What makes a game hard, then, is that we have a succinct representation for it that does not apply to all games. For example, the obvious representation of a poset game is the poset itself, and the number of positions is typically exponential in the size of the poset. Subfamilies of poset games may have even more succinct representations. For example, a Nim game can be represented as a finite list of natural numbers in binary, giving the sizes of the stacks, and a game of Chomp can be represented with just two natural numbers m and n in binary, giving the dimensions of the grid.
Notice that this Chomp representation is significantly shorter than what is needed to represent an arbitrary position in a Chomp game; the latter is polynomial in m + n. In what sense does finding a winning strategy in Chomp reduce to determining the outcome of Chomp games? We already know that every Chomp game is an ∃-game because it has a maximal point. We could find a winning strategy if we were able to determine the outcome of every Chomp position, but even writing down a query to an "outcome oracle" takes time linear in m + n, which is exponential in the input size. The more modest goal of finding a winning first move may be more feasible, because the position after one move is simple enough to describe by a polynomial-length query string. To our knowledge, no efficient algorithm is known to determine the outcome of an arbitrary Chomp position after a single move, even allowing time (m + n)^{O(1)}. We will have more to say about representations below when we discuss lower bounds for poset games within the complexity class P.

4.2 Some PSPACE-hard games

Many games have been shown PSPACE-hard over the years. Early on, Even and Tarjan showed that Hex generalized to arbitrary graphs is PSPACE-complete. A typical proof of PSPACE-hardness reduces the PSPACE-complete True Quantified Boolean Formulas (TQBF) problem to the outcome of a game. We can consider a quantified Boolean formula ϕ = (∃x_1)(∀x_2) · · · ψ (where ψ is a Boolean formula in conjunctive normal form (cnf)) itself as a game, where players alternate choosing truth values for x_1, x_2, . . . , the first player (Right, say) winning if the resulting instantiation of ψ is true, and Left winning otherwise.^14 TQBF seems ideal for encoding into other games. Thomas Schaefer showed a number of interesting games to be PSPACE-hard this way.
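The TQBF game just described is small enough to evaluate directly by minimax on tiny instances. The sketch below is our illustration (names and encoding are ours): players alternate fixing x_1, x_2, ... in order, the existential player moving first, and that player wins iff ψ comes out true.

```python
# A minimal illustration (ours) of TQBF viewed as a game.  Clauses are
# lists of literals: +i stands for x_i, -i for its negation (1-indexed).

def exists_wins(clauses, n, assignment=(), exists_turn=True):
    if len(assignment) == n:            # all variables set: evaluate psi
        return all(any((lit > 0) == assignment[abs(lit) - 1]
                       for lit in clause)
                   for clause in clauses)
    # the next player fixes the next variable; quantifiers alternate
    branches = [exists_wins(clauses, n, assignment + (v,), not exists_turn)
                for v in (False, True)]
    return any(branches) if exists_turn else all(branches)
```

For instance, (∃x_1)(∀x_2)(∃x_3) [(x_1 ∨ x_2) ∧ (¬x_2 ∨ x_3)] is a win for the existential player (set x_1 = 1, then answer x_2 = 1 with x_3 = 1), while (∃x_1)(∀x_2) [x_1 ∧ x_2] is not.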
^14 This is technically not a combinatorial game by our definition, because the end condition is different. One can modify the game slightly to make it fit our definition, however.

One interesting variant of TQBF that Schaefer proved PSPACE-complete is the game where a positive Boolean formula ψ is in cnf with no negations, and players alternate choosing truth values for the Boolean variables. Schaefer called this game G_pos(POS CNF). Unlike TQBF, however, the variables need not be chosen in order; players may choose to assign a truth value to any unassigned variable on any move. Left (who moves first) wins if ψ is true after all variables have been chosen, and Right wins otherwise. Since ψ is positive, Left always wants to set variables to 1 and Right to 0. As another example, consider Geography. The input is a directed graph G and a designated vertex s of G on which a token initially rests. The two players alternate moving the token on G from one node to a neighboring node, trying to force the opponent to move to a node that has already been visited. Geography is a well-known PSPACE-complete game [24, 27]. Lichtenstein & Sipser show that Geography is PSPACE-complete even for bipartite graphs. An obvious way to turn Geography into a black-white game is to color the nodes of graph G black and white. Each player is then only allowed to move the token to a node of their own color. Since moves are allowed only to neighboring nodes, the black-white version is equivalent to the uncolored version on bipartite graphs. The standard method of showing that Geography is PSPACE-complete is via a reduction from True Quantified Boolean Formulas (TQBF) to Geography. Observe that the graph constructed in this reduction is not bipartite. That is, there are nodes that potentially may be played by both players. Hence, we cannot directly conclude that the black-white version is PSPACE-complete.
However, Lichtenstein & Sipser show that Geography is indeed PSPACE-complete for bipartite graphs. We now consider the game Node Kayles. This game is defined on an undirected graph G. The players alternately play an arbitrary node from G. In one move, playing node v removes v and all the direct neighbors of v from G. In the black-white version of the game, we color the nodes black and white. Schaefer showed that determining the winner of an arbitrary Node Kayles instance is PSPACE-complete. He also extended the reduction to bipartite graphs, which automatically yields a reduction to the black-white version of the game. Therefore, black-white Node Kayles is also PSPACE-complete. The game of Col is a two-player combinatorial strategy game played on a simple planar graph, some of whose vertices may be colored black or white. During the game, the players alternate coloring the uncolored vertices of the graph. One player colors vertices white and the other player colors vertices black. A player is not allowed to color a vertex neighboring another vertex of the same color. The first player unable to color a vertex loses. A well-known theorem about Col is that the value of any game is either x or x + ∗ where x is a number. Removing the restriction that Col games be played on planar graphs and considering only those games in which no vertex is already colored, we get a new game, GenCol (generalized Col). It is shown that GenCol is PSPACE-complete; furthermore, GenCol games only assume the two very simple game values 0 and ∗. Stockmeyer & Chandra give examples of games that are complete for exponential time and thus provably infeasible.
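To pin down the Node Kayles rules described above, here is a brute-force outcome solver (our illustration, not from the text); it is exponential in the number of vertices, as the PSPACE-completeness result suggests any general algorithm may have to be.

```python
from functools import lru_cache

# Node Kayles: playing a node removes it and all its neighbors; under
# normal play, the player with no legal move loses.

def node_kayles_first_wins(vertices, edges):
    adj = {v: frozenset() for v in vertices}
    for u, w in edges:
        adj[u] |= {w}
        adj[w] |= {u}

    @lru_cache(maxsize=None)
    def wins(remaining):
        # first player wins iff some move leaves a losing position
        return any(not wins(remaining - {v} - adj[v]) for v in remaining)

    return wins(frozenset(vertices))
```

For example, on a single edge the first player wins (either move empties the graph), while on a 4-cycle every first move leaves one isolated vertex for the opponent, so the first player loses.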
4.3 Lower bounds for poset games

Until recently, virtually no hardness results were known relating to poset games, and the question of the complexity of determining the outcome of a game was wide open, save the easy observation that it is in PSPACE. For the moment, let PG informally denote the decision problem of determining the outcome of an arbitrary given (impartial) poset game, that is, whether or not the first player (Alice) can win the game with perfect play. The first lower bound on the complexity of PG we are aware of, and it is a modest one, was proved by Fabian Wagner in 2009. He showed that PG is L-hard^15 under FO-reductions (First-Order reductions). This is enough to show, for example, that PG ∉ AC^0. Soon after, Thomas Thierauf showed that PG is hard for NL under AC^0 reductions.^16 A breakthrough came in 2010, when Adam Kalinich, then a high school student near Chicago, Illinois, showed that PG is hard for NC^1 under AC^0 reductions. For the proof, he invents a clever way to obliviously "flip" the outcome of a game, i.e., to toggle the outcome between ∃ and ∀. This allows for the simulation of a NOT-gate in an NC^1 circuit. (An OR-gate can be simulated by the series union construction of Definition 1.2. See below.) The astute reader will notice that Kalinich's result appears to be weaker than the other two earlier results. In fact, the three results are actually incomparable with each other, because they make different assumptions about how poset games are represented as inputs. We say more about this below, but first we mention that Wagner's and Thierauf's results both hold even when restricted to Nim games with two stacks, and Kalinich's result holds restricted to N-free games. Modest as they are, these are currently the best lower bounds we know of for N-free poset games. Very recently, the complexity of PG was settled completely by Daniel Grier, an undergraduate at the University of South Carolina.
^15 L is short for LOGSPACE.
^16 NL is nondeterministic LOGSPACE.

He showed that PG is PSPACE-complete via a polynomial reduction (henceforth, p-reduction) from Node Kayles. Here, it is not important how a game is represented as an input, so long as the encoding is reasonable. His proof shows that PSPACE-completeness is still true when restricted to three-level games, i.e., posets where every chain has size at most three (equivalently, posets that are partitionable into at most three antichains). The games used in the reduction are of course not N-free.

4.4 Representing posets as input

As we discussed above, for any of the various well-studied families of poset games (Chomp, Divisors, Nim, etc.), there is usually an obvious and natural way to represent a game as input. For example, an instance of Chomp can be given with just two positive integers, one positive integer for Divisors, and a finite list of positive integers for Nim, giving the heights of the stacks. When considering arbitrary finite posets, however, there is no single natural way to represent a poset as input, but rather a handful of possibilities, and these may affect the complexity of various types of poset games. We consider two broad genres of poset representation:

Explicit The poset is represented by an explicit data structure, including the set of points and the relations between them. In this representation, the size of the poset is always comparable to the size of the input.

Succinct (Implicit) The poset is represented by a Boolean circuit with two n-bit inputs. The inputs to the circuit uniquely represent the points of the poset, and the (1-bit) output gives the binary relation between these two inputs. In this representation, the size of the poset can be exponential in the size of the circuit.
Within each representational genre, we will consider three general approaches to encoding a poset P, in order from "easiest to work with" to "hardest to work with":

Partial Order (PO) P is given as a reflexive, transitive, directed acyclic graph, where there is an edge from x to y iff x ≤ y.

Hasse Diagram (HD) P is given as a directed acyclic graph whose reflexive, transitive closure (i.e., reachability relation) is the ordering ≤. The graph then gives the Hasse diagram of P.

Arbitrary (binary) Relation (AR) An arbitrary directed graph (or arbitrary binary relation) is given, whose reflexive, transitive closure is then a pre-order whose induced partial order is P. (Equivalently, P is the set of strongly connected components, and ≤ is the reachability relation between these components.)

The first two (PO and HD) must involve promises that the input satisfies the corresponding constraint, so problems in these categories are posed as promise problems. Notice that the PO promise is stronger than the HD promise, which is stronger than the AR (vacuous) promise. So in either the Explicit or Succinct cases, the complexity of the corresponding problems increases monotonically as PO → HD → AR. We will ignore some additional subtleties: In the explicit case, is the graph (or relation) given by an adjacency matrix or an array of edge lists? In the succinct case, should we be able to represent a poset whose size is not a power of 2? For example, should we insist on including a second circuit that tells us whether a given binary string represents a point in the poset? These questions can generally be finessed, and they do not affect any of the results.
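The AR encoding's recipe (collapse strongly connected components, then take reachability between them) can be sketched concretely. The following is our illustration under the explicit representation; it uses a simple Warshall-style closure, so it is O(n^3) and meant only to make the definition concrete.

```python
from itertools import product

# AR encoding (ours): given an arbitrary digraph, two vertices lie in
# the same strongly connected component iff each reaches the other,
# and the induced partial order P is reachability between components.

def induced_partial_order(n, edges):
    """Vertices are 0..n-1.  Returns (components, leq): the SCCs as
    frozensets, and the reachability relation between them."""
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k, i, j in product(range(n), repeat=3):   # Warshall closure
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    comp = {u: frozenset(v for v in range(n)
                         if reach[u][v] and reach[v][u])
            for u in range(n)}
    components = set(comp.values())
    leq = {(a, b) for a in components for b in components
           if reach[next(iter(a))][next(iter(b))]}
    return components, leq
```

For example, the digraph 0 → 1, 1 → 0, 1 → 2 collapses to a two-element chain: the component {0, 1} lies below the component {2}.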
4.5 The decision problems

The two genres and three approaches above can be combined to give six versions of the basic decision problem for arbitrary posets: the three explicit problems PG(Explicit, PO), PG(Explicit, HD), and PG(Explicit, AR); and the three succinct problems PG(Succinct, PO), PG(Succinct, HD), and PG(Succinct, AR). We will define just a couple of these, the others being defined analogously.

Definition 4.1. PG(Succinct, HD) is the following promise problem:

Input: A Boolean circuit C with one output and two inputs of n bits each, for some n.
Promise: G is acyclic, where G is the digraph on {0, 1}^n whose edge relation is computed by C.
Question: Letting P be the poset given by the reachability relation on G, is P an ∃-game?

Definition 4.2. PG(Explicit, AR) is the following promise problem:

Input: A digraph G on n nodes.
Promise: None.
Question: Letting P be the poset given by the reachability relation on the strongly connected components of G, is P an ∃-game?

We also can denote subcategories of poset games the same way. For example, Nim(Explicit, HD) is the same as PG(Explicit, HD), but with the additional promise that the poset is a parallel union of chains; for any k > 0, Nim_k(Explicit, HD) is the same as Nim(Explicit, HD) but with the additional promise that there are at most k chains; N-Free(Succinct, PO) is the same as PG(Succinct, PO) with the additional promise that the poset is N-free.

4.6 The first results

Here are the first lower bounds known for poset games, given roughly in chronological order. The first four involve Nim; the first two of these consider explicit games, and the next two consider succinct games. None of these results is currently published, and we will give sketches of their proofs here.

Theorem 4.3 (Wagner, 2009). Nim_4(Explicit, HD) is L-hard under AC^0 reductions.
The proof reduces from the promise problem ORD (order between vertices), which is known to be complete for L via quantifier-free projections [8, 18].

Proof. The promise problem ORD (order between vertices) is known to be complete for L via quantifier-free projections [8, 18]:

Input: A directed graph G on n nodes (given by a binary edge relation E(G)) and two distinct vertices x and y of G.
Promise: G is a single directed path with no cycles.
Question: Is y reachable from x in G?

We may assume that both x and y have successors along the path in G, say, s and t, respectively; otherwise, the problem is trivial. We can translate any instance ⟨G, x, y⟩ of ORD into an instance P of Nim_4(Explicit, HD) (i.e., a dag consisting of at most four disjoint simple paths) so that y is reachable from x if and only if P (considered as a poset game) is an ∃-game. We do this as follows: P contains two disjoint copies of G, say, G and G′, where we label vertices of G with unprimed letters and the corresponding duplicate vertices in G′ with primed letters. We make the following additional changes to P:

• Remove the edges (x, s) and (y, t) from E(G), and remove the edges (x′, s′) and (y′, t′) from E(G′).
• Add crossing edges (y, t′) and (y′, t) to E(P).
• Add two directed paths p_1 → p_2 → · · · → p_n and q_1 → q_2 → · · · → q_n to P, both of length n.
• Add connecting edges (p_n, v) and (x, q_1) to E(P), where v is the initial vertex along the path of G.

Let w be the final vertex of G. The two possible scenarios for P are shown in Figure 3.

[Figure 3: The construction of P from G. G is shown at the top in the case where y is reachable from x. Shown immediately below is P in this case. Below that is shown P when y is not reachable from x.]
If y is reachable from x, then we get the Nim game near the top of the figure, whose g-number is of the form (2n + k) ⊕ k for some k, owing to the two paths on the left (the paths on the right are the same length, so they cancel). This is nonzero, hence P is an ∃-game. Otherwise, we have the game at the bottom of the figure, and this is clearly a ∀-game, consisting of two pairs of paths of equal length. The construction of P from G can be done in AC^0, which proves the theorem.

Theorem 4.4 (Thierauf, 2009). Nim_2(Explicit, AR) is NL-hard under AC^0 reductions.

The proof reduces from the reachability problem for directed graphs, which is NL-complete under AC^0 reductions.

Proof. We reduce from the reachability problem for directed graphs, which is NL-complete under AC^0 reductions:

Input: A directed graph G on n nodes (given by a binary edge relation E(G)) and two distinct vertices s and t of G.
Question: Is t reachable from s in G?

[Figure 4: The graph H constructed from G.]

Given G as above, we construct a (possibly cyclic) digraph H whose corresponding poset game is an ∃-game if and only if t is reachable from s in G. (Recall that a move in a digraph corresponds to removing a vertex and all vertices reachable from it.) We let H be two disjoint copies of G, say, G and G′, where s′ and t′ are the vertices in G′ corresponding to s and t in G, respectively. We then add two more edges to H: one from t to s′ and the other from t′ to s. See Figure 4. The construction of H from G is clearly AC^0. If t is reachable from s in G, then choosing, say, s removes from H all vertices except those not reachable from either s or s′. This is a winning move, because the remaining graph consists of two disjoint, identical components—one in G and the other in G′, and so it is the parallel union of identical subgames, thus a ∀-game. If t is not reachable from s in G, then the game H itself consists of two disjoint, identical subgraphs, and so is a ∀-game.
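The two-copies construction in this proof is easy to test on small instances. The sketch below is our illustration (all names are ours): a move in the digraph game removes a vertex and everything reachable from it in the original graph, matching the AR semantics, and `build_h` duplicates G and adds the edges t → s′ and t′ → s.

```python
from functools import lru_cache

# A small check (ours) of Thierauf's two-copies idea.

def reachable_from(adj, v):
    """All vertices reachable from v in the original graph, including v."""
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj.get(u, ()))
    return seen

def digraph_game_first_wins(vertices, edges):
    adj = {}
    for u, w in edges:
        adj.setdefault(u, []).append(w)

    @lru_cache(maxsize=None)
    def wins(remaining):
        return any(not wins(remaining - frozenset(
                       x for x in reachable_from(adj, v) if x in remaining))
                   for v in remaining)

    return wins(frozenset(vertices))

def build_h(vertices, edges, s, t):
    """Two copies of G, tagged (0, v) and (1, v), plus the crossing
    edges t -> s' and t' -> s from the proof."""
    vs = [(i, v) for i in (0, 1) for v in vertices]
    es = [((i, u), (i, w)) for i in (0, 1) for u, w in edges]
    es += [((0, t), (1, s)), ((1, t), (0, s))]
    return vs, es
```

With G a path s → a → t (t reachable from s), H is an ∃-game; with G two isolated vertices s and t, H falls apart into two equal chains and is a ∀-game, as the proof predicts.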
The next result about succinct poset games is straightforward.

Theorem 4.5 (F, 2009). Nim_2(Succinct, PO) is coC=P-hard under p-reductions.

The idea here is that, for any L ∈ coC=P and any input x, we produce two NIM stacks, and x ∈ L if and only if they are of unequal length.

Proof. If L is a language in coC=P, then by standard results in complexity theory, there exists a positive polynomial p(n) and a polynomial-time predicate R such that, for all n and x ∈ {0, 1}^n, we have

x ∈ L ⇐⇒ |{y ∈ {0, 1}^{p(n)} : R(x, y)}| ≠ 2^{p(n)−1}.

Then given x of length n, we can construct in polynomial time a Boolean circuit C_x that takes two p(n)-bit inputs and produces a one-bit output such that

C_x(y, z) = 1 ⇐⇒ y ≤ z ∧ R(x, y) = R(x, z)

for all y, z ∈ {0, 1}^{p(n)}. The circuit C_x computes a partial order relation on {0, 1}^{p(n)} which is the parallel union of two chains. The size of one chain is the number of y ∈ {0, 1}^{p(n)} such that R(x, y) holds, and the sum of the two sizes is 2^{p(n)}. Thus x ∈ L if and only if the chains are of unequal size, if and only if the resulting two-stack Nim game is an ∃-game.

Theorem 4.6 (F, 2009). Nim_6(Succinct, HD) is PSPACE-hard under p-reductions.

The proof uses a result of Cai & Furst based on techniques of David Barrington on bounded-width branching programs. Recall that S_5 is the group of permutations of the set {1, 2, 3, 4, 5}. Their result is essentially as follows:

Theorem 4.7 (Cai & Furst). For any PSPACE language L, there exists a polynomial p and a polynomial-time computable (actually, log-space computable) function σ such that, for all strings x of length n and positive integers c (given in binary), σ(x, c) is an element of S_5, and x ∈ L if and only if the composition σ(x, 1) σ(x, 2) σ(x, 3) · · · σ(x, 2^{p(n)}), applied left to right, fixes the element 1.

The idea is that we connect the first five NIM stacks level-by-level via permutations in S_5, as well as adding a couple of widgets.
If the product of all the permutations fixes 1, then we get five NIM stacks of equal length N and one NIM stack of length N + 2, which is an ∃-game by the Sprague-Grundy theorem. If 1 is not fixed, then we get four stacks of length N and two of length N + 1—a ∀-game by the same theorem.

Proof of Theorem 4.6. Fix L ∈ PSPACE, and let p and σ be as in Cai & Furst's result above. For any x of length n, we define a directed acyclic graph G_x as follows: G_x has 6 · 2^{p(n)} + 2 vertices that come in three types (letting N = 2^{p(n)}):

1. For c = 0, 1, 2, . . . , N and all k ∈ {1, 2, 3, 4, 5}, u^k_c is a vertex of G_x.
2. For c = 0, 1, 2, . . . , N, v_c is a vertex of G_x.
3. G_x has two additional vertices s and t.

For convenience, let σ_c denote σ(x, c). The graph G_x has three kinds of edges (and no others):

1. For c = 1, 2, 3, . . . , N and all k ∈ {1, 2, 3, 4, 5}, (u^k_{c−1}, u^{σ_c(k)}_c) is an edge of G_x.
2. For c = 1, 2, 3, . . . , N, (v_{c−1}, v_c) is an edge of G_x.
3. (s, u^1_0) and (u^1_N, t) are edges of G_x.

A typical G_x is shown in Figure 5.

[Figure 5: The graph G_x constructed from x.]

The columns of vertices (besides s and t) are indexed by c running from 0 to N. The five rows of u-vertices are indexed by k ∈ {1, 2, 3, 4, 5}. The k'th u-vertex in column c − 1 has one outgoing edge to the σ_c(k)'th u-vertex in column c. Then it is evident that the game G_x consists of six NIM stacks—the first five involving u-vertices and the last consisting of the v-vertices. Let σ ∈ S_5 be the left-to-right composition σ_1 σ_2 · · · σ_N. If σ fixes 1, then s and t lie in the same stack, which thus has length N + 3, and the other five stacks have length N + 1. Otherwise, s and t lie in different stacks, and thus G_x has two stacks of length N + 2 and four stacks of length N + 1. In the former case, G_x is an ∃-game and in the latter case, G_x is a ∀-game.
This shows that x ∈ L if and only if G_x is an ∃-game. Since each permutation σ_c is computed uniformly in polynomial time, one can easily (time polynomial in n) construct a Boolean circuit computing the edge relation on G_x as well as a membership test for V(G_x). Thus we have a p-reduction from L to Nim_6(Succinct, HD).

Although the above results all mention Nim, the representations we use of a Nim game as a poset are not the natural one. Therefore, it is better to consider these as lower bounds on N-free poset games, which are naturally represented as posets. The next results regard N-free games. They depend on Adam Kalinich's game outcome-flipping trick. The trick turns a poset game A into another poset game ¬A with opposite outcome, starting with A and applying series and parallel union operations in a straightforward way. Here we describe a simplification of the trick due to Daniel Grier: Given a poset A,

1. Let k be any (convenient) natural number such that 2^k ≥ |A| (that is, A has at most 2^k elements).
2. Let B := A/C_{2^k−1}.
3. Let C := B + C_{2^k}.
4. Let D := C/C_1.
5. Finally, define ¬A := D + A.

Let's check the following

Claim 4.8. If g(A) ≠ 0, then g(¬A) = 0. If g(A) = 0, then g(¬A) = 2^{k+1}.

Proof. Recall that g(P) ≤ |P| for any poset P, and thus g(A) ≤ 2^k. By Fact 2.31, g(B) = g(A) + 2^k − 1, so if g(A) = 0, then g(B) < 2^k, and otherwise, 2^k ≤ g(B) < 2^{k+1}, which implies the (k + 1)st least significant bit position of g(B) is 1. By Theorem 2.28, g(C) = g(B) ⊕ g(C_{2^k}) = g(B) ⊕ 2^k, which is just g(B) with its (k + 1)st bit flipped. So if g(A) = 0, then clearly, g(C) = g(B) + 2^k = g(A) + 2^{k+1} − 1 = 2^{k+1} − 1, and otherwise, g(C) = g(B) − 2^k = g(A) − 1. Next, we have g(D) = g(C) + 1, and so g(D) = 2^{k+1} if g(A) = 0, and g(D) = g(A) otherwise. Finally, this gives

g(¬A) = g(D) ⊕ g(A) = 2^{k+1} if g(A) = 0, and 0 if g(A) ≠ 0,

and we are done.

Observe that the size of ¬A is linearly bounded in |A|. In fact, |¬A| ≤ 6|A| if A ≠ ∅.
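The g-number bookkeeping in Claim 4.8 can be checked numerically. The sketch below (ours) uses only the two rules the proof invokes, g(P/C_n) = g(P) + n (Fact 2.31) and g(P + Q) = g(P) ⊕ g(Q) (Theorem 2.28), and confirms the flip for every legal value of g(A).

```python
# A numeric sanity check (ours) of Claim 4.8.

def g_flip(g_a, k):
    """g(¬A) as a function of g(A), for any k with 2^k >= |A|
    (which forces g(A) <= 2^k)."""
    g_b = g_a + (1 << k) - 1        # B = A / C_{2^k - 1}   (Fact 2.31)
    g_c = g_b ^ (1 << k)            # C = B + C_{2^k}       (Theorem 2.28)
    g_d = g_c + 1                   # D = C / C_1           (Fact 2.31)
    return g_d ^ g_a                # ¬A = D + A            (Theorem 2.28)

k = 5
assert g_flip(0, k) == 1 << (k + 1)                        # g(A) = 0
assert all(g_flip(v, k) == 0 for v in range(1, (1 << k) + 1))
```

The same four-line computation, with 2^k − 1 replaced by 2^k − t and C_1 by C_t, verifies equation (1) for the Threshold operator defined below in the same way.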
Theorem 4.9 (Kalinich). N-Free(Explicit, PO) is NC^1-hard under AC^0 reductions.

Proof sketch. We reduce from the Circuit Value problem for NC^1 circuits with a single output. Given an NC^1 circuit C with a single output and whose inputs are constant Boolean values, we produce a poset game P so that P is an ∃-game if and only if C = 1. We can assume WLOG that all gates in C are either (binary) OR-gates or NOT-gates. Starting with the input nodes, we associate a poset P_n with every node n in C from the bottom up so that the outcome of P_n matches the Boolean value at node n. P is then the poset associated with the output node of C. The association is as follows:

• If n is an input node, we set P_n := ∅ if n = 0; otherwise, if n = 1, we set P_n := C_1.
• If n is an OR-gate taking nodes ℓ and r as inputs, then we set P_n := P_ℓ/P_r. (Recall Exercise 1.4.)
• If n is a NOT-gate taking node c as input, we set P_n := ¬P_c.

This transformation from C to P can be done in (uniform) AC^0, producing a poset of polynomial size, provided C has O(log n) depth.

The next theorem is not published elsewhere.

Theorem 4.10 (F, 2011). N-Free(Succinct, PO) is PP-hard under p-reductions.

To prove this theorem, we first need to generalize the Kalinich/Grier construction a bit.

Definition 4.11. For any poset A and any integer t > 0, define

Threshold(A, t) := (((A/C_{2^k−t}) + C_{2^k})/C_t) + A,

where k is any convenient natural number (the least, say) such that 2^k > max(|A| − t, t − 1).

Note that ¬A = Threshold(A, 1). A proof virtually identical to that of Claim 4.8 shows that

g(Threshold(A, t)) = 2^{k+1} if g(A) < t, and 0 if g(A) ≥ t.   (1)

We then use the Threshold(·, ·) operator to polynomially reduce any PP language to N-Free(Succinct, PO). The next fact is routine and needed for the proof of Theorem 4.10.

Fact 4.12. Given as input a value of t and the succinct representation of a poset A, one can build a succinct representation of Threshold(A, t) in polynomial time.
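Equation (1) can be verified with the same g-arithmetic used for Claim 4.8. The sketch below is our own illustration; it manipulates Grundy numbers only (not actual posets) and mirrors the definition of Threshold(A, t) step by step:

```python
def g_threshold(gA, sizeA, t):
    """g(Threshold(A, t)) from g(A), |A| and t, using
    g(A/C_m) = g(A) + m and g(A + B) = g(A) XOR g(B)."""
    assert t > 0 and 0 <= gA <= sizeA
    k = 0
    while 2 ** k <= max(sizeA - t, t - 1):  # least k with 2^k > max(...)
        k += 1
    g = gA + (2 ** k - t)   # A / C_{2^k - t}
    g ^= 2 ** k             # ... + C_{2^k}
    g += t                  # ... / C_t
    return g ^ gA           # ... + A
```

One can check that the result is 2^{k+1} whenever g(A) < t and 0 whenever g(A) ≥ t, in agreement with (1); with t = 1 this reproduces Claim 4.8.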
Proof of Theorem 4.10. By standard results in complexity, for any L ∈ PP, there is a polynomial p and a polynomial-time function x ↦ B_x mapping inputs to Boolean circuits such that, for all x, (i) B_x has q := p(|x|) many input nodes; and (ii) x ∈ L if and only if B_x(y) = 1 for at least 2^{q−1} many inputs y. We can assume WLOG that q ≥ 2. Given B_x, we can in polynomial time construct a circuit D_x with two input registers of q bits each, such that for all y, z ∈ {0, 1}^q, D_x(y, z) = 1 if and only if either: (a) y = z, or (b) y < z and B_x(y) = B_x(z) = 1. Suppose |{y : B_x(y) = 1}| = k. Then D_x is the succinct PO representation of the poset P := C_k + A_{2^q−k}, consisting of the parallel union of a chain of length k with an antichain of length 2^q − k. Using Theorem 2.28, we get that

g(P) = g(C_k) ⊕ g(A_{2^q−k}) = k ⊕ (k mod 2),

the latter quantity being either k or k − 1, whichever is even. Now let T := ¬Threshold(P, 2^{q−1}). Then T is an ∃-game if and only if g(Threshold(P, 2^{q−1})) = 0, if and only if g(P) ≥ 2^{q−1}, if and only if k ≥ 2^{q−1} (note that 2^{q−1} is even, because q ≥ 2), if and only if x ∈ L. Since T is clearly N-free, and a circuit for T can be constructed from x in polynomial time, this shows that L ≤^p_m N-Free(Succinct, PO).

4.7 A note on the complexity of the g-number

Of course, computing the g-number of an impartial game is at least as hard as computing its outcome, the latter just being a test of whether the g-number is zero. Is the reverse true, i.e., can we polynomial-time reduce computing the g-number to computing the outcome? For explicitly represented poset games, this is certainly true. Given an oracle S returning the outcome of any poset game, we get the g-number of a given poset game G as follows: query S with the games G, G + C_1, G + C_2, . . . , G + C_n, where n is the number of options of G (recall that C_i is a NIM stack of size i). By the Sprague-Grundy theorem (Theorem 2.28), all of these are ∃-games except G + C_{g(G)}, which is a ∀-game.
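The oracle reduction just described is a simple linear scan. A minimal sketch (the outcome oracle is abstracted as a callback, and a game is modeled abstractly as a pair (G, i) standing for G + C_i; names are ours):

```python
def grundy_from_outcome(G, outcome, n):
    """Recover g(G) for an explicitly represented game, as in the text:
    among G + C_0, G + C_1, ..., G + C_n, the unique Forall-game is
    G + C_{g(G)}.  `outcome(game)` must return 'E' or 'A'; here C_0
    denotes the empty chain, so (G, 0) is just G itself."""
    for i in range(n + 1):
        if outcome((G, i)) == 'A':
            return i
    raise ValueError("g(G) out of range")
```

With a mock oracle that answers 'A' exactly when the attached stack has size g(G), the scan returns g(G) after at most n + 1 nonadaptive queries.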
What about succinctly represented games? The approach above can't work, at least for poset games, because the poset has exponential size. Surprisingly, we can still reduce the g-number to the outcome for succinct poset games in polynomial time, using the threshold construction of Definition 4.11 combined with binary search. Given a succinctly represented poset P of size ≤ 2^n, first query S with Threshold(P, 2^{n−1}). If S says that this is an ∃-game, then we have g(P) < 2^{n−1}; otherwise, g(P) ≥ 2^{n−1}. Next, query S with Threshold(P, 2^{n−2}) in the former case and Threshold(P, 3 · 2^{n−2}) in the latter case, and so on. Note that in this reduction the queries are adaptive, whereas they are nonadaptive for explicitly represented games.

4.8 PSPACE-completeness

In this section we sketch the proofs of two recent PSPACE-completeness results for poset games. The first, by Daniel Grier, is that the outcome problem for general explicit (impartial) poset games is PSPACE-complete. The second is a similar result about the complexity of black-white poset games.

Theorem 4.13 (Grier). Deciding the outcome of an arbitrary finite poset game is PSPACE-complete.

Proof. Membership in PSPACE is clear. For PSPACE-hardness, we reduce from Node Kayles. Let G = (V, E) (a simple undirected graph) be an arbitrary instance of Node Kayles. By altering the graph slightly if necessary without changing the outcome of the game, we can assume that |E| is odd and that for every v ∈ V there exists e ∈ E not incident with v. We can do this by adding two disjoint cliques to G: either two K_2's or a K_2 and a K_4, whichever of these options results in an odd number of edges.

[Figure 6: The < relations in P obtained from the edge e = {v_1, v_2} in G.]
We then construct the following three-level poset P from G:

• The points of P are grouped into three disjoint antichains, A, B, and C, with A being the set of minimal points, C the maximal points, and B the points intermediate between A and C.
• For each edge e ∈ E there correspond unique points c_e ∈ C and a_e ∈ A, and vice versa.
• We let B := V.
• For each edge e = {v_1, v_2} and b ∈ B, we have b < c_e iff b = v_1 or b = v_2, and a_e < b iff this is not the case, i.e., iff b ≠ v_1 and b ≠ v_2.

This is illustrated in Figure 6. This construction can clearly be done in polynomial time, given G. Now we show the outcomes are the same for the two games: the winning player in the game G (Left, say, who may play first or second) can also win in the game P by playing the B-points corresponding to the vertices she plays to win in G, for as long as Right does the same. When Right first deviates from this type of play (and he must, because he loses the game G), Left can respond as follows:

• If Right plays some v ∈ B adjacent (in G) to some other u ∈ B already played, then Left plays a_{u,v}, resulting in an empty poset.
• If Right plays c_e ∈ C for some e ∈ E, then Left plays a_e, leaving an antichain of size 2.
• If Right plays a_e ∈ A for some e = {u, v} ∈ E, then
  – if either u or v has already been played, then Left plays the other vertex, leaving only an even number of points in P, all of them in A, and
  – if neither u nor v has been played, then Left plays c_e, leaving u, v ∈ B and an even number of points in A.

In the latter case, if Right then plays either u or v, then Left plays the other vertex. Otherwise, if Right plays some a_{e′}, then this removes at least one of u and v, say u. Then Left plays some a_{e′′} where e′′ is not incident to v, thus removing v (if it still remains) and leaving an even number of points in P, all of them in A.

Thus the winner of G is the same as the winner of P.

Finally, we turn to the complexity of black-white poset games.
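For concreteness, the defining relations of Grier's poset P can be generated directly from G. This sketch uses our own encoding (a_e and c_e are tagged pairs, edges are 2-element frozensets), and it returns only the generating pairs b < c_e and a_e < b; the relations a_e < c_{e′} implied by transitivity through B are left implicit:

```python
def kayles_to_poset(vertices, edges):
    """Three-level poset P from a Node Kayles instance G = (V, E).
    Returns a set of pairs (x, y) meaning x < y in P."""
    less = set()
    for e in edges:
        for b in vertices:
            if b in e:
                less.add((b, ('c', e)))   # b < c_e iff b is an endpoint of e
            else:
                less.add((('a', e), b))   # a_e < b iff b is NOT an endpoint
    return less
```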
The next theorem is the first PSPACE-hardness result for a numeric game.

Theorem 4.14. Determining the outcome of a black-white poset game is PSPACE-complete.

Proof sketch. Membership in PSPACE is straightforward. For hardness, we reduce from TQBF. We present the reduction in detail and briefly describe optimal strategies for the winning players, but we do not show correctness; see the full paper for a proof. Suppose we are given a fully quantified Boolean formula ϕ of the form

∃x_1 ∀x_2 ∃x_3 · · · ∃x_{2n−1} ∀x_{2n} ∃x_{2n+1} f(x_1, x_2, . . . , x_{2n+1}),

where f = c_1 ∧ c_2 ∧ · · · ∧ c_m is in CNF with clauses c_1, . . . , c_m. We define a two-level black-white poset (game) X based on ϕ as follows:

• X is divided into sections. There is a section (called a stack) for each variable, a section for the clauses (the clause section), and a section for fine-tuning the balance of the game (the balance section).
• The ith stack consists of a set of incomparable waiting nodes W_i above (i.e., greater than) a set of incomparable choice nodes C_i. We also have a pair of anti-cheat nodes, α_i and β_i, on all stacks except the last stack. For odd i, the choice nodes are white, the waiting nodes are black, and the anti-cheat nodes are black. The colors are reversed for even i.
• The set of choice nodes C_i consists of eight nodes corresponding to all configurations of three bits (i.e., 000, 001, . . . , 111), which we call the left bit, assignment bit, and right bit, respectively.
• The number of waiting nodes is |W_i| = (2n + 2 − i)M, where M is the number of non-waiting nodes in the entire game. It is important that |W_i| ≥ |W_{i+1}| + M.
• The anti-cheat node α_i is above nodes in C_i with right bit 0 and nodes in C_{i+1} with left bit 0. Similarly, β_i is above nodes in C_i with right bit 1 and nodes in C_{i+1} with left bit 1.
• The clause section contains a black clause node b_j for each clause c_j, in addition to a black dummy node.
The clause nodes and dummy node are all above a single white interrupt node. The clause node b_j is above a choice node z in C_i if the assignment bit of z is 1 and x_i appears positively in c_j, or if the assignment bit of z is 0 and x_i appears negatively in c_j.

• The balance section or balance game is incomparable with the rest of the nodes. It consists of eight black nodes below a white node and is designed to have numerical value 7½. All nodes in this section are called balance nodes.

The number of nodes is polynomial in m and n, so the poset can be efficiently constructed from ϕ. A sample construction is shown in Figure 7. The idea is that players take turns playing choice nodes, starting with White, and the assignment bits of the nodes they play constitute an assignment of the variables x_1, . . . , x_{2n+1}. The assignment destroys satisfied clause nodes, and it turns out that Black can win if there remains at least one clause node. The waiting nodes and anti-cheat nodes exist to ensure players take nodes in the correct order. The interrupt node and dummy node control how much of an advantage a clause node is worth (after the initial assignment), and the balance nodes ensure the clause-node advantage can decide whether White or Black wins the game. One can show that White (i.e., Right) can force a win when playing first if and only if the formula is true.

[Figure 7: An example game with three variables (n = 1). Circles represent individual nodes, blobs represent sets of nodes, and χ is the set of clause nodes. An edge indicates that some node in the lower level is less than some node in the upper level. The dotted lines divide the nodes into sections (stacks, clause section and balance section).]

Suppose that White and Black agree to play choice nodes in order, thus producing a truth assignment a_1, a_2, . . . via the assignment bits.
The other bits are arbitrary, but players would do well to choose each left bit to preserve the remaining anti-cheat node in the previous stack, starting with the second move (so Black preserves a black anti-cheat node in stack 1, White an anti-cheat node in stack 2, etc.). This continues until White plays a choice node in C_{2n+1}. At this point, all the variables have been assigned, but there are still points in X; we assume the players continue under optimal play. Assuming both players stick to the agreement, one can show that White wins (under optimal play) if and only if ϕ is true. The rest of the proof shows that either player can win if the other player violates the agreement ("cheats"). Here, we only describe what to do when your opponent cheats. We think of the game as having two phases. The first phase ends when the players have taken at least one node from each C_i. The second phase begins when the first phase ends and lasts until the end of the game. If the players stick to the agreement as described above, then the last move in the first phase coincides with White setting the truth value a_{2n+1} by playing in C_{2n+1}.

4.8.1 Phase one strategy

In phase one, our strategy for White is the same as our strategy for Black: play fair (no cheating!) until our opponent cheats. If our opponent cheats, then reply according to the following rules, and continue to reply according to these rules for future moves. For the following rules, stack i is the leftmost stack containing waiting nodes of our color (i.e., we are waiting for our opponent to play in stack i).

• If the opponent moves in C_j, then
  – if j = 2n + 1, then take a waiting node in W_i; else
  – if it is their first move in C_j, reply in C_{j+1}. Choose a node that saves one of your anti-cheat nodes and destroys your opponent's anti-cheat nodes where possible. The assignment bit of your reply will not matter.
  – if it is not their first move in C_j, take a waiting node in W_i.
• If the opponent takes a waiting node in W_{j+1}, then take a node in W_j.
• If the opponent takes an anti-cheat node, a clause node, the dummy node, the interrupt node, or a balance node, then take a waiting node in W_i.

Observe that we take a waiting node in W_j if the opponent takes a non-waiting node (this can happen at most M times) or takes a waiting node in W_{j+1}. By construction, |W_j| ≥ M + |W_{j+1}|, so we cannot run out of waiting nodes. Similarly, we only take a node in C_{j+1} when the opponent takes their first node from C_j, so we have all eight nodes to choose from when we play in C_{j+1}. In other words, the strategy never asks us to take a node that isn't there; the reply moves are always feasible.

4.8.2 Phase two strategy

Let H be the black-white poset game at the start of phase two, and let k be the number of surviving clause nodes in H. Assuming no cheating in phase one, each player took exactly one choice node from each stack in phase one, and since there are more white C_i's, Black has the first move in phase two. The waiting nodes in W_i are gone because some node in C_i is missing for all i. Similarly, there is at most one anti-cheat node in each stack, since at least one was destroyed by the missing choice nodes on either side. Our description of phase two consists of a series of facts:

• A player can always avoid destroying their own anti-cheat nodes in H, and therefore we may assume it is impossible for a player to destroy their own anti-cheat node. This gives us a new, equivalent game H′ ≈ H, where in H′ the anti-cheat node in stack i is incomparable with all the choice nodes in stack i + 1, for i = 1, . . . , 2n.
• It is optimal (in H′) for White to take the interrupt node after Black's first move, as long as the dummy node is intact.
• It is optimal for Black to take a clause node on his first move in H′, if one exists.

It follows that the clause nodes are gone by Black's second move in H′. Let J be H′ with its clause section removed.
Then every section (i.e., each stack and the balance section) in J is incomparable with the rest of J. This means we can write J as the sum of much simpler games:

J = J_1 + J_2 + · · · + J_{2n} + J_{2n+1} + B,

where J_i is the i'th stack component of J and B is the balance nodes. J_i has numerical value ±7 without an anti-cheat node, and ±6½ with an anti-cheat node, where the sign is (−1)^i. Note that the last stack, i = 2n + 1, does not contain an anti-cheat node, and so its value is −7. The balance section B has value 7½ by construction (see Exercise 2.19), so if all the anti-cheat nodes survive,

v(J) = ∑_{i=1}^{2n+1} v(J_i) + v(B) = 6½ ∑_{i=1}^{2n} (−1)^i − 7 + 7½ = ½.

We call this the baseline value. If ϕ is true (and Black does not cheat), then White manages to clear away all the clause nodes in phase one. So then H′ = J + C, where C is just the interrupt node and dummy node. Since v(C) = −½, we get v(H′) = 0, which is a win for White (because Black plays first in H′). If Black cheats, one can show that she does so at the cost of one of her anti-cheat nodes, which again reduces v(H′) to 0, a win for White.

If ϕ is false (and White does not cheat), then White cannot clear all the clause nodes in phase one. Black then plays a clause node to start phase two, after which White plays the interrupt node. The remaining game is J, with no clause section and all anti-cheat nodes, whose value is ½, a win for Black. If White tries to cheat, then he may be able to destroy all clause nodes, but at the expense of at least one white anti-cheat node. The clause section subtracts ½, but losing an anti-cheat node adds ½, bringing us back to the baseline ½, a win for Black.

5 Open questions

Are there interesting games whose complexity is complete for a subclass of PSPACE?
The natural black-white version of GenCol is complete for the class P^{NP[log]} (that is, the class of decision problems computable in polynomial time with O(log n) many oracle queries to an NP language), but the game itself and the reasons for its complexity are not so interesting. In this version, each uncolored node is reserved ("tinted") for being colored one or the other color, e.g., some node u can only be colored black, while some other node v can only be colored white, and so on for all the nodes. Then the outcome of this game depends only on which subgraph (the black-tinted nodes or the white-tinted nodes) contains a bigger independent set. Given two graphs G_1 and G_2, the problem of determining whether G_1 has a bigger independent set than G_2 is known to be complete for P^{NP[log]}.

Fix a natural number k > 2. For poset games of bounded width k, defined in Section 3.2.1, is there an algorithm running in time o(n^k)?

Grier's proof that the poset game decision problem is PSPACE-complete (Theorem 4.13) constructs posets having three levels, that is, whose maximum chain length is three. What about two-level games in general? Those having a single maximum or a single minimum element are easily solved. What is the complexity of those with more than one minimum and more than one maximum? Certain subfamilies of two-level posets have g-numbers that show regular patterns and are easily computed, for example, games where each element is above or below at most two elements, as well as parity-uniform games (see Definition 3.4 and Theorem 3.5). Despite this, we conjecture that the class of all two-level poset games is PSPACE-complete, but are nowhere near a proof. Are there larger subfamilies of the two-level poset games that are in P?

A more open-ended goal is to apply the many results and techniques of combinatorial game theory, as we did in Theorem 4.14, to more families of games.
Finally, we mention a long-standing open problem about a specific infinite poset game: what is the outcome of the game ℕ^3 − {(0, 0, 0)}, where (x_1, x_2, x_3) ≤ (y_1, y_2, y_3) iff x_i ≤ y_i for all i ∈ {1, 2, 3}?

References

E. R. Berlekamp, J. H. Conway, and R. Guy. Winning Ways for your Mathematical Plays. Academic Press, 1982.

C. L. Bouton. Nim, a game with a complete mathematical theory. Annals of Mathematics, 3:35–39, 1901–1902.

S. Byrnes. Poset game periodicity. INTEGERS: The Electronic Journal of Combinatorial Number Theory, 3, 2003.

Jin-Yi Cai and Merrick Furst. PSPACE survives constant-width bottlenecks. International Journal of Foundations of Computer Science, 2(1):67, March 1991.

J. H. Conway. On Numbers and Games. Academic Press, 1976.

W. Deuber and S. Thomassé. Grundy sets of partial orders. www.mathematik.uni-bielefeld.de/sfb343/preprints/pr96123.ps.gz.

S. Even and R. E. Tarjan. A combinatorial problem which is complete in polynomial space. Journal of the ACM, 23:710–719, 1976.

Kousha Etessami. Counting quantifiers, successor relations, and logarithmic space. Journal of Computer and System Sciences, 54(3):400–411, 1997.

S. Fenner, L. Fortnow, and S. Kurtz. Gap-definable counting classes. Journal of Computer and System Sciences, 48(1):116–148, 1994.

S. A. Fenner, R. Gurjar, A. Korwar, and T. Thierauf. On two-level poset games. Technical Report TR13-019, Electronic Colloquium on Computational Complexity, 2013.

S. A. Fenner, D. Grier, J. Meßner, L. Schaeffer, and T. Thierauf. Game values and computational complexity: An analysis via black-white combinatorial games. Technical Report TR15-021, Electronic Colloquium on Computational Complexity, February 2015.

A. S. Fraenkel, R. A. Hearn, and A. N. Siegel. Theory of combinatorial games. In H. Peyton Young and Shmuel Zamir, editors, Handbook of Game Theory, volume 4, chapter 15, pages 811–859. Elsevier, 2015.

A. S. Fraenkel and E. R. Scheinerman. A deletion game on hypergraphs.
Discrete Applied Mathematics, 30(2–3):155–162, 1991.

D. Gale. A curious nim-type game. Amer. Math. Monthly, 81:876–879, 1974.

M. Garey and D. Johnson. Computers and Intractability. W. H. Freeman and Company, 1979.

Daniel Grier. Deciding the winner of an arbitrary finite poset game is PSPACE-complete. In Proceedings of the 40th International Colloquium on Automata, Languages and Programming, volume 7965–7966 of Lecture Notes in Computer Science, pages 497–503. Springer-Verlag, 2013.

P. M. Grundy. Mathematics and games. Eureka, 2:6–8, 1939.

Birgit Jenner, Johannes Köbler, Pierre McKenzie, and Jacobo Torán. Completeness results for graph isomorphism. Journal of Computer and System Sciences, 66(3):549–566, 2003.

A. O. Kalinich. Flipping the winner of a poset game. Information Processing Letters, 112(3):86–89, January 2012.

Donald E. Knuth. Surreal Numbers. Addison-Wesley, 1974.

J. B. Kruskal. The theory of well-quasi-ordering: A frequently discovered concept. Journal of Combinatorial Theory, 13(3):297–305, 1972.

David Lichtenstein and Michael Sipser. GO is polynomial-space hard. Journal of the ACM, 27(2):393–401, 1980.

L. J. Stockmeyer and A. K. Chandra. Provably difficult combinatorial games. SIAM Journal on Computing, 8(2):151–174, 1979.

T. J. Schaefer. On the complexity of some two-person perfect-information games. Journal of Computer and System Sciences, 16(2):185–225, 1978.

F. Schuh. Spel van delers (game of divisors). Nieuw Tijdschrift voor Wiskunde, 39:299, 1952.

A. N. Siegel. Combinatorial Game Theory, volume 146 of Graduate Studies in Mathematics. American Mathematical Society, 2013.

M. Sipser. Introduction to the Theory of Computation (2nd ed.). Course Technology, Inc., 2005.

R. P. Sprague. Über mathematische Kampfspiele. Tohoku Mathematical Journal, 41:438–444, 1935–1936.

L. Stockmeyer. The polynomial-time hierarchy. Theoretical Computer Science, 3:1–22, 1977.

H. Spakowski and J. Vogel.
Θ_2^p-completeness: A classical approach for new results. In Proceedings of the 20th Conference on Foundations of Software Technology and Theoretical Computer Science (FST TCS), number 1974 in Lecture Notes in Computer Science, pages 348–360, 2000.

T. Thierauf, 2009. Private communication.

J. Úlehla. A complete analysis of Von Neumann's Hackendot. International Journal of Game Theory, 9:107–113, 1980.

J. Valdes, R. E. Tarjan, and E. L. Lawler. The recognition of series parallel digraphs. SIAM Journal on Computing, 11:298–313, 1982.

F. Wagner, 2009. Private communication.
AP Computer Science Principles: A Complete Guide

George Christofi

This AP Computer Science Principles guide is designed to provide students and parents with a clear, engaging overview of one of the most accessible and impactful AP courses available. Offered by the College Board, this subject introduces the major areas of computer science, from creative development and computing innovations to data, algorithms, and ethical computing culture. AP Computer Science Principles is one of many Advanced Placement subjects that allow students to challenge themselves academically while earning potential university credit. Throughout this course, students learn to design computer programs, implement algorithms, and evaluate computational solutions using real-world examples. This guide explores how students benefit from the AP CSP exam, what computational thinking practices are essential, and how the performance tasks, multiple-choice section, and program development prepare them for further study. Whether you're new to programming or curious about computing systems, this resource will help you understand the structure, goals, and strategies of AP Computer Science, and how students develop the skills needed to succeed in today's digital world.

What Is AP Computer Science Principles?

AP Computer Science Principles (often abbreviated as AP CSP) is an introductory-level AP computer science course developed by the College Board. Designed specifically for high school students, the course offers an engaging entry point into the world of computer science, with a focus on developing foundational skills in computing that go beyond just programming.
Unlike more code-intensive courses, AP CSP explores broad computing concepts, including computing systems, data, creative development, and the impact of computing innovations. It equips students to design and create computer programs, test algorithms, and evaluate computational solutions using structured methods such as code analysis and computational thinking practices. This course is equivalent to a first-semester introductory college-level class and provides an excellent foundation for students who may wish to pursue STEM fields or develop a deeper understanding of the role of computing in society.

Key Learning Goals

Throughout the course, students learn how to design computer programs, implement algorithms, and evaluate computational solutions. By blending real-world examples with hands-on performance tasks, students develop a strong foundation in science principles and computational solutions, positioning them to think like future computer scientists.

Why Should Students Take AP CSP?

This course offers more than just technical knowledge: it equips students with critical skills for academic success and lifelong learning. Whether aiming for university credit or gaining a competitive edge in STEM fields, students benefit significantly from engaging with this curriculum.

Benefits for Students

1. Earn university credit through successful performance on the AP CSP exam, which can reduce college costs and course loads.
2. Prepares students for future studies and careers in technology, computer science, and broader STEM fields.
3. Encourages underrepresented groups to explore computer science, promoting equity in the tech industry.
4. Develops transferable skills.
5. Promotes computational thinking practices.

Ideal Candidates

1. Suitable for students new to computer science, with no prior programming background required.
2. Ideal for those interested in technology, computer science, and broader STEM fields.
3. A great starting point for students who want to apply computing to real-world examples and build meaningful, ethical solutions.

Selecting AP Computer Science Principles as part of a broader course plan should be based on a student's academic interests and future study goals.

Is AP Computer Science Principles Hard?

AP Computer Science Principles is widely considered one of the more accessible AP computer science courses. While AP CSP is generally viewed as more accessible than other AP courses, it's important to understand what makes an AP computer science course challenging and how preparation affects success. Unlike AP Computer Science A, which is heavily focused on Java programming, AP CSP introduces broad computer science principles and encourages students to develop a foundational understanding of how computing systems, data, and computing innovations impact society. Rather than relying heavily on syntax or complex code, the course focuses on computational thinking practices, like how to implement algorithms, analyse data, and evaluate computational solutions. Students also explore creative development, how to incorporate abstractions, and how to contribute to an ethical computing culture. Success in the AP CSP exam comes from engaging with performance tasks, practicing the multiple-choice section, and understanding real-world examples of program development. With consistent effort during class time, students can develop the essential skills needed to solve problems using computer programs effectively.
2024 Performance Statistics

According to the College Board's 2024 data:

| Year | 5 | 4 | 3 | 2 | 1 | 3+ | Test Takers | Mean Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2024 | 10.9% | 20.0% | 33.1% | 20.3% | 15.7% | 64.0% | 175,261 | 2.90 |
| 2023 | 11.5% | 20.6% | 31.1% | 20.5% | 16.4% | 63.1% | 164,505 | 2.90 |
| 2022 | 11.4% | 21.0% | 31.1% | 19.9% | 16.6% | 63.5% | 134,651 | 2.91 |
| 2021 | 12.4% | 21.7% | 32.5% | 19.9% | 13.6% | 66.6% | 102,610 | 2.99 |
| 2020 | 10.9% | 23.6% | 37.1% | 19.8% | 8.6% | 71.6% | 116,751 | 3.09 |

Table showing past years' AP Computer Science Principles score distributions

These results show that while AP CSP does require effort and genuine engagement, it has a high pass rate: over 63% of students scored 3 or higher in 2023. With regular use of AP Classroom, code analysis, practice performance tasks, and structured review of science principles, students can perform well, even with no prior coding experience.

AP CSP Course Structure and Topics

No guide to AP Computer Science Principles would be complete without a clear breakdown of how the course is organised. The College Board structures AP CSP around five Big Ideas, each representing a major conceptual theme in computer science. These big ideas ensure that students learn the breadth of the field, gain a deep understanding of how computational solutions are designed, and explore both technical and societal dimensions of computing.

The Five Big Ideas

1. Creative Development (10–13%)

This topic introduces students to the creative process behind designing computer programs. Students engage in program development by using real-world examples, applying an iterative design method, and working collaboratively. They create algorithms, develop solutions, and explore how students benefit from team-based innovation in computing.

2. Data (17–22%)
In this unit, students explore how data is collected, represented, and used to generate new knowledge. They analyse trends, investigate data privacy and security, and learn how to manipulate and visualise information using computing systems. Understanding data is crucial for designing meaningful and impactful computational solutions.

3. Algorithms and Programming (30–35%)

As the most heavily weighted section, this area focuses on how students develop and implement algorithms to solve problems. Key topics include control structures such as conditional statements, loops, and procedures. This is where students deepen their understanding of programming, write and test code, and explore the structure and logic of computer programs.

4. Computer Systems and Networks (11–15%)

Students study the infrastructure of computing systems, including the internet, digital communication, and network security. They learn how computing systems interact, share data, and function efficiently at scale. This section also explores topics such as fault tolerance and distributed computing.

5. Impact of Computing (21–26%)

Here, students explore the broader implications of computing. They assess how computing innovations affect society, examine digital equity issues like the digital divide, and reflect on how to build an ethical computing culture. Topics also include bias, legal concerns, and computing accessibility, ensuring students become responsible and informed computer scientists.
| Big Ideas | Exam Weightage | --- | 1 | Creative Development | 10-13% | | 2 | Data | 17-22% | | 3 | Algorithms and Programming | 30-35% | | 4 | Computer Systems and Networks | 11-15% | | 5 | Impact of Computing | 21-26% | AP Computer Science Principles Big Ideas and their Exam Weightage Computational Thinking Practices Computational Thinking Practices Alongside the five Big Ideas, AP Computer Science Principles also emphasises a set of Computational Thinking Practices. These practices guide how students approach problems, structure computational solutions, and interact with the broader computing community. These skills are essential not only for succeeding in the AP CSP exam, but also for developing the mindset of a future computer scientist. The College Board uses these practices to shape questions in both the multiple choice section and the performance tasks, ensuring that students are assessed on their ability to think critically and creatively about computing systems, programs, and data. 1. Computational Solution Design 1. Computational Solution Design Students learn how to design and evaluate computational solutions that meet specific goals. This includes identifying problems, breaking them down, and planning how a computer program could be used to address them using clear, logical structures. 2. Algorithm and Program Development 2. Algorithm and Program Development In this practice, students explore how to implement algorithms, create procedures, and apply control structures like conditional statements and loops. Emphasis is placed on program development, testing, and debugging within a structured design process. 3. Abstraction and Data Management 3. Abstraction and Data Management Students learn how to incorporate abstractions—such as variables, procedures, and lists—into their programs. Abstraction helps simplify complexity and is vital in managing and manipulating data to reveal new knowledge and meaning. 4. Code Analysis 4. 
Code Analysis Students practise reading, interpreting, and reasoning about existing algorithms and code segments. They learn how to test algorithms, predict outputs, and identify potential errors, all while focusing on efficiency and clarity. 5. Investigating Computing Innovations 5. Investigating Computing Innovations This practice focuses on exploring computing innovations—such as AI, encryption, or data-driven apps—and understanding their design, functionality, and broader implications. Students apply their knowledge of computer science principles to analyse real-world technologies. 6. Ethical and Inclusive Computing Culture 6. Ethical and Inclusive Computing Culture Students are encouraged to develop an ethical computing culture by considering inclusion, bias, accessibility, and safe computing practices. This fosters awareness of how computing impacts individuals and communities differently and encourages responsible innovation. AP CSP Exam Structure AP CSP Exam Structure The AP Computer Science Principles exam is divided into two main components, both designed to assess students’ mastery of key computer science principles, computational thinking practices, and their ability to develop computational solutions effectively. These components evaluate a student’s ability to create algorithms, analyse code, and reflect on their understanding of computing innovations and ethical responsibilities. Part 1 – Create Performance Task (30% of Exam Score) Part 1 – Create Performance Task (30% of Exam Score) The Create Performance Task is a significant portion of the AP CSP exam, accounting for 30% of the final score. Unlike traditional exams, this component is project-based and is completed during the course with at least 9 classroom hours allocated specifically for its development. 
Students are required to develop a computer program that demonstrates their ability to solve problems, manage data, and apply key programming skills such as abstraction, algorithms, and code analysis. This performance task allows students to develop and showcase a computational solution that is personal, relevant, and aligned with an ethical computing culture.

Deadline: The completed performance task must be submitted through the AP Digital Portfolio by 30 April 2025, 11:59 PM (ET), as specified by the College Board.

Part 2 – Multiple-Choice Exam (70% of Exam Score)

The second part of the AP Computer Science Principles exam is the multiple-choice section, which makes up 70% of the total exam score. This portion is completed on exam day and is designed to assess students' understanding of computer science principles, their ability to evaluate computational solutions, and their ability to apply key concepts such as algorithms, data, and computing systems. The questions are structured to reflect both technical knowledge and conceptual reasoning, with an emphasis on applying what students learn through the course's computational thinking practices. A breakdown of the question types is given in the table below.

The multiple-choice section reinforces the importance of both broad and detailed knowledge. Success in this part of the exam depends on regular practice, a clear understanding of the Big Ideas, and the ability to apply computer science principles in real-world computing scenarios.
| Section | Question Type / Component | Number of Questions | Exam Weightage | Timing |
| --- | --- | --- | --- | --- |
| (I) | Multiple-choice questions | 70 | 70% | 120 minutes, end-of-course AP Exam |
| | Single-select | 57 | | |
| | Single-select with reading passage about a computing innovation | 5 | | |
| | Multi-select | 8 | | |
| (II) | Create Performance Task | See below | 30% | See below |
| | Program code, video, and Personalized Project Reference | | | At least 9 hours in class |
| | Written-response questions related to the Create Performance Task | 2 | | 60 minutes, end-of-course AP Exam |

Table showing the AP Computer Science Principles exam breakdown

Scoring Breakdown and What It Means

The AP Computer Science Principles exam is scored on a scale from 1 to 5, with each score reflecting a student's level of understanding and readiness for university-level computer science work. The scoring system evaluates both conceptual mastery and practical ability to apply computer science principles through program development, code analysis, and computational thinking practices. These scores are recognised by many universities and can be used to earn university credit or advanced course placement. Most institutions accept a score of 3 or higher, though competitive programmes may require a 4 or 5.

The two sections of the exam are assessed differently: the multiple-choice section is machine-scored, while the Create Performance Task is evaluated externally by AP Readers trained by the College Board. This hybrid assessment approach ensures students are tested both on theoretical knowledge and practical programming skills, aligning with the goal of preparing students to think and work like computer scientists. Students who don't meet their target score the first time may consider retaking the AP CSP exam to strengthen their academic record.
How the Exam Is Scored

| Score | Meaning | Typical College Equivalent |
| --- | --- | --- |
| 5 | Extremely well qualified | A+ or A |
| 4 | Very well qualified | A-, B+, or B |
| 3 | Qualified | B-, C+, or C |
| 2 | Possibly qualified | Usually no credit awarded |
| 1 | No recommendation | No credit awarded |

AP CSP vs AP Computer Science A

While both courses fall under the AP computer science umbrella, AP Computer Science Principles (AP CSP) and AP Computer Science A differ significantly in focus, structure, and the type of skills they help students develop. Understanding these differences is crucial for choosing the right course based on a student's interests, background, and academic goals. More broadly, choosing between AP and IB qualifications depends on your academic goals, university plans, and preferred learning style.

| Feature | AP Computer Science Principles (AP CSP) | AP Computer Science A (AP CSA) |
| --- | --- | --- |
| Focus | Broad computer science principles, including computing systems, data, computing innovations, and ethical computing culture | Intensive programming using Java, focused on software design and problem-solving |
| Programming Requirements | Minimal prior experience required; emphasises computational thinking practices, program development, and real-world applications | Strong emphasis on Java syntax, logic, and implementing algorithms in complex code structures |
| Assessment | Create Performance Task (30%) + multiple-choice section (70%) | Entirely exam-based: multiple-choice and free-response coding problems |
| Skills Emphasised | Code analysis, abstract thinking, teamwork, ethics, societal impact, and project-based learning | Writing, testing, and debugging Java programs, solving algorithmic problems, and building object-oriented software |
| Who It's For | Best for students new to computer science, or those who prefer conceptual and creative exploration | Ideal for students with a strong interest or background in programming and software engineering |
| College Credit Opportunities | Widely accepted with scores of 3+; focuses on foundational computing knowledge | Also accepted broadly, with scores of 3+ often leading to credit in introductory computer science courses |

Table Showing Key Differences Between AP CSP and AP CSA

Which One Should You Choose?

Students who are curious about the impact of computing innovations, interested in solving problems creatively, and want to explore how computing systems work within societal contexts will likely thrive in AP CSP. The course promotes a strong foundation in computer science using practical performance tasks and accessible projects that allow students to build meaningful computational solutions.

On the other hand, students who want a deep dive into coding, specifically Java, and are ready to tackle more intensive programming challenges will find AP Computer Science A more appropriate. It focuses heavily on language structure, algorithms, data structures, and writing efficient code, providing a strong technical base for further study in computer science or software development.

How to Prepare for the AP CSP Exam

Success in AP CSP depends not only on classroom effort, but also on how effectively students use available tools and strategies. Preparation should focus on mastering the Big Ideas, strengthening computational thinking practices, and confidently completing both the multiple-choice section and the Create Performance Task. Below are proven methods and resources to help students learn, revise, and build strong computational solutions.

Study Strategies

- Review the Course and Exam Description (CED): it outlines all required topics, skills, and computational thinking practices, and reviewing it ensures that no major areas are missed during preparation.
- Analyse past Create Performance Task examples: understanding how they were scored gives valuable insight into how to structure responses and improve program development.
- Watch video explanations of topics such as implementing algorithms, managing data, or understanding the impact of computing; they provide visual reinforcement of key concepts.
- Practise multiple-choice questions regularly: this section often tests understanding of pseudocode, conditional statements, and specific computer science vocabulary, and regular practice improves both speed and accuracy.

Recommended Resources

- Interactive lesson platforms aligned with the AP curriculum cover everything from creative development to code analysis.
- A free curriculum widely used in schools supports students with structured lessons, labs, and assignments focused on the full range of computer science principles.
- Study guides include detailed content summaries, exam strategies, and full-length practice tests tailored to the AP CSP exam structure.
- Flashcards are perfect for revising key terms, such as "abstraction", "computing systems", "ethical computing culture", and common programming concepts.
- Many educators and former students share tutorials and walkthroughs on the Create Task, how to design a solid computer program, and how to demonstrate mastery of the Big Ideas.
- Many students benefit from structured support from the best AP tutoring companies, whose programmes align closely with the College Board's course framework and exam expectations.

Create Performance Task Tips

The Create Task can significantly impact your final score. Below are essential tips to help students maximise their marks:

- Use your class time wisely and don't leave the task until the last minute. Allocate time for brainstorming, coding, testing, and revisions.
- The College Board scoring guidelines are very specific. Refer to them constantly to ensure every requirement is addressed.
- The video must clearly show how the program works using real-world examples and relevant input/output. Keep it concise and functional.
- Well-structured, readable code makes it easier to demonstrate abstractions, data management, and algorithm design.
- Explain how you created the program, why certain algorithms were used, and how the solution solves the intended problem. Clarity and relevance are key.

Support from experienced tutors can also help students structure their learning and improve performance across both the multiple-choice section and the performance tasks.

Students' Testimonials

Hearing directly from past students provides valuable insight into what the AP Computer Science Principles experience is really like. These reflections highlight how the course goes beyond just programming and helps learners connect computer science principles with their own interests, creativity, and academic growth. These testimonials reflect how students benefit not only academically but personally, gaining valuable skills, confidence, and a deeper understanding of how computer programs can be used to solve real-world problems.

Conclusion

This course offers a well-rounded introduction to computer science, combining technical skill-building with creativity and ethical awareness. Through hands-on program development, students learn how to create algorithms, work with data, and build meaningful computer programs that reflect real-world applications. With its balanced focus on computational thinking practices, computing innovations, and collaboration, AP CSP prepares students to become thoughtful, skilled, and responsible computer scientists. Whether navigating the multiple-choice section, developing a Create Performance Task, or exploring computing systems and the internet, students gain the knowledge and skills needed to succeed academically and beyond.
Supported by resources from the College Board, Code.org, and other platforms, the course helps students discover their potential and confidently pursue future opportunities in STEM. With consistent effort and guided support, students benefit greatly from what AP CSP has to offer.

FAQ

What programming language is used in AP Computer Science Principles?

AP Computer Science Principles does not require a specific programming language, allowing schools and instructors to choose from a range of options such as Python, JavaScript, Scratch, or block-based platforms like Snap! or App Lab. The focus is on developing core computer science principles such as creating algorithms, program development, data manipulation, and computational thinking, rather than language syntax. This flexibility ensures students learn how to design computer programs and evaluate computational solutions using tools suited to their skill level.

Do you need a strong maths background to succeed in AP CSP?

No, AP Computer Science Principles does not require advanced maths skills. The course is accessible to students with a basic understanding of algebra and places greater emphasis on logical reasoning, algorithm design, and abstraction. Instead of complex equations, students use computational thinking practices to solve problems, analyse data, and implement algorithms, making it an ideal starting point for students who are new to computer science or programming.

Can AP CSP be self-studied without taking the class?

Yes, students can self-study AP Computer Science Principles using online resources, textbooks, and tutorials aligned with the College Board's Course and Exam Description (CED). However, to sit the official AP exam and submit the Create Performance Task, students must be registered with an authorised school or testing centre that offers AP CSP. While self-studying is possible, guided instruction often helps students better understand computing systems, code analysis, and exam requirements.
What devices or software are required for AP CSP?

To complete AP Computer Science Principles, students need a computer or laptop with internet access and a browser that supports coding environments such as Code.org, Replit, or Snap!. The specific software depends on the curriculum chosen by the instructor, but no specialised or high-end hardware is required. Most programming tasks can be completed using cloud-based platforms that support data analysis, program development, and algorithm implementation through accessible interfaces.

How do teachers grade the Create Performance Task before submission?

Teachers may provide general feedback and facilitate classroom time for the Create Task, but the final assessment is conducted externally by AP Readers trained by the College Board. These evaluators assess the submitted computer program, video demonstration, and written responses using a structured rubric. The scoring focuses on abstraction, algorithm design, and computational thinking, ensuring a fair and consistent evaluation based on national AP standards.

What's the difference between pseudocode and actual code in the exam?

In the AP CSP multiple-choice section, pseudocode is used to test students' ability to apply computer science principles without requiring mastery of a specific programming language. Pseudocode represents logic and algorithmic steps in a simplified, language-neutral format, allowing students to demonstrate understanding of structures like loops, conditional statements, and list manipulation. Unlike actual code, pseudocode prioritises logic over syntax and is essential for solving conceptual problems in the exam.

Can you take both AP CSP and AP Computer Science A in the same year?

Yes, motivated students can take both AP Computer Science Principles and AP Computer Science A in the same academic year.
AP CSP provides a conceptual foundation in data, computing innovations, and problem-solving strategies, while AP CSA focuses intensively on Java programming and software development. Taking both courses together allows students to strengthen their computational thinking skills while gaining in-depth programming experience, ideal for students aiming to pursue computer science at university.

How long should the Create Task video be?

The Create Performance Task video must be no longer than one minute and should clearly demonstrate the key functionality of the computer program. Students should focus on showing inputs, outputs, and the meaningful use of algorithms and abstractions within the application. The video is a required component of the submission and helps AP Readers understand how the code operates in a real-world context, making it a vital part of the overall assessment.

What happens if you miss the Create Task submission deadline?

If a student misses the Create Performance Task submission deadline, they will not receive a complete AP Computer Science Principles score. Since the task accounts for 30% of the final exam grade and must be submitted through the AP Digital Portfolio, failure to submit it results in an incomplete or invalid score. To avoid this, students should manage their class time effectively and ensure all components (code, video, written responses, and references) are finalised before the deadline.

Is AP CSP useful for students not pursuing a STEM career?

Yes, AP Computer Science Principles is highly valuable even for students not planning to enter STEM fields. The course builds essential skills such as digital literacy, logical reasoning, ethical decision-making, and data interpretation, all of which are relevant across disciplines like business, media, education, and the humanities.
By exploring how computing systems and innovations impact society, students gain a well-rounded understanding of technology's role in the modern world, making AP CSP a versatile and practical course for any future path.

Written by: George Christofi

George studied undergraduate and masters degrees in Classics and Philosophy at Oxford, as well as spending time at Yale. He specialises in helping students with UK and US university applications, including Oxbridge and the Ivy League. He writes extensively on education including on schools, universities, and pedagogy.
https://www.kylesconverter.com/flow/cubic-feet-per-minute-to-cubic-feet-per-second
Cubic Feet Per Minute to Cubic Feet Per Second | Kyle's Converter

Unit Descriptions

1 Cubic Foot per Minute: a flow rate of 1 cubic foot per minute. In SI units, 4.719474432 × 10⁻⁴ cubic meters per second.

1 Cubic Foot per Second: a flow rate of 1 cubic foot per second. In SI units, 0.028316846592 cubic meters per second.

(Reverse conversion: Cubic Feet Per Second to Cubic Feet Per Minute.)

Conversions Table

| Cubic Feet Per Minute | Cubic Feet Per Second |
| --- | --- |
| 1 | 0.0167 |
| 2 | 0.0333 |
| 3 | 0.05 |
| 4 | 0.0667 |
| 5 | 0.0833 |
| 6 | 0.1 |
| 7 | 0.1167 |
| 8 | 0.1333 |
| 9 | 0.15 |
| 10 | 0.1667 |
| 20 | 0.3333 |
| 30 | 0.5 |
| 40 | 0.6667 |
| 50 | 0.8333 |
| 60 | 1 |
| 70 | 1.1667 |
| 80 | 1.3333 |
| 90 | 1.5 |
| 100 | 1.6667 |
| 200 | 3.3333 |
| 300 | 5 |
| 400 | 6.6667 |
| 500 | 8.3333 |
| 600 | 10 |
| 800 | 13.3333 |
| 900 | 15 |
| 1,000 | 16.6667 |
| 10,000 | 166.6667 |
| 100,000 | 1666.6667 |
| 1,000,000 | 16666.6667 |

A reasonable effort has been made to ensure the accuracy of the information presented on this web site; however, the accuracy cannot be guaranteed, and the conversions may not be accurate enough for all applications. Before using any of the provided tools or data you must check with a competent authority to validate its correctness. Content produced by www.kylesconverter.com is licensed under a Creative Commons Attribution 3.0 Unported License.
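Every entry in the table above follows from a single division by 60, since one minute is 60 seconds. A minimal sketch in Python (the function names are my own, not from the site):

```python
# 1 minute = 60 seconds, so a volumetric flow in cubic feet per
# minute divides by 60 to give cubic feet per second.
def cfm_to_cfs(cfm):
    """Convert cubic feet per minute to cubic feet per second."""
    return cfm / 60.0

def cfs_to_cfm(cfs):
    """Reverse conversion: cubic feet per second to cubic feet per minute."""
    return cfs * 60.0

print(round(cfm_to_cfs(100), 4))  # -> 1.6667, matching the table row
print(cfs_to_cfm(1))              # -> 60.0
```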
https://iopscience.iop.org/article/10.1088/0026-1394/30/6/023
Density, Thermal Expansion and Compressibility of Mercury - IOPscience
Cookie VISITOR_INFO1_LIVE Duration 6 months Description YouTube sets this cookie to measure bandwidth, determining whether the user gets the new or old player interface. Cookie VISITOR_PRIVACY_METADATA Duration 6 months Description YouTube sets this cookie to store the user's cookie consent state for the current domain. Cookie yt-remote-device-id Duration Never Expires Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos. Cookie yt-remote-connected-devices Duration Never Expires Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos. Cookie yt.innertube::requests Duration Never Expires Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen. Cookie yt.innertube::nextId Duration Never Expires Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen. Cookie did Duration session Description Arbor sets this cookie to show targeted ads to site visitors.This cookie expires after 2 months or 1 year. Cookie __Secure-ROLLOUT_TOKEN Duration 6 months Description Registers a unique ID to keep statistics of what videos from YouTube the user has seen. Purposes & Features Purposes (11) [x] Store and/or access information on a device Consent [x] Cookies, device or similar online identifiers (e.g. login-based identifiers, randomly assigned identifiers, network based identifiers) together with other information (e.g. browser type and information, language, screen size, supported technologies etc.) can be stored or read on your device to recognise it each time it connects to an app or to a website, for one or several of the purposes presented here. Illustrations Most purposes explained in this notice rely on the storage or accessing of information from your device when you use an app or visit a website. 
For example, a vendor or publisher might need to store a cookie on your device during your first visit on a website, to be able to recognise your device during your next visits (by accessing this cookie each time). Number of Vendors seeking consent: 3 Use limited data to select advertising Legitimate Interest [x] Consent [x] Advertising presented to you on this service can be based on limited data, such as the website or app you are using, your non-precise location, your device type or which content you are (or have been) interacting with (for example, to limit the number of times an ad is presented to you). Illustrations A car manufacturer wants to promote its electric vehicles to environmentally conscious users living in the city after office hours. The advertising is presented on a page with related content (such as an article on climate change actions) after 6:30 p.m. to users whose non-precise location suggests that they are in an urban zone. A large producer of watercolour paints wants to carry out an online advertising campaign for its latest watercolour range, diversifying its audience to reach as many amateur and professional artists as possible and avoiding showing the ad next to mismatched content (for instance, articles about how to paint your house). The number of times that the ad has been presented to you is detected and limited, to avoid presenting it too often. Number of Vendors seeking consent or relying on legitimate interest: 3 Create profiles for personalised advertising Consent [x] Information about your activity on this service (such as forms you submit, content you look at) can be stored and combined with other information about you (for example, information from your previous activity on this service and other websites or apps) or similar users. This is then used to build or improve a profile about you (that might include possible interests and personal aspects). 
Your profile can be used (also later) to present advertising that appears more relevant based on your possible interests by this and other entities. Illustrations If you read several articles about the best bike accessories to buy, this information could be used to create a profile about your interest in bike accessories. Such a profile may be used or improved later on, on the same or a different website or app to present you with advertising for a particular bike accessory brand. If you also look at a configurator for a vehicle on a luxury car manufacturer website, this information could be combined with your interest in bikes to refine your profile and make an assumption that you are interested in luxury cycling gear. An apparel company wishes to promote its new line of high-end baby clothes. It gets in touch with an agency that has a network of clients with high income customers (such as high-end supermarkets) and asks the agency to create profiles of young parents or couples who can be assumed to be wealthy and to have a new child, so that these can later be used to present advertising within partner apps based on those profiles. Number of Vendors seeking consent: 3 Use profiles to select personalised advertising Consent [x] Advertising presented to you on this service can be based on your advertising profiles, which can reflect your activity on this service or other websites or apps (like the forms you submit, content you look at), possible interests and personal aspects. Illustrations An online retailer wants to advertise a limited sale on running shoes. It wants to target advertising to users who previously looked at running shoes on its mobile app. Tracking technologies might be used to recognise that you have previously used the mobile app to consult running shoes, in order to present you with the corresponding advertisement on the app. 
A profile created for personalised advertising in relation to a person having searched for bike accessories on a website can be used to present the relevant advertisement for bike accessories on a mobile app of another organisation. Number of Vendors seeking consent: 3 Create profiles to personalise content Consent [x] Information about your activity on this service (for instance, forms you submit, non-advertising content you look at) can be stored and combined with other information about you (such as your previous activity on this service or other websites or apps) or similar users. This is then used to build or improve a profile about you (which might for example include possible interests and personal aspects). Your profile can be used (also later) to present content that appears more relevant based on your possible interests, such as by adapting the order in which content is shown to you, so that it is even easier for you to find content that matches your interests. Illustrations You read several articles on how to build a treehouse on a social media platform. This information might be added to a profile to mark your interest in content related to outdoors as well as do-it-yourself guides (with the objective of allowing the personalisation of content, so that for example you are presented with more blog posts and articles on treehouses and wood cabins in the future). You have viewed three videos on space exploration across different TV apps. An unrelated news platform with which you have had no contact builds a profile based on that viewing behaviour, marking space exploration as a topic of possible interest for other videos. 
Number of Vendors seeking consent: 0 Use profiles to select personalised content Consent [x] Content presented to you on this service can be based on your content personalisation profiles, which can reflect your activity on this or other services (for instance, the forms you submit, content you look at), possible interests and personal aspects. This can for example be used to adapt the order in which content is shown to you, so that it is even easier for you to find (non-advertising) content that matches your interests. Illustrations You read articles on vegetarian food on a social media platform and then use the cooking app of an unrelated company. The profile built about you on the social media platform will be used to present you vegetarian recipes on the welcome screen of the cooking app. You have viewed three videos about rowing across different websites. An unrelated video sharing platform will recommend five other videos on rowing that may be of interest to you when you use your TV app, based on a profile built about you when you visited those different websites to watch online videos. Number of Vendors seeking consent: 0 Measure advertising performance Legitimate Interest [x] Consent [x] Information regarding which advertising is presented to you and how you interact with it can be used to determine how well an advert has worked for you or other users and whether the goals of the advertising were reached. For instance, whether you saw an ad, whether you clicked on it, whether it led you to buy a product or visit a website, etc. This is very helpful to understand the relevance of advertising campaigns. Illustrations You have clicked on an advertisement about a “black Friday” discount by an online shop on the website of a publisher and purchased a product. Your click will be linked to this purchase. Your interaction and that of other users will be measured to know how many clicks on the ad led to a purchase. 
You are one of very few to have clicked on an advertisement about an “international appreciation day” discount by an online gift shop within the app of a publisher. The publisher wants to have reports to understand how often a specific ad placement within the app, and notably the “international appreciation day” ad, has been viewed or clicked by you and other users, in order to help the publisher and its partners (such as agencies) optimise ad placements. Number of Vendors seeking consent or relying on legitimate interest: 3 Measure content performance Legitimate Interest [x] Consent [x] Information regarding which content is presented to you and how you interact with it can be used to determine whether the (non-advertising) content e.g. reached its intended audience and matched your interests. For instance, whether you read an article, watch a video, listen to a podcast or look at a product description, how long you spent on this service and the web pages you visit etc. This is very helpful to understand the relevance of (non-advertising) content that is shown to you. Illustrations You have read a blog post about hiking on a mobile app of a publisher and followed a link to a recommended and related post. Your interactions will be recorded as showing that the initial hiking post was useful to you and that it was successful in interesting you in the related post. This will be measured to know whether to produce more posts on hiking in the future and where to place them on the home screen of the mobile app. You were presented a video on fashion trends, but you and several other users stopped watching after 30 seconds. This information is then used to evaluate the right length of future videos on fashion trends. 
Number of Vendors seeking consent or relying on legitimate interest: 0 Understand audiences through statistics or combinations of data from different sources Legitimate Interest [x] Consent [x] Reports can be generated based on the combination of data sets (like user profiles, statistics, market research, analytics data) regarding your interactions and those of other users with advertising or (non-advertising) content to identify common characteristics (for instance, to determine which target audiences are more receptive to an ad campaign or to certain contents). Illustrations The owner of an online bookstore wants commercial reporting showing the proportion of visitors who consulted and left its site without buying, or consulted and bought the last celebrity autobiography of the month, as well as the average age and the male/female distribution of each category. Data relating to your navigation on its site and to your personal characteristics is then used and combined with other such data to produce these statistics. An advertiser wants to better understand the type of audience interacting with its adverts. It calls upon a research institute to compare the characteristics of users who interacted with the ad with typical attributes of users of similar platforms, across different devices. This comparison reveals to the advertiser that its ad audience is mainly accessing the adverts through mobile devices and is likely in the 45-60 age range. Number of Vendors seeking consent or relying on legitimate interest: 3 Develop and improve services Legitimate Interest [x] Consent [x] Information about your activity on this service, such as your interaction with ads or content, can be very helpful to improve products and services and to build new products and services based on user interactions, the type of audience, etc. This specific purpose does not include the development or improvement of user profiles and identifiers. 
Illustrations A technology platform working with a social media provider notices a growth in mobile app users, and sees based on their profiles that many of them are connecting through mobile connections. It uses a new technology to deliver ads that are formatted for mobile devices and that are low-bandwidth, to improve their performance. An advertiser is looking for a way to display ads on a new type of consumer device. It collects information regarding the way users interact with this new kind of device to determine whether it can build a new mechanism for displaying advertising on this type of device. Number of Vendors seeking consent or relying on legitimate interest: 3 Use limited data to select content Legitimate Interest [x] Consent [x] Content presented to you on this service can be based on limited data, such as the website or app you are using, your non-precise location, your device type, or which content you are (or have been) interacting with (for example, to limit the number of times a video or an article is presented to you). Illustrations A travel magazine has published an article on its website about the new online courses proposed by a language school, to improve travelling experiences abroad. The school’s blog posts are inserted directly at the bottom of the page, and selected on the basis of your non-precise location (for instance, blog posts explaining the course curriculum for different languages than the language of the country you are situated in). A sports news mobile app has started a new section of articles covering the most recent football games. Each article includes videos hosted by a separate streaming platform showcasing the highlights of each match. If you fast-forward a video, this information may be used to select a shorter video to play next. 
Number of Vendors seeking consent or relying on legitimate interest: 0 Special Purposes (3) Ensure security, prevent and detect fraud, and fix errors Your data can be used to monitor for and prevent unusual and possibly fraudulent activity (for example, regarding advertising, ad clicks by bots), and ensure systems and processes work properly and securely. It can also be used to correct any problems you, the publisher or the advertiser may encounter in the delivery of content and ads and in your interaction with them. Illustrations An advertising intermediary delivers ads from various advertisers to its network of partnering websites. It notices a large increase in clicks on ads relating to one advertiser, and uses data regarding the source of the clicks to determine that 80% of the clicks come from bots rather than humans. Number of Vendors seeking consent: 3 Deliver and present advertising and content Certain information (like an IP address or device capabilities) is used to ensure the technical compatibility of the content or advertising, and to facilitate the transmission of the content or ad to your device. Illustrations Clicking on a link in an article might normally send you to another page or part of the article. To achieve this, 1°) your browser sends a request to a server linked to the website, 2°) the server answers back (“here is the article you asked for”), using technical information automatically included in the request sent by your device, to properly display the information / images that are part of the article you asked for. Technically, such exchange of information is necessary to deliver the content that appears on your screen. Number of Vendors seeking consent: 3 Save and communicate privacy choices The choices you make regarding the purposes and entities listed in this notice are saved and made available to those entities in the form of digital signals (such as a string of characters). 
This is necessary in order to enable both this service and those entities to respect such choices. Illustrations When you visit a website and are offered a choice between consenting to the use of profiles for personalised advertising or not consenting, the choice you make is saved and made available to advertising providers, so that advertising presented to you respects that choice. Number of Vendors seeking consent: 1 Features (3) Match and combine data from other data sources Information about your activity on this service may be matched and combined with other information relating to you and originating from various sources (for instance your activity on a separate online service, your use of a loyalty card in-store, or your answers to a survey), in support of the purposes explained in this notice. Number of Vendors seeking consent: 3 Link different devices In support of the purposes explained in this notice, your device might be considered as likely linked to other devices that belong to you or your household (for instance because you are logged in to the same service on both your phone and your computer, or because you may use the same Internet connection on both devices). Number of Vendors seeking consent: 1 Identify devices based on information transmitted automatically Your device might be distinguished from other devices based on information it automatically sends when accessing the Internet (for instance, the IP address of your Internet connection or the type of browser you are using) in support of the purposes exposed in this notice. Number of Vendors seeking consent: 1 Special Features (2) [x] Use precise geolocation data Consent [x] With your acceptance, your precise location (within a radius of less than 500 metres) may be used in support of the purposes explained in this notice. 
Number of Vendors seeking consent: 0 Actively scan device characteristics for identification Consent [x] With your acceptance, certain characteristics specific to your device might be requested and used to distinguish it from other devices (such as the installed fonts or plugins, the resolution of your screen) in support of the purposes explained in this notice. Number of Vendors seeking consent: 0 Vendors Third party vendors (3) [x] Accept All Save My Preferences Powered by Skip to content IOP Science homeAccessibility Help Search all IOPscience content Search Article Lookup Select journal (required) Volume number: Issue number (if known): Article or page number: Lookup JournalsJournals listBrowse more than 100 science journal titles Subject collectionsRead the very best research published in IOP journals Publishing partnersPartner organisations and publications Open accessIOP Publishing open access policy guide IOP Conference SeriesRead open access proceedings from science conferences worldwide Books Publishing Support LoginIOPscience login / Sign Up Metrologia The International Bureau of Weights and Measures (BIPM) was set up by the Metre Convention and has its headquarters near Paris, France. It is financed jointly by its Member States and operates under the exclusive supervision of the CIPM. Its mandate is to provide the basis for a single, coherent system of measurements throughout the world, traceable to the International System of Units (SI). This task takes many forms, from direct dissemination of units (as in the case of mass and time) to coordination through international comparisons of national measurement standards (as in electricity and ionizing radiation). The BIPM has an international staff of over 70 and its status vis-à-vis the French Government is similar to that of other intergovernmental organizations based in Paris. 
Density, Thermal Expansion and Compressibility of Mercury

K-D Sommer and J Poziemski (Physikalisch-Technische Bundesanstalt, Bundesallee 100, D-38023 Braunschweig, Germany)

Published under licence by IOP Publishing Ltd. Citation: K-D Sommer and J Poziemski 1994 Metrologia 30 665. DOI: 10.1088/0026-1394/30/6/023

Abstract

Based on a comparison of the known high-accuracy determinations, the most probable estimate of the value of the mercury density at 20 °C and 101 kPa is 13 545.850 kg m⁻³. However, the relevant individual measurement results differ from one another by relative amounts of up to 3 × 10⁻⁶, exceeding the uncertainties of 1 × 10⁻⁶ stated by the investigators. The potential causes of these deviations are discussed, and the uncertainty of the above value is estimated. As regards the thermal expansion and the compressibility of mercury, the measurement results published to date are compared, and the most probable values are estimated.
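The abstract's relative figures can be put in absolute terms with a quick back-of-the-envelope check (an illustration only, using the values quoted in the abstract): a relative spread of 3 × 10⁻⁶ on a density of 13 545.850 kg m⁻³ corresponds to about 0.04 kg m⁻³, roughly three times the investigators' stated uncertainty.

```python
# Illustrative only: absolute size of the disagreement quoted in the abstract.
rho_20 = 13545.850    # kg/m^3, most probable mercury density at 20 °C and 101 kPa

rel_spread = 3e-6     # relative spread between high-accuracy determinations
rel_uncert = 1e-6     # relative uncertainty stated by the investigators

abs_spread = rho_20 * rel_spread   # absolute disagreement, kg/m^3
abs_uncert = rho_20 * rel_uncert   # absolute stated uncertainty, kg/m^3

print(f"spread:      {abs_spread:.4f} kg/m^3")   # 0.0406 kg/m^3
print(f"uncertainty: {abs_uncert:.4f} kg/m^3")   # 0.0135 kg/m^3
```

Since the observed spread exceeds the stated uncertainties, the combined uncertainty of the recommended value must be larger than any single investigator's claim, which is what the paper goes on to discuss.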
190679
https://www.pregnancybirthbaby.org.au/let-down-reflex
Let-down reflex

8-minute read

Key facts
- The let-down reflex is a response from your body that causes breastmilk to flow.
- It can take time and practice for your let-down reflex to become consistent.
- Your reflex can be impacted by stress, tiredness or discomfort.
- You can encourage your let-down reflex by relaxing and distracting yourself.
- Your let-down reflex can happen even when you're not breastfeeding, like when you hear your baby cry.

What is the let-down reflex?

The let-down reflex, or milk ejection reflex, is what makes breastmilk flow. It's an important part of breastfeeding and what happens when your baby suckles. When your baby sucks at your breast, tiny nerves are stimulated. This causes the hypothalamus and pituitary gland in the brain to release 2 hormones into your bloodstream: prolactin and oxytocin. Prolactin helps make the milk, while oxytocin causes your breast to push out the milk. Milk is then released, or let down, through your nipple.

How do I know when I'm having a let-down?

Each person feels the let-down reflex differently. You may not feel anything when your let-down reflex happens. However, you might notice that:
- your breasts feel full
- your breasts feel tingly
- you feel thirsty
- while feeding or expressing from one breast, milk drips from your other breast
- you may feel cramping in your uterus, like when you have your period, especially in the first weeks after your baby is born

You'll also notice a change in your baby's sucking pattern when the let-down reflex happens. As the milk begins to flow, their small, shallow sucks will become stronger and slower. Your let-down reflex can be affected by stress, pain and tiredness.
It can take time and practice for your let-down reflex to become consistent.

When does my let-down reflex occur?

Your let-down reflex can occur:
- in response to your baby sucking at your breast
- when hearing, seeing or thinking about your baby
- when using a breast pump, hand expressing or touching your breasts or nipples
- when looking at a picture of your baby
- when hearing your baby (or another baby) cry

The let-down reflex generally occurs a number of times each feed. Most people who breastfeed only feel the first let-down. The let-down reflex can also occur with stimulation of your breasts, such as by your partner.

What can I do to encourage my let-down reflex?

The let-down reflex is not always consistent, particularly early on in breastfeeding. It takes time for you and your baby to practice and get used to feeding. It can help to get into a breastfeeding routine. A routine will help establish cues that your body will recognise, which will help to encourage your reflex. It takes around 2 weeks after birth for your milk supply to become established. After a few weeks of regular breastfeeding or expressing, your let-down reflex should become automatic.

Try to breastfeed in a comfortable place. This is not always possible, but there are things you can do to feel more comfortable. If you are near others, it's okay to ask for some space. It may be easier to breastfeed without other people looking on. If you are with family, friends, your partner or other support, they may also be able to:
- help with other tasks
- help you relax

You can distract and relax yourself during breastfeeding by:
- breathing steadily and slowly
- letting your shoulders drop
- putting your feet up
- having a warm, non-caffeinated drink
- playing some relaxing music
- thinking about your baby (if you are away from them, you can look at photos or videos of them)

You can also have a warm shower or place a warm cloth on your breast for a few minutes before you breastfeed.

How can I encourage milk let-down by hand?
You can also encourage your let-down reflex by hand by: gently massaging your breasts rolling your nipple between your fingers gently massaging your breast towards the nipple using a finger or the flat of your hand This can be helpful if you find your baby's suckling too painful to trigger the let-down reflex. Why has my milk let-down changed? There are some things that can affect your let-down reflex, such as: anxiety pain or discomfort tiredness caffeine and alcohol cigarette and vape use self-consciousness, which can happen when you are trying to breastfeed outside your home Stress hormones can interfere with oxytocin. There are many things to try if you are having trouble breastfeeding. Try not to think about the let-down reflex. How do I manage a fast let-down? You may also have a fast let-down reflex. This is when your milk let-down is forceful. Milk might spray out if your baby is not latched on. This doesn't necessarily mean that you have oversupply of breastmilk. You can manage a fast let-down reflex by: expressing a small amount of milk before breastfeeding reclining and allowing your baby to control the speed of the flow burping your baby after the first few minutes of breastfeeding removing your baby from your breast when you feel the let-down reflex and reattaching your baby when the milk flow is less forceful How do I deal with an unexpected let-down? Many sensations and thoughts can trigger your let-down reflex. Leaking breasts should usually stop once breastfeeding is fully established or as your child grows older. To manage leaks, you can: apply firm pressure to your breasts when you feel the first sensation of let-down use breast pads wear clothing that disguises milk stains Change your breast pads when they are wet, so your nipples don't become irritated. Resources and support If you need help and advice, or are having problems with breastfeeding, call your maternal child health nurse or a lactation consultant. 
You can call the Australian Breastfeeding Association on 1800 686 268.

You can also call Pregnancy, Birth and Baby to speak to a maternal child health nurse on 1800 882 436, or video call. Available 7am to midnight (AET), 7 days a week.

Last reviewed: April 2025

Need more information?
- Let Down Reflex | Breastfeeding Let Down | Tresillian: find out more about what the let-down reflex is in breastfeeding and the signs that it is happening, which some, but not all, mothers notice.
- The let-down reflex and your milk flow | Australian Breastfeeding Association: oxytocin is the hormone that triggers your milk ejection reflex. A tingling feeling or rhythmic sucking show it's working.

© 2025 Healthdirect Australia Limited
190680
https://math.stackexchange.com/questions/2006467/prove-this-xm1fmx-x
polynomials - prove this $x^{m+1}\mid f^{(m)}(x)-x$ - Mathematics Stack Exchange
prove this $x^{m+1}\mid f^{(m)}(x)-x$

Asked 8 years, 10 months ago. Modified 8 years, 10 months ago. Viewed 547 times. Score: 12.

Let $f(x)$ be a polynomial with complex coefficients such that $x^2\mid f(x)-e^{\frac{2\pi i}{m}}\cdot x$, where $m>1$ is a given positive integer, and define
$$f^{(1)}(x)=f(x),\quad f^{(2)}(x)=f(f(x)),\quad f^{(3)}(x)=f(f(f(x))),\quad\cdots,\quad f^{(m)}(x)=f(f^{(m-1)}(x)).$$
Show that
$$x^{m+1}\mid f^{(m)}(x)-x.$$
I tried to use mathematical induction, but could not find a proof.

Tags: polynomials. Asked Nov 9, 2016 at 11:27 by math110.

Comments:

- It's difficult to use induction, because you would have to carry the condition $x^2\mid f(x)-e^{\frac{2\pi i}{m}}\cdot x$ from one case to the next. The $m$ in there looks to me like it makes it very impractical, perhaps impossible, to prove the induction step. -- Arthur, Nov 9, 2016 at 11:57
- Interesting problem. May I ask where it comes from? -- dxiv, Nov 11, 2016 at 5:10
- @Inequality: I suggest to remove the acceptance mark of my answer, since a bounty question which is not already accepted might attract more users. -- Markus Scheuer, Nov 20, 2016 at 11:30

2 Answers

Answer 1 (score 8; awarded a +100 bounty by Markus Scheuer; answered Nov 23, 2016 at 21:16 by mercio):

Consider the space $A=t\,\mathbb{C}[[t]]=\{a_1t+a_2t^2+a_3t^3+\ldots\mid a_i\in\mathbb{C}\}$.

$A$ can be equipped with the composition law $\circ$, which is linear in the first argument ($f\circ h+g\circ h=(f+g)\circ h$ and $(\lambda f)\circ g=\lambda(f\circ g)$ for $\lambda\in\mathbb{C}$). Thus for any $g\in A$, we get a linear endomorphism $\rho_g(f)=f\circ g$.

If $g=b_1t+b_2t^2+b_3t^3+\ldots$, then $\rho_g(t^k)=g(t)^k=b_1^kt^k+\ldots$. This shows that the "matrix" of $\rho_g$ is triangular (in particular, $\rho_g$ is compatible with the $t$-adic topology) and the coefficients on the diagonal are the sequence $(b_1^n)$.

Restricting modulo $t^{n+1}$ gives a linear map $\rho_g^{[n]}:A_n\to A_n$ (where $A_n=t\,\mathbb{C}_{n-1}[t]$ has dimension $n$) defined by $\rho_g^{[n]}(f)=\rho_g(f)\bmod t^{n+1}$, whose matrix is simply the $n\times n$ submatrix in the top-left corner of the infinite matrix of $\rho_g$. Its eigenvalues are the $b_1^k$ for $k=1,\ldots,n$.

Now suppose you are looking at a $g$ whose $b_1$ is a primitive $n$-th root of unity $\zeta_n$. Then $\rho_g^{[n]}$ has eigenvalues $\zeta_n^k$ for $k=1,\ldots,n$, and since these are all distinct it is diagonalisable, and since their $n$-th power is $1$, $(\rho_g^{[n]})^n$ is the identity of $A_n$. Going back to $A$, this proves that the top-left $n\times n$ block in the matrix of $\rho_g^n$ is $I_n$, and so for any $f\in A$,
$$f\circ g^{\circ n}\equiv f\pmod{t^{n+1}}.$$
Applying this to $t\in A$ you get
$$g^{\circ n}\equiv t\pmod{t^{n+1}}.$$

Comments:

- Instructive approach! Most of the creative work here is finding a proper framework to formulate the problem. If this is done, the answer can be seen at a glance. Very nice! (+1) -- Markus Scheuer, Nov 24, 2016 at 7:27
- wonderful proof! -- Marsan, Nov 26, 2016 at 15:33

Answer 2 (score 1; awarded a +50 bounty by math110; answered Nov 19, 2016 at 21:22, edited Nov 20, 2016 at 8:20, by Markus Scheuer):

Attention [2016-11-20]: This answer is not correct, as it is based upon the wrong assumption $x^m\mid f(x)-\zeta_mx$, $m>1$, instead of $x^2\mid f(x)-\zeta_mx$, $m>1$. I have offered a bounty for compensation in order to support a correct answer to OP's question.

Let $m>1$ be a positive integer and $f(x)$ a polynomial with complex coefficients and degree $n\geq m$. We denote by $\zeta_m$ the $m$-th root of unity
$$\zeta_m=\exp\left(\frac{2\pi i}{m}\right).$$

Claim: The following is valid for $m>1$:
$$x^m\mid f(x)-\zeta_mx\implies x^{m+1}\mid f^{(m)}(x)-x\tag{1}$$
with
$$f^{(m)}(x):=f^{(m-1)}(f(x))\ (m>1),\qquad f^{(1)}(x):=f(x),\qquad f^{(0)}(x):=x.$$

We introduce some more settings for convenience. Since $x^m\mid f(x)-\zeta_mx$, there is a polynomial $q(x)=\sum_{j=0}^{n-m}a_jx^j$ of degree $n-m$ with
$$x^mq(x)=f(x)-\zeta_mx,\quad\text{resp.}\quad f(x)=x^mq(x)+\zeta_mx.\tag{2}$$

Approach: The idea is to repeatedly apply (2) and so reduce $m$ in $f^{(m)}$ until we see that (1) is valid. To keep the calculation manageable we consistently simplify expressions modulo $x^{m+1}$. We do not go the shortest way, but add some intermediate steps to make it easier to see what is going on and to motivate the claim (9), which is central for the answer.

Step $m\to(m-1)$: We obtain
$$\begin{align*}
f^{(m)}(x)-x&=f^{(m-1)}(f(x))-x\\
&=f^{(m-1)}(x^mq(x)+\zeta_mx)-x\tag{3}\\
&\equiv f^{(m-1)}(a_0x^m+\zeta_mx)-x\pmod{x^{m+1}}\tag{4}
\end{align*}$$

Comment: In (3) we substitute the RHS of (2) for $f(x)$. In (4) we note that
$$f(x)=x^mq(x)+\zeta_mx=x^m(a_0+a_1x+\cdots+a_{n-m}x^{n-m})+\zeta_mx\equiv a_0x^m+\zeta_mx\pmod{x^{m+1}}.$$
This behaviour also holds when we consider compositions of $f$, which will become more obvious in the next steps.

Step $(m-1)\to(m-2)$: We obtain from (4)
$$\begin{align*}
f^{(m)}(x)-x&\equiv f^{(m-2)}(f(a_0x^m+\zeta_mx))-x\pmod{x^{m+1}}\\
&\equiv f^{(m-2)}\bigl((a_0x^m+\zeta_mx)^mq(a_0x^m+\zeta_mx)+\zeta_m(a_0x^m+\zeta_mx)\bigr)-x\pmod{x^{m+1}}\tag{5}\\
&\equiv f^{(m-2)}\bigl(x^mq(a_0x^m+\zeta_mx)+\zeta_ma_0x^m+\zeta_m^2x\bigr)-x\pmod{x^{m+1}}\tag{6}\\
&\equiv f^{(m-2)}(a_0x^m+\zeta_ma_0x^m+\zeta_m^2x)-x\pmod{x^{m+1}}\tag{7}
\end{align*}$$

Comment: In (5) we note that the only contribution of $(a_0x^m+\zeta_mx)^m\pmod{x^{m+1}}$ is $\zeta_m^mx^m$, and since $\zeta_m^m=1$ we get $x^m$. In (6) we note that the only contribution of $x^mq(a_0x^m+\zeta_mx)\pmod{x^{m+1}}$ is the constant part $a_0$ of $q$ multiplied by $x^m$.

Observation: Looking at (4) and (7) we might already see a pattern, but to be sure we add one more step.

Step $(m-2)\to(m-3)$: Continuing in the same way, we obtain from (7)
$$f^{(m)}(x)-x\equiv f^{(m-3)}(a_0x^m+\zeta_ma_0x^m+\zeta_m^2a_0x^m+\zeta_m^3x)-x\pmod{x^{m+1}}\tag{8}$$

We see from (4), (7) and (8) a pattern, which we prove next. In fact we could have started the answer with the next step.

Step $(m-k)\to(m-k-1)$: We show that the following is valid for $1\leq k\leq m-1$:
$$f^{(m-k)}\Bigl(a_0x^m\sum_{j=0}^{k-1}\zeta_m^j+\zeta_m^kx\Bigr)\equiv f^{(m-k-1)}\Bigl(a_0x^m\sum_{j=0}^{k}\zeta_m^j+\zeta_m^{k+1}x\Bigr)\pmod{x^{m+1}}\tag{9}$$

We obtain
$$\begin{align*}
f^{(m-k)}\Bigl(a_0x^m\sum_{j=0}^{k-1}\zeta_m^j+\zeta_m^kx\Bigr)
&\equiv f^{(m-k-1)}\Bigl(f\Bigl(a_0x^m\sum_{j=0}^{k-1}\zeta_m^j+\zeta_m^kx\Bigr)\Bigr)\pmod{x^{m+1}}\\
&\equiv f^{(m-k-1)}\Bigl(a_0x^m+\zeta_m\Bigl(a_0x^m\sum_{j=0}^{k-1}\zeta_m^j+\zeta_m^kx\Bigr)\Bigr)\pmod{x^{m+1}}\\
&\equiv f^{(m-k-1)}\Bigl(a_0x^m\sum_{j=0}^{k}\zeta_m^j+\zeta_m^{k+1}x\Bigr)\pmod{x^{m+1}}
\end{align*}$$
using (2) and the same reductions as in the steps before, and the claim follows.

Putting all together: With the help of (9) we can show OP's claim (1). We obtain
$$\begin{align*}
f^{(m)}(x)-x&\equiv f^{(m-1)}(a_0x^m+\zeta_mx)-x\pmod{x^{m+1}}\tag{10}\\
&\equiv f^{(1)}\Bigl(a_0x^m\sum_{j=0}^{m-2}\zeta_m^j+\zeta_m^{m-1}x\Bigr)-x\pmod{x^{m+1}}\tag{11}\\
&\equiv x^mq\Bigl(a_0x^m\sum_{j=0}^{m-2}\zeta_m^j+\zeta_m^{m-1}x\Bigr)+\zeta_m\Bigl(a_0x^m\sum_{j=0}^{m-2}\zeta_m^j+\zeta_m^{m-1}x\Bigr)-x\pmod{x^{m+1}}\tag{12}\\
&\equiv a_0x^m+\zeta_m\Bigl(a_0x^m\sum_{j=0}^{m-2}\zeta_m^j+\zeta_m^{m-1}x\Bigr)-x\pmod{x^{m+1}}\tag{13}\\
&\equiv a_0x^m\sum_{j=0}^{m-1}\zeta_m^j+\zeta_m^mx-x\pmod{x^{m+1}}\tag{14}\\
&\equiv 0\pmod{x^{m+1}}\tag{15}
\end{align*}$$
and the claim follows.

Comment: In (10) we apply the result (4) of the first step to derive $f^{(m-1)}$ from $f^{(m)}$. In (11) we apply the main result (9) $m-2$ times to reduce $f^{(m-1)}$ to $f^{(1)}=f$. In (12) we apply the representation (2), $f(x)=x^mq(x)+\zeta_mx$, and reduce the $m$-th power as before. In (13) and (14) we simplify $\pmod{x^{m+1}}$ similarly to the steps before. In (15) we note $\zeta_m^m=1$, so that $\zeta_m^mx-x=0$, and we also use
$$\sum_{j=0}^{m-1}\zeta_m^j=\frac{1-\zeta_m^m}{1-\zeta_m}=0.$$

Comments:

- @inequality: Many thanks for accepting my answer and granting the bounty! :-) -- Markus Scheuer, Nov 20, 2016 at 6:56
- @Nemo: Oh! Yes, you're right! I've misread the claim. Thanks for pointing at it. I've offered a bounty for compensation. -- Markus Scheuer, Nov 20, 2016 at 8:18
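As a numerical sanity check of the statement (my own sketch, not part of the thread): take $m=3$ and $f(x)=\zeta_3 x+x^2$, which satisfies $x^2\mid f(x)-\zeta_m x$, and compose $f$ with itself three times while truncating modulo $x^{m+1}$. The iterate collapses to $x$, as the accepted argument predicts.

```python
import cmath

def poly_mul(p, q, trunc):
    """Multiply coefficient lists (index = power of x), dropping terms past x^trunc."""
    r = [0j] * (trunc + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= trunc:
                r[i + j] += a * b
    return r

def poly_compose(p, q, trunc):
    """p(q(x)) modulo x^(trunc+1), via Horner's scheme."""
    result = [0j] * (trunc + 1)
    for c in reversed(p):
        result = poly_mul(result, q, trunc)
        result[0] += c
    return result

m = 3
zeta = cmath.exp(2j * cmath.pi / m)
f = [0j, zeta, 1 + 0j]        # f(x) = zeta*x + x^2, so x^2 | f(x) - zeta*x

g = f
for _ in range(m - 1):        # g becomes f^(m), the m-th compositional iterate
    g = poly_compose(f, g, m)

# x^(m+1) | f^(m)(x) - x: coefficients of x^0..x^m of g match those of x
print(all(abs(g[k] - (1 if k == 1 else 0)) < 1e-9 for k in range(m + 1)))  # → True
```

This only checks one small instance, of course; the eigenvalue argument above is what makes it work for every admissible $f$ and $m$.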
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA.
190681
https://ocw.mit.edu/courses/5-111-principles-of-chemical-science-fall-2008/5b6cf0bcaeb5fdcbb332a1c7571fe1e3_I3g7KRIvQPI.pdf
MITOCW | ocw-5-111-f08-lec24_300k

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Please settle down and take a look at this question. OK, let's take 10 seconds. I think that it's a simple math mistake between one and two at least. So, the trick here is you know the pH and the pKa and you want to find the ratio, so you can subtract and do the log. So maybe we'll have this question later or something similar and we can try this one again.

So we're going to talk about buffers again today. I just feel the need to take a moment and reflect on the historic events of the last 24 hours, and talk about how it will affect chemistry. So some of you may have voted for the first time. Some of you may have worked on a campaign for the first time. Some of you may have been very active in a campaign for the first time, either for Obama or McCain, that you got involved. And I thought just to put this election in a little bit of historic perspective in terms of being an undergraduate student or a student and working on a political campaign or being part of a political movement.

So, my father was very active as a political student activist. But the difference between some of you and my father was that he was a political activist at the University of Hamburg in Germany in the 1930's in Hitler's Germany. So he was the leader of the left wing student organization. That was something that put one's life at risk, to take on that role at that time. So, things were heating up a little bit and the Gestapo were discussing some of the activities with the left wing student leaders at college campuses in Germany. And some of them, after the discussions, no one knew where they went; they seemed to disappear.

Now my father was very concerned about this and he decided to lay low for a while, and so he thought, I'll do a semester at another university. And he told his parents that if the Gestapo came looking for him, they should send him a telegram saying "Your Aunt Millie is sick." Since he did not have an Aunt Millie, he knew that that would mean get out now. So he went to another university and he was doing a semester there, and someone he knew told him, you really need to go into hiding. But he didn't really trust this person, so he packed a bag with a few clothes and some toiletries, but he didn't actually leave. Then the next day he came home and there was a telegram under his door. So, you can guess what the telegram said. He grabbed the bag that was already packed and headed down the stairs. The Gestapo was coming up the stairs. My father's name was Heinz Leopold Lushinski, and the Gestapo said to him, "Do you know Herr Lushinski?" And my father said, "Yes, of course, he lives on the top floor." The Gestapo went up, my father went down, and he didn't go back to Germany for 30 years.

So he came to the United States as a political refugee and became a citizen. He voted in every election, every possibility; he was very, very active. My family was very, very active in politics. He gave money every year to the American Civil Liberties Union to protect civil liberties, and he also gave money to the American Rifle Association. He always liked to have a plan B. So, it was sometimes a little humbling to be the only child of this man. He was in his 50's when I was born, and I thought, how can I live up to something like this? Am I ever going to risk my life for what I believe in? If given that choice, would I do the right thing?

And I don't know if I'll ever get an answer to that question, but I talked to my father about this and he said all I need to do is work hard, find something that I love doing, some way that I can contribute, and that's what's really important -- contributing is really important. So I was drawn to teaching, and I love teaching here at MIT because you all are so talented and smart, and it is really an honor and a privilege to be involved in your education. But I feel that in the last 24 hours, we have all received an additional call to service. President-elect Obama said in the campaign that his top priorities are going to be scientific research, coming up with clean energy technologies, and improving healthcare. He called to scientists and engineers. And last night the American people said yes, we like that vision, and they elected him president.

So we have been called, you have been called; he has reached out to students and said, students of science and engineering, you need to contribute. And it's been a while since any president has really called scientists and engineers to action. And last time that happened, a man went on the moon. So let's see what we can do this time. The next challenge is clean energy, healthcare. It's going to be really important for scientists and engineers to get involved, and at the core of energy technologies, and at the core of medicine, is chemistry. So you are in the right place right now. You are going to be the generation that needs to solve these problems, because if you don't solve the energy problem and don't come up with clean alternatives, there isn't going to be much of a planet left for another generation to try to solve those problems. So it's going to be your job, and your job is starting right now with the education that you can get at MIT.
So, it's actually somewhat interesting that today, the day after this election, we are going to talk about one of the units that students in this class have had the most difficulty with over the years, acid based titrations. This has been the undoing of some chemistry individuals. It has been the undoing of some grades of A. It has been the undoing, perhaps, of some interest in chemistry. But I would like to say today, at this moment, it will not be your undoing, it will be your triumph. Every year I challenge students to do the best job on acid based titration ever, and people have been doing well. This might be the last time I teach in the fall. You have actually had the highest grades so far in this class, in the history of the class that I know of, and so this is the challenge. So right after this election, your challenge is to conquer chemistry starting one acid and one base at a time. So, ready to do some acid based titrations? Who are the naysayers in this crowd? Just a few people up there. All right. I have to tell you that what I'm going to tell you about acid based titrations will seem like it makes pretty good sense as I'm saying it. But often, people inform me that when they actually go to work the problems on the test, it's a little less clear on what they're supposed to be doing. So the key to acid based titrations is really to work problems. And so we have, for your benefit, assigned problems for the problem-set due Friday. And so after today, you should be set to do all of the problems on the problem-set. And in terms of acid based titration, you will need a lot of this knowledge again in organic chemistry, biochemistry, if you go to medical school -- I used to TA medical students, they didn't know how to do this. And I said "Who taught you freshmen chemistry?" So it's good to learn to this now here today, work problems, take the next test, and guaranteed it'll be on the final again. 
So you'll learn it now, you'll get lots of points, both on the final and the third exam. All right, so acid-base titrations: they're not that hard, but there are not a lot of equations to use, and I think that people in chemistry are used to asking, what equation do I use? No, it's really about thinking about what's going on in the problem, and as the problem proceeds, as more, say, strong base is added, the problem changes. So it's figuring out where you are in the titration and knowing what sort of steps to apply. So here are some titration curves, and one thing you may be asked to do is draw a titration curve, so you should be familiar with what they look like. So we talked last time about strong acids and strong bases. So if you have a strong base, you're going to have a basic pH, and then as you add the strong acid, you will go to the equivalence point -- the equivalence point being when you've added the same number of moles of acid as there is base, or base as there is acid, an equal number of moles. And when you mix a strong acid and a strong base, you form a salt, and the salt is neutral in pH, because the conjugate of a strong acid or a strong base is ineffectual, it doesn't affect the pH, it's neutral. So we have pH 7, and then you continue to add, in this case, more strong acid, and the pH goes down. So for the other titration it's pretty much the same, except you start at acidic pH's, go up to neutral pH, and then go basic. So we talked about these last time and we worked a couple of problems, but now we're going to move into the slightly more difficult type of problem, which has to do with when you have a weak acid or a weak base being titrated. So let's look at the difference of the curve to start off with. So here we have the strong acid and the strong base, and here we have a weak acid and a strong base. One thing you may notice right off is that the equivalence point has a different pH.
So, a strong acid and strong base again: mix, you form a salt that's neutral, pH 7. But if you're titrating a weak acid with a strong base, the conjugate of the strong base will be ineffective, but the conjugate of the weak acid will act as a base. So at the equivalence point, when you've added an equal number of moles of your strong base as you had weak acid, you'll have the conjugate base around, and the pH will be greater than 7. So in working the problems, if you get an answer for the pH at the equivalence point that doesn't fit this type of titration problem, you're going to know that you did something wrong, and you need to go back and check your math. Another big difference has to do with the curve shape down here, and so you notice a difference over here versus over there. In a titration that involves a weak acid and a strong base, you have a part of the curve that's known as a buffering region, and the pH is fairly flat in this buffering region, as shown down here. That's in contrast -- there's no such buffering region on this side. So here the pH will go up, it'll level off, and then go up again. And this, for some of you, is probably the frustration in doing acid-base titrations in lab, because you're adding and nothing's happening and nothing's happening and nothing's happening, and you're in this region, then all of a sudden you add just a little more and you're up here. So notice how steep that is over here. So sometimes when you're in the buffering region, it seems like you're never going to reach the end of the titration, and then it'll happen all too quickly. So, buffering region: remember a buffer is something that has a conjugate weak acid and weak base pair, and in a buffering region the pH pretty much stays fairly constant. It acts as a buffer, maintaining the pH by being a source or sink of protons, and so here the pH is staying constant in that buffering region.
So those are some of the differences between the types of curves. Another point that I will mention, or term I will mention, that has to do with a weak acid and strong base or a weak base and strong acid, is this 1/2 equivalence point concept. At the 1/2 equivalence point you've added 1/2 of the amount of strong base that you need to get to the equivalence point, and that's right in the middle of that buffering region. So that's another point where you'll be asked to calculate the pH. So now let's look at different points in a titration. First, let's walk through and just think about what is happening. So when we start in this titration of a weak acid with a strong base, before we've added any of the strong base, all we have is a weak acid. So it is a weak acid in water type problem. And so here I've drawn our acid, and the acid has its proton, which it is going to give up when you start doing the titration. So that's what we have at zero volume. Then we start adding our strong base, and the strong base is going to react with the acid, one-to-one stoichiometry -- it's a strong base. It'll pull protons off the same number of moles of the weak acid as the number of moles of the strong base that were added. And so then you'll start to have a mixture of your conjugates, your weak acid and your conjugate base. So the base is A- here. And if you have a mixture of a weak acid and its conjugate base, that's a buffer, and so you'll move into the buffering region here. So any volume that is greater than zero and less than the equivalence point is going to be around in that region. Then we have a special category of the buffering region, which is when you've added the volume to get to the 1/2 equivalence point. And when you've done that, you will have converted 1/2 of the weak acid to its conjugate base, so you'll have an equal number of moles of your weak acid and moles of the conjugate base -- 1/2 has been converted. And so that's a special category right there.
Then you get to the equivalence point. At the equivalence point, you've added the same number of moles of strong base as the number of moles of weak acid you have, so you've converted all of your weak acid to its conjugate base. So all you have is conjugate base now, and that's controlling the pH, so the pH should be greater than 7. So that's a weak base in water problem. And if you keep going, then you're going to end up with a strong base in water problem. The weak base will still be around, but it will be negligibly affecting the pH compared to the fact that you're dumping strong base into your titration. And so that's this part of the curve. So you see that in one type of problem, one titration problem, you actually have a lot of sub-problems, or sub-types of problems: you'll have weak acid, buffer, a special category of buffer, a conjugate base or a salt issue, and then a strong base. And this is one of the things that people have trouble with in the titrations, because we may not ask you to do all the points, we may just sort of jump in somewhere and say, okay, what is the pH at the equivalence point, and you need to think about what's happened to get to the equivalence point. Or we may jump in and ask you about a region that would be in the buffering region, and you have to remember that at that point you should have some of the weak acid and also some of the conjugate base being formed. So, it seems like there are a lot of different things, but there are only five types of problems. But in a titration curve, you run into a lot of those different types at different points in the problem. So now let's go the other direction and consider titration of a weak base with a strong acid. So here's what that curve would look like. You're going to start basic, of course, because you're starting with a weak base -- you haven't added any strong acid yet. As you add strong acid, the pH will decrease.
Because it is a weak base, you will be forming some of its conjugate as you add the strong acid, and so you'll go through a buffering region again, where the curve will be flat and the pH will be pretty much the same for a region. Then the curve will drop again and you'll get to the equivalence point. At the equivalence point, you've added the same number of moles of strong acid as you had weak base, so all of your weak base is converted to its conjugate acid, and so you should be acidic at the equivalence point, and then the curve goes down. So again, we can think about this in terms of what is happening. In the beginning it's just a weak base in water problem, but as you add strong acid, you're protonating some of your base and forming its conjugate acid here, and you're going to be in the buffering region. Then at the 1/2 equivalence point, you've added enough moles of strong acid to convert 1/2 of the weak base to its conjugate, so those are going to be equal to each other -- the number of moles of the weak base and the number of moles of its conjugate acid. At the equivalence point, you've converted all of the weak base you started with to its conjugate acid, so it'll be a weak acid in water problem, and then at the end it's strong acid. So the trick is recognizing what type of problem you're being asked to do, and a lot of times people get a question and they just write down, OK, at this point in the titration curve, it's going to be a weak base in water problem. And just writing that down -- most of the time if you get that far, you do the rest of the problem correctly. So just identifying the type of problem, and there are only five, gets you a long way toward getting the right answer. So let's do an example. We're going to titrate a weak acid with a strong base. We have 25 mils of 0.10 molar acid with 0.15 molar of a strong base, NaOH, and we're given the Ka for the acid.
First we start with 0 mils of the strong base added. So what type of problem is this? It's a weak acid problem. So we know how to write the equation for a weak acid, or for an acid in water: we have the acid in water going to hydronium ions and a conjugate base. So, weak acid. For a weak acid, we're going to use our Ka, and we're going to set up our equilibrium expression. So here we have 0.10 molar of our acid. We're going to have some of that go away in the equilibrium, forming hydronium ion and some conjugate base, and so we have expressions for the concentrations at equilibrium. And we can use our Ka -- Ka for acid, it's a weak acid problem -- and we can look at products over reactants. So, see, now we're doing a titration problem, but you already know how to do this problem, because we've seen a weak acid in water problem before. So we have x squared over 0.10 minus x here. We can assume x is small, get rid of this minus x, and then later go back and check it, so that just makes the math a little bit easier. And we can solve for x and then we can check -- we can take this value, 0.00421 over 0.10, and see whether that's less than 5%; it's close, but it is. So that assumption is OK. If it wasn't, what would we have to do? Quadratic equation. All right, so now here's a sig fig question: tell me how many sig figs this pH actually has. OK, 10 seconds. So, in the first part of the problem we had a concentration that had 2 significant figures, the 0.10 molar. Sometimes later, people have extra significant figures that they're carrying along, but we had those 2, and so we're going to have 2 after the decimal point in the answer for the pH. So again, the number of significant figures that is limiting becomes the number of digits after the decimal point. All right, so we have one pH value, and now we're going to move on. So let me just put our one pH value down. We have volume of strong base, and pH over here, and we're starting here with zero moles added.
We have a pH of 2.38. It's a weak acid, so it should be an acidic pH, which it is. All right, so now let's move into the titration problem, and now 5.0 mils of the strong base have been added, and we need to find what the pH is now. So it's a strong base, so it's going to react almost completely -- that's our assumption. If it's strong, it goes completely. And so, the number of moles of the strong base that we add will convert the same number of moles of our acid over to its conjugate. So we can just do a subtraction then. First, we need to know the initial moles of the acid that we had: we had 25 mils of 0.10 molar. We calculate the number of moles of hydroxide added -- we added 5 mils, it was 0.15 molar -- and so we can calculate the number of moles of the strong base that were added. So the strong base will react completely with the same number of moles of the weak acid. And we're going to do then -- we have the moles of the weak acid here, minus the number of moles of the strong base we've added, and so we're going to have 1.75 times 10 to the minus 3 moles of the weak acid left. So then how many moles of the conjugate base will be formed by this reaction? What do you think? Same number. So 0.75 times 10 to the minus 3. So always remember that in these titration problems, unless nothing has been added yet and you're at zero mils added, some subtractions are going to have to occur, because something has happened. You've converted something; things are different than when you started. All right, so now we have moles of weak acid and we have moles of its conjugate -- what type of problem is this? If you have a weak acid and its conjugate base -- buffer, right. So we're going to do a buffer problem and we need to know the molarity first. So we have moles over volume -- again, the volume: you had 25 mils to begin with, you added 5 more, so you have a total volume of 30 mils, and we can calculate the concentrations of both.
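The zero-mils step of the worked example (a weak acid in water) can be sketched in Python. The transcript never states the numerical Ka, so the value below is an assumption, back-solved from the quoted x = 0.00421 and the half-equivalence pKa of 3.75 given later:

```python
import math

Ka = 1.8e-4   # assumed Ka, consistent with the quoted pKa of about 3.75
c0 = 0.10     # initial concentration of the weak acid, mol/L

# Small-x approximation: Ka = x**2 / (c0 - x) is about x**2 / c0
x = math.sqrt(Ka * c0)        # hydronium ion concentration

# 5% rule: the approximation is only OK if x/c0 < 5% (here it just is)
assert x / c0 < 0.05

pH = -math.log10(x)
print(round(pH, 2))           # about 2.37, matching the quoted 2.38
```

With the exact Ka from the problem set, solving the full quadratic Ka = x²/(c0 − x) changes the pH only in the second decimal place here, which is why the lecture's 5% check passes.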
Now we can set up our equilibrium table, and this looks like a buffer problem because it is; and looking like a buffer problem, you have your weak acid over here, but now you have something on the other side too -- it's not zero now, we're starting with some conjugate base. So we have 0.0583 minus x on one side, and we have 0.025 molar plus x on the other side. We can use Ka again. This is set up as an acid in water going to hydronium ions and conjugate base, so we can use our Ka, set things up, and we can always say, let's see if x is small, make an assumption, check it later. That'll simplify the math. So we get rid of the plus x and the minus x. Again, we're saying that if x is small, the initial concentrations are going to be more or less the same as the concentrations after the equilibration occurs. And we can calculate 4.13 times 10 to the minus 4 as x; that is a pretty small number. And we have to check it, and yup, it's small enough, it's under 5%, so that's OK. So now we can plug this in. x is our hydronium ion concentration; minus log of the hydronium ion concentration is pH, and we can calculate the pH to be 3.38 -- again, we're limited to two significant figures by the concentration. So now we've added 5 mils down here, and our pH has gone up a little bit; it's now at 3.38 over here. There's another option for a buffer problem. What's the one equation in this unit? Our friend, Henderson-Hasselbalch. And yes, you can use that here too, assuming that you check the assumption and it's OK. Most people will prefer to do this because it is a bit easier. You weren't given the pKa in this problem, though, you were given the Ka, so it's pretty easy to calculate -- minus log of the Ka is the pKa. So you can calculate that, put that in. You have your concentrations -- and it should be concentrations, but you may notice that if you actually had moles, the volume would cancel here.
So here are the concentrations, but with the same volume, the volume term does cancel. It makes this a little faster and it gives the same answer, which is great. To use Henderson-Hasselbalch you also need the 5% rule to be true, because Henderson-Hasselbalch is assuming that x is small. It's assuming that the initial concentrations and the concentrations after equilibrium are about the same. So we can check the assumption. We can back-calculate the hydronium ion concentration, which would be x, and see if it's small -- we already know it is, so it's OK. So there are two options for buffer problems, but do not use the Henderson-Hasselbalch equation when you aren't in the buffering region; it doesn't hold then. So again, you check the assumption, and if it's OK, it's fine. If not, you need to use option one and you need to use the quadratic equation. All right, so, buffering region. Now we're at the special kind of problem in the buffering region, the 1/2 equivalence point. Here you've added 1/2 the number of moles of the strong base, enough to convert 1/2 the moles of the weak acid to its conjugate. So at this point, the concentration of HA equals the concentration of A minus -- equal numbers of moles in the same volume, so those are equal. You can use Henderson-Hasselbalch here, and find that if they're equal, you're talking about minus log of 1, so the pH is going to equal the pKa. And you're done with this type of problem. I have been known to put 1/2 equivalence problems on an exam, because exams are often long, you have only 50 minutes, there are lots of different types of problems, and this problem should not take you a long amount of time. You do not have to prove to me that this is true. All you need to remember: 1/2 equivalence point, pH equals pKa, and if you calculate the pKa, you're done. So this is a short type of problem. If you remember the definition of the 1/2 equivalence point, it's easy to do. So now we have another number, 3.75, and we're working on our curve. Now let's move to the equivalence point. At the equivalence point, you've added the same number of moles of your strong base as you had weak acid. So you've converted all of your weak acid to its conjugate base. So the pH should be greater than 7. Now all you have is conjugate base, that's basic, pH should be greater than 7. So when you are doing this titration, you have your weak acid and your strong base. You're going to be forming a salt here -- a salt problem; you can tell me about salts. And so, just remind me, what does the Na plus contribute to the pH here? It's going to be neutral. And what about this guy down here? Yeah, it's going to be basic. So, the sodium -- anything group 1, group 2 -- no effect on pH, they're neutral. But if you have a conjugate base of a weak acid, that's going to be basic. Salt problems are really just part of what you already know. So always check your work. If your pH doesn't make sense from what you know, you might have made a math mistake. So let's calculate the actual pH at the equivalence point. We know that it should be basic, but what is it going to be? First, we need to know how much of the strong base we had to add, because we need to know about all the moles. So how much of this did we need to add? We needed to add enough of the strong base to convert all of the moles of the weak acid to its conjugate. We had 2.5 times 10 to the minus 3 moles of our weak acid. That's all going to be converted to moles of the conjugate base, and so that's going to be equal to the number of moles we needed to do it. So we needed 2.5 times 10 to the minus 3 moles of our strong base to do that complete conversion. We know the concentration of the base was 0.15 molar. So we would have needed 1.67 times 10 to the minus 2 liters of this concentration added to reach the equivalence point.
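The buffering-region arithmetic just described (5.0 mils of 0.15 M NaOH into 25 mils of 0.10 M weak acid) can be sketched both ways, via the Ka expression and via Henderson-Hasselbalch. The Ka value is an assumed 1.8 × 10⁻⁴ (pKa about 3.75), since the transcript omits the number:

```python
import math

Ka = 1.8e-4                       # assumed Ka; the transcript omits the number
pKa = -math.log10(Ka)

# 5.0 mils of 0.15 M NaOH added to 25 mils of 0.10 M weak acid
mol_HA0 = 0.025 * 0.10            # 2.5e-3 mol weak acid initially
mol_OH = 0.0050 * 0.15            # 7.5e-4 mol strong base added
mol_HA = mol_HA0 - mol_OH         # 1.75e-3 mol weak acid left
mol_A = mol_OH                    # 7.5e-4 mol conjugate base formed

V = 0.025 + 0.0050                # total volume: 30 mils
HA, A = mol_HA / V, mol_A / V     # 0.0583 M and 0.025 M

# Option 1: Ka expression with the small-x assumption
x = Ka * HA / A                   # hydronium ion concentration
pH1 = -math.log10(x)

# Option 2: Henderson-Hasselbalch; the volume cancels, so moles work too
pH2 = pKa + math.log10(mol_A / mol_HA)

print(round(pH1, 2), round(pH2, 2))   # both about 3.38

# Half-equivalence point: [HA] equals [A-], so pH = pKa
print(round(pKa, 2))                  # 3.74 here (3.75 for the lecture's exact Ka)
```

The two options agree exactly because with the small-x assumption they are the same algebra, just rearranged, which is why the 5% check has to hold for Henderson-Hasselbalch too.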
So then the total volume that we're going to have at the equivalence point is the 25 mils that we had to begin with, plus this 16.7 mils, to make the final, total volume. And remember, you always need to think: what is the total volume, how much has been added to get to this point in the titration curve? Then we can calculate molarity -- we know how many moles of conjugate base have been formed, and we know the new volume, so we can calculate the concentration of the conjugate base. So now, you can help me solve this problem. Set up an equation for me to solve it. Let's take 10 seconds. That's the best score we've had today. Yup. So now we're talking about a conjugate base. We have converted all of the weak acid to the conjugate base, and so it's a weak base in water problem, so we're going to talk about a Kb. If you were only given the Ka for this problem, how would you find Kb -- what interconnects Ka and Kb? Kw, right. So you can calculate it -- here it's given to you, but you could calculate it if you had a calculator, and you would find that this is true. Now it's a weak base in water problem. We're not in the buffering region anymore. We've converted all of our weak acid to the conjugate. So it's a weak base in water problem. So we have x squared over 0.060 -- that was the concentration we calculated -- minus x. So again, think about what type of problem it is. So again, weak base in water problem -- x squared over 0.060 minus x. And we can assume that x is small, and calculate a value for x, which is 1.83 times 10 to the minus 6, and then we're going to calculate pOH, because now x is the hydroxide ion concentration.
Because in a weak base in water problem, the base takes a proton from water -- here is your base, and water is acting as the acid -- so the products are hydroxide and the conjugate acid of the weak base. So when we are solving for x, we're solving for the hydroxide ion concentration, so we're calculating a pOH, from which we can then calculate a pH. So we can take 14 minus 5.74 and get our value. And it's bigger than neutral, it's 8-something, it's basic, and that makes sense -- it is a weak base in water problem. So, let's see, it's 8.26, so now we're up here in our curve, and we're at 8.26, and that's going to be greater than 7 for this type of problem. So that makes sense, it's good. Greater than 7 is what we want to see. So now, you've gone too far -- you've passed the equivalence point, and you keep adding your strong base in. Now you still have some of the weak conjugate base around. So you still have this around, but you only have 1.83 times 10 to the minus 6 molar of it. So, a very little amount -- x is small. So your pH is going to be dictated by the amount of extra strong base you're adding. This is similar, then, to a strong acid or strong base in water problem. So if you're 5 mils past the equivalence point, 5 mils times your concentration of the strong base -- so you have 7.5 times 10 to the minus 4 moles extra. Then you need to calculate a concentration of that, and so remember the whole volume -- you're 5 mils past, you had 25 mils to start with, and you had to add 16.7 mils to get to the equivalence point. That's your total volume; you get a concentration, and that's your concentration of hydroxide. It reacts completely -- you don't have to do any equilibrium table here. It goes to completion, it's a strong base. You could try adding that value from your other weak base to this, but remember, that's times 10 to the minus 6, so it's not going to be significant with significant figures.
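The equivalence-point and past-equivalence steps can be sketched the same way (again with an assumed Ka of 1.8 × 10⁻⁴, since the transcript omits the number):

```python
import math

Ka = 1.8e-4                  # assumed Ka for the weak acid
Kw = 1.0e-14
Kb = Kw / Ka                 # Kb of the conjugate base, about 5.6e-11

# Equivalence point: enough 0.15 M NaOH to match 2.5e-3 mol of acid
mol = 0.025 * 0.10           # mol of weak acid = mol of A- formed
V_base = mol / 0.15          # 16.7 mils of NaOH needed
V_tot = 0.025 + V_base       # 41.7 mils total
A = mol / V_tot              # about 0.060 M conjugate base

# Weak base in water: Kb = x**2 / (A - x); assume x is small
x = math.sqrt(Kb * A)        # hydroxide ion concentration, about 1.8e-6
pOH = -math.log10(x)
pH_eq = 14 - pOH
print(round(pH_eq, 2))       # about 8.26 -- basic, as expected

# 5.0 mils past the equivalence point: excess strong base dominates
mol_extra = 0.0050 * 0.15            # 7.5e-4 mol excess OH-
OH = mol_extra / (V_tot + 0.0050)    # total volume now about 46.7 mils
pH_past = 14 - (-math.log10(OH))
print(round(pH_past, 2))     # about 12.21
```

Note the sanity check the lecture insists on: the equivalence-point pH comes out greater than 7 because only the conjugate base is left, and past the equivalence point the tiny 10⁻⁶ M contribution from the weak base is negligible next to the excess hydroxide.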
So you can just use this value -- plug it into pOH, calculate it, and then calculate pH. And so now we're somewhere up here at pH 12.21, 5 mils past. And there we've worked a titration problem. So let's review what we saw. In the beginning, zero mils of the strong base: we have a weak acid in water problem. We moved into the buffering region, where we had our weak acid and the conjugate base of that weak acid. At the equivalence point, we've converted all of the weak acid to the conjugate base, so it's a weak base problem. And then beyond the equivalence point, it's a strong base problem. That's what we've just worked. So, we can check these all off now. You know how to do all of these types of problems. And there are not that many -- you just need to figure out where to apply what. And if you can do that, you're all set, this unit will be easy for you, and you can go through and make me very happy on the exam. There's nothing -- well, there are few things in life as beautiful to me as a perfectly worked titration problem. It really brings me joy, and I've had people write on the exam sometimes, "I hope that my solution to this brings you joy." And I will often write, "Yes, it does," and put a smiley face. Because it really is nice to see these beautifully worked. I know, I'm a little nerdy and geeky, but after yesterday, being smart and a nerd and a geek is cool again. All right, so let me just tell you where we're going. We have five more minutes, and actually that's perfect, because I can get through some rules in those five minutes. So let's do five minutes of rules. Oxidation-reduction doesn't have a lot of rules, so five minutes is actually all we need to do that. Oxidation-reduction involves equilibrium, it involves thermodynamics. I like it because it's really important for reactions occurring in the body, and acid-base as well -- pKa's are really important to that.
And so, between acid-base and oxidation-reduction, you cover the way a lot of enzymes work. So let me give you five minutes of rules, and that will serve you well in this unit. Some of these are pretty simple. For free elements, each atom has an oxidation number of 0, so this would be 0. So, an oxidation number of 0 in a free element. For ions that are composed of one atom, the oxidation number is equal to the charge of the atom, so a lithium plus 1 ion would have an oxidation number of plus 1. Again, pretty straightforward. Group one and group two make your lives easy -- they have a lot of consistent rules. Group one metals in the periodic table have oxidation numbers of plus 1. Group two metals have oxidation numbers of plus 2. Aluminum is plus 3 in all its compounds. Pretty simple. Now we get to things that are a little more complicated but still useful: oxygen. Oxygen is mostly minus 2, but there are exceptions to that, such as in peroxides, where it can have an oxidation number of minus 1 -- if it's in a peroxide with a group one metal, it will be minus 1. Remember, group one is always plus 1 and group two is always plus 2, and so hydrogen has to accommodate that. So hydrogen is usually plus 1, except when it's in a binary compound with one of these particular metals in group one or group two, where it's minus 1. Fluorine: always minus 1. The other halogens -- chlorine, bromine, iodine -- are also usually negative, but if they're with oxygen, then it changes. So, here is an example. In neutral molecules, the sum of the oxidation numbers must be 0. When the molecule has a charge, the sum of the oxidation numbers must be equal to that charge. So, let's do a quick example. Hydrogen, in this case, is going to be what? Plus 1 -- it's not with a group one or group two metal here. So what does that leave for nitrogen? And that makes the sum plus 1, which is equal to the charge on that molecule, so that works.
So we might not have known nitrogen, but we can figure it out if we know the rules for hydrogen and we know what it all has to add up to. And so, this unit is sometimes a relief after acid-base, because it's all about simple adding and subtracting -- it's not so bad. OK, oxidation numbers do not have to be integers. An example here: you have superoxide -- what would its oxidation number be? Minus 1/2. And those are the rules, and then on Friday we'll come back and we'll look at some examples.
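The bookkeeping rule — oxidation numbers sum to the overall charge — is easy to script. A minimal sketch; the specific species below (ammonium, water, superoxide beyond the lecture's own) are illustrative choices, not taken from the transcript:

```python
from fractions import Fraction

def solve_unknown(known, n_unknown, charge):
    """Oxidation numbers must sum to the overall charge: given the
    known contributions, return the oxidation number shared by each
    of the n_unknown remaining (equivalent) atoms."""
    return Fraction(charge - sum(known), n_unknown)

# Ammonium, NH4+ (an illustrative example): four H at +1, solve for N
print(solve_unknown([+1] * 4, 1, +1))   # -3

# Water: two H at +1, solve for O
print(solve_unknown([+1] * 2, 1, 0))    # -2

# Superoxide, O2-: two equivalent O atoms, nothing else known
print(solve_unknown([], 2, -1))         # -1/2 -- not an integer
```

Using `Fraction` rather than a float keeps cases like superoxide exact, matching the lecture's point that oxidation numbers do not have to be integers.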
190682
https://www.teacherspayteachers.com/Product/Graphing-Logarithmic-Functions-Cheat-Sheet-and-Video-3731508
Graphing Logarithmic Functions Cheat Sheet and Video Description This reference sheet for graphing logarithmic functions walks students through identifying x and y shifts and the base, identifying the parent function, creating a table for the parent function, shifting the parent table, plotting the points from the shifted table and sketching in the vertical asymptote. Students can color the reference sheet to personalize it for their notebook or binders. The sheet can also be enlarged into a poster as an anchor chart for graphing logs. Includes video link.
190683
https://en.wikipedia.org/?title=Trigonometric_identity&redirect=no
Trigonometric identity - Wikipedia From Wikipedia, the free encyclopedia Redirect to: List of trigonometric identities This page was last edited on 3 May 2006, at 12:25 (UTC).
190684
https://www.youtube.com/watch?v=EPoNTqc6dME
Maxima and minima on a CLOSED interval (KristaKingMath) Krista King 273000 subscribers 149 likes Description 16421 views Posted: 27 Jul 2015 ► My Applications of Derivatives course: The process for finding the extrema on a closed interval is the same as the process for finding extrema on an open interval. You'll take the derivative of the function, and then set the derivative equal to 0 to solve for critical points. Because you're looking at a closed interval, you should only consider the critical points within the interval. Any critical points outside the interval can be ignored. Then you'll use the first derivative test to characterize the critical points inside the interval, and both endpoints of the interval. Because the interval is closed, you can call the critical point or endpoint with the highest value the global maximum, and you can call the critical point or endpoint with the lowest value the global minimum. All other critical points or endpoints will be called local maxima and local minima, which of course will be the local extrema of the function on the closed interval. ● ● ● GET EXTRA HELP ● ● ● If you could use some extra help with your math class, then check out Krista’s website // ● ● ● CONNECT WITH KRISTA ● ● ● Hi, I’m Krista! I make math courses to keep you from banging your head against the wall. ;) Math class was always so frustrating for me. I’d go to a class, spend hours on homework, and three days later have an “Ah-ha!” moment about how the problems worked that could have slashed my homework time in half. I’d think, “WHY didn’t my teacher just tell me this in the first place?!” So I started tutoring to keep other people out of the same aggravating, time-sucking cycle. 
Since then, I’ve recorded tons of videos and written out cheat-sheet style notes and formula sheets to help every math student—from basic middle school classes to advanced college calculus—figure out what’s going on, understand the important concepts, and pass their classes, once and for all. Interested in getting help? Learn more here:
FACEBOOK // TWITTER // INSTAGRAM // PINTEREST // GOOGLE+ // QUORA //

Transcript: In this video we're talking about how to find the extrema of a function on a closed interval. In this particular problem we've been given the function f(x) = sin²(x), and we've been asked to find the extrema of this function over the closed interval [0, 3]. What are we talking about when we say the extrema of a function? First of all, we're talking about the maxima and minima of the function, its highest points and its lowest points, but only those extrema that lie inside the closed interval [0, 3]. This is like any other optimization problem where you're asked to find local maxima and minima, except that we have this closed interval, and when that's the case we also have to test the endpoints of the interval, x = 0 and x = 3, to say whether or not those endpoints represent the highest value or the lowest value that the function attains inside the interval. In other words, we're going to have to pay attention to critical points but also to the endpoints of the interval, in this case x = 0 and x = 3.

The first thing we're going to do is find the derivative of our original function f(x). Let's first rewrite the function as (sin x)². This doesn't change the value of the function at all; we've just brought the exponent outside, and the two forms are exactly the same, but this one is easier to differentiate because we can see that it's a power function. To take the derivative we use the power rule together with the chain rule. The chain rule tells us to take the derivative of the outside function first, ignoring the inside function completely, and then multiply by the derivative of the inside function. Here the inside function is sin x, so we ignore it for a moment and deal with the power function: bring the 2 out in front, leave the sin x alone, and subtract one from the exponent, so 2 − 1 gives us 1. Then, according to the chain rule, we multiply by the derivative of the inside function, and the derivative of sin x is cos x. There's no reason to write the exponent of 1, so when we simplify we get

f′(x) = 2 sin x cos x.

This is our derivative function. To find critical points we set it equal to zero. Dividing both sides by 2 gives 0 = sin x cos x. In the same way that you factor one side and set each factor individually equal to zero, here we have two factors, sin x and cos x, and we can set each equal to zero: if sin x = 0 the right-hand side is zero and the equation is true, and likewise if cos x = 0. Solving these individually by looking at the unit circle, sin x = 0 when x = 0, π, 2π, 3π, ..., and also at −π, −2π, going in both directions. And cos x = 0 when x = π/2, 3π/2, 5π/2, ... in one direction, or −π/2, −3π/2, ... in the other.

All of these values are potential critical points, but here's where the interval comes in for the first time: we only care about the critical points that lie inside the interval from x = 0 to x = 3. If there's a critical point for the function but it's outside of the interval, it's irrelevant, because we're only interested in maxima and minima that lie inside this interval. So we can cross out any critical points that don't lie in [0, 3]. First of all, −2π and −π are less than zero, so they're gone. Zero is at the left endpoint of the interval, which means it can't be a critical point inside the interval; the function can't change direction there, because we're not looking at the part of the function defined to the left of x = 0, and it's already represented by the endpoint. We know π ≈ 3.14, and the right endpoint of our interval is 3, so π is greater than 3 and lies outside the interval; multiplying 3.14 by 2 or by 3 gives values that are also greater than 3 and therefore irrelevant. As for the other list, −π/2 is negative, so that's gone. π/2 is about 1.57, which is inside the interval [0, 3], so that one stays. But 3π/2 is already greater than 4, outside the interval on the right, and anything larger lies outside too. Therefore the only critical point that we care about is x = π/2.

Now, as with any extrema problem, we draw ourselves a simple number line. We always mark the endpoints of the interval we've been given, 0 and 3, and include any critical numbers that are still inside the interval, in this case only π/2, which goes right in the middle. We know that x = π/2 is probably a critical point where the function changes direction, but in order to say whether it's a local maximum or a local minimum we need to figure out what the function is doing to the left and to the right of π/2, in other words whether it's increasing or decreasing on each side. To do that we use test values. Since π/2 ≈ 1.57, we could use a test value of 1 on the left and 2 on the right, because 1 is between 0 and π/2 and 2 is between π/2 and 3. But we're going to be plugging our test values into the derivative 2 sin x cos x, so it's easier to use values in terms of π: let's use π/4, because π/4 is greater than 0 but less than π/2, and 3π/4, because that value is greater than π/2 but less than 3.

Now we use the first derivative test to figure out the increasing and decreasing behavior of the original function; remember, for the first derivative test we plug the test values into the first derivative. First,

f′(π/4) = 2 sin(π/4) cos(π/4) = 2 · (√2/2) · (√2/2) = 1.

The specific value we get is irrelevant; the only thing that matters is whether the result is positive or negative, and the fact that we got positive 1 means the function is increasing over the interval (0, π/2). Next we test 3π/4:

f′(3π/4) = 2 sin(3π/4) cos(3π/4) = 2 · (√2/2) · (−√2/2) = −1.

Again the specific value is not important; what matters is that we got a negative value, which means the original function is decreasing on (π/2, 3).

What the first derivative test gives us is a really clear picture of what's happening at π/2: as the function gets close to π/2 on the left-hand side it's increasing, it's going up; it gets to π/2, then it changes direction and starts going down. We can literally see the top of the peak, so the function has a local maximum at π/2. And because the function is increasing on the entire interval (0, π/2) and then decreasing on the entire interval (π/2, 3) (there were no other critical points, so there are only these two intervals: π/4 told us the behavior of the function over all of (0, π/2), and 3π/4 told us the behavior over all of (π/2, 3)), the function's values at x = 0 and x = 3 could not possibly be larger than its value at π/2: it starts low, increases the entire time until it reaches π/2, then decreases the entire time until it reaches x = 3. What we can conclude, then, is that the function has its largest value in the interval at π/2, so we can say global maximum; we don't even have to say local, because since the interval is closed and we're only looking at this particular part of the function, we know this is the absolute maximum, or global max, at x = π/2. We'll plug that value into the original function at the end of the problem to get the corresponding y-value, so we can give the coordinate point of the global maximum.

But now we need to check the endpoints of the interval, x = 0 and x = 3; we just plug them into the original function to see what the value of the function is at each one. Taking x = 0 and plugging it into f(x) = (sin x)², we get f(0) = (sin 0)² = 0² = 0. What about f(3)? Plugging 3 into the original function gives (sin 3)², and a calculator shows that's about 0.02. Since 0 is less than sin²(3), the lowest value of the function occurs at x = 0, because that's where we found this zero value. That means the global minimum, or absolute minimum, is at the endpoint x = 0. We already plugged that into the original function and found zero, so in other words the function passes through the origin, (0, 0), and that's the global minimum of the function over the interval.

Our last step is to plug π/2 into the original function to find the associated y-value: f(π/2) = (sin(π/2))² = 1² = 1, so the point is (π/2, 1). The global maximum occurs at x = π/2 and the value of the function there is 1, so the largest value that the function attains in the interval is y = 1, and it gets there at π/2. So there's the global maximum and the global minimum, and that's how you find the extrema of a function over a closed interval.
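The procedure walked through above can be sketched in a few lines of Python. This is a minimal illustration, not code from the video; the hand-derived formula for f′, the hard-coded search over multiples of π/2, and the ±0.01 offsets used to read the sign of f′ around each critical point are all arbitrary choices made for this sketch.

```python
import math

def f(x):
    # Original function from the example: f(x) = sin^2(x)
    return math.sin(x) ** 2

def fprime(x):
    # Derivative via power rule + chain rule, as in the video
    return 2 * math.sin(x) * math.cos(x)

a, b = 0.0, 3.0  # closed interval [0, 3]

# f'(x) = 2 sin x cos x = sin 2x vanishes at x = n*pi/2.
# Keep only the critical points strictly inside (a, b).
critical = [n * math.pi / 2 for n in range(1, 10) if a < n * math.pi / 2 < b]

# First derivative test: check the sign of f' just left and right of each point
for c in critical:
    left, right = fprime(c - 0.01), fprime(c + 0.01)
    kind = "max" if left > 0 > right else "min" if left < 0 < right else "neither"
    print(f"x = {c:.4f}: local {kind}")

# Compare the function's values at the endpoints and interior critical points
candidates = [a, b] + critical
gmax = max(candidates, key=f)
gmin = min(candidates, key=f)
print(f"global max at x = {gmax:.4f}, f = {f(gmax):.4f}")
print(f"global min at x = {gmin:.4f}, f = {f(gmin):.4f}")
```

Running this reproduces the worked answer: the only interior critical point is π/2 ≈ 1.5708, a local (and global) maximum with f(π/2) = 1, while the global minimum sits at the endpoint x = 0 with f(0) = 0, since f(3) = sin²(3) ≈ 0.02 is still positive.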
190685
https://www.sciencedirect.com/topics/neuroscience/left-atrial-appendage
Left Atrial Appendage - an overview | ScienceDirect Topics
Left Atrial Appendage (subject area: Neuroscience)
The left atrial appendage (LAA) is a remnant of the embryonic left atrium located in the left atrioventricular sulcus. Its anatomy is complex, with multiple lobes and a narrow junction connecting it to the left atrium. The LAA acts as a decompression chamber during periods of increased atrial pressure and is involved in the secretion of ANF. Notably, the LAA is a common site of thrombus formation in patients with atrial fibrillation, with a correspondingly increased risk of stroke.
AI-generated definition based on: Essential Interventional Cardiology (Second Edition), 2008
Chapters and Articles
Chapter: Percutaneous closure of patent foramen ovale, atrial septal defects and the left atrial appendage. 2008, Essential Interventional Cardiology (Second Edition). Stephan Windecker MD, Bernhard Meier MD FESC FACC
Left atrial appendage
The trabeculated left atrial appendage (LAA) is a remnant of the embryonic left atrium, whereas the smooth-walled left atrial cavity is formed by the outgrowth of the pulmonary veins.
The LAA is lined by endothelium and contains pectinate muscles, which run largely parallel to each other and give rise to the trabeculated surface.17 The LAA lies anterolaterally in the left atrioventricular sulcus and is in close contact with the pulmonary artery superiorly and the left ventricular free wall inferiorly. The anatomy of the LAA is rather complex, with a windsock-like configuration consisting of multiple lobes and a narrow junction, which is connected to the left atrium (Figs. 34.8 and 34.9).18 The size of the LAA varies considerably, with an orifice measuring 5–27 mm in diameter and an LAA length measuring 16–51 mm. The LAA has been considered a decompression chamber at times of increased atrial pressure due to its high distensibility, its anatomical location high in the left atrium, and its ability to secrete ANF. Atrial fibrillation affects remodelling not only of the left atrium but also of the LAA. Thus, LAA casts of patients with atrial fibrillation have been found to be more voluminous, with larger orifices and fewer branches, compared with patients in normal sinus rhythm. In addition, reduced appendage Doppler flow velocities and LAA ejection fraction have been observed in patients with atrial fibrillation. These pathological changes result in stasis and predispose to thrombus formation within the LAA cavity (Fig. 34.10). Of note, transesophageal echocardiographic studies revealed that >90% of all thrombi related to atrial fibrillation originate from the LAA.19 Unfortunately, strokes related to LAA thrombus embolism are larger and more disabling than strokes of other etiologies, presumably because of the relatively large thrombus size nested within the LAA cavity.
Review article: The clinical anatomy of the left atrial structures used as landmarks in ablation of arrhythmogenic substrates and cardiac invasive procedures. 2021, Translational Research in Anatomy. Damian Dudkiewicz, ... Mateusz K. Hołda
8 Left atrial appendage
The left atrial appendage is a remnant of the primitive atrium that protrudes from the postero-lateral aspect of the left atrium. It is important for heart rate control and maintaining atrial pressure. Furthermore, it also plays an important role in cardiac thrombogenesis and arrhythmogenesis. Several morphological factors are responsible for the increased thrombogenicity of the left atrial appendage. Firstly, it is a multi-lobular structure with rich trabeculations, a small orifice and a narrow neck, ideal for thrombus formation. Secondly, the appendage may have electrical activity which contributes to atrial fibrillation. The left atrial appendage comes in different shapes and sizes, and its thrombogenic potential is closely related to its morphology [44,45]. Wang et al. developed a classification system that divides the appendages into four types: a chicken wing type, a cauliflower type, a cactus type and a windsock type. Some shapes are less pathogenetic than others. For example, the chicken wing morphology is significantly less likely to cause thromboembolic events than other shapes. On the other hand, the cauliflower type is an independent predictor of stroke. Unfortunately, Wang's classification has many discrepancies in both imaging and cadaveric studies. It is believed that the classification system is not well replicable and cannot accurately predict the correlation between different types of appendages and their risk factor for stroke.
Recently, a simple classification system was designed to help estimate different thrombogenic properties based on the left atrial appendage shape. Three different appendage body types were distinguished: type I, the cauliflower (present in 36.5% of cases); type II, the chicken wing (present in 37.5% of cases); and type III, the arrowhead (present in 26.0% of cases) (Fig. 4). Interestingly, the total volume and the orifice sizes were similar between appendage types. It was shown that age significantly affects the size of the left atrial appendage. It causes appendage enlargement through the progressive transformation of the orifice geometry from a round to a more oval-shaped opening. Changes in the shape of the orifice may have negative implications for interventions: irregular appendage openings can complicate transcatheter procedures, since there may be a device mismatch and residual leaks.
Fig. 4. Three-dimensional reconstructions segmented from contrast-enhanced computed tomography of the heart showing a representative example of each left atrial appendage (LAA) type (Mimics Innovation Suite 22, Materialise).
Review article: Embolic stroke of undetermined source: beyond atrial fibrillation. 2022, Neurología (English Edition). A. Arauz, ... A. Baranchuk
Morphology of the left atrial appendage
The left atrial appendage (LAA) is an embryonic remnant of the primordial left atrium that acts as a reservoir during conditions of fluid overload.25 It is the main source of cardiac thrombi in patients with AF, due to the blood stasis caused by its morphology.50 Manning et al.51 found that up to 98% of all atrial thrombi formed during AF originated in the LAA.
In a multicentre retrospective study of 359 patients with AF undergoing brain and cardiac MRI studies, Anselmino et al.50 classified LAA morphology into 4 types: chicken wing, cauliflower, cactus, and wind sock. Silent cerebral ischaemic lesions were detected in 295 patients (84.8%), with a median of 23 lesions. The population under study was stratified by quartiles according to the number of lesions: ≤ 6, 7-23, 24-43, and ≥ 44. An association was found between the number of silent cerebral ischaemic lesions and LAA morphology. Cauliflower morphology has been associated with greater numbers of lesions.26,50 Similarly, Di Biase et al.25 found that chicken-wing LAAs are associated with a 79% reduction in the risk of stroke or TIA (OR: 0.21, 95% CI, 0.05-0.91; P=.036). Previous studies suggest that non-chicken-wing LAA may increase the risk of embolic events in patients with ESUS, constituting an echocardiographic marker of risk of recurrence.25,26
Chapter: Intracardiac Echocardiography for Electrophysiology. 2014, Cardiac Electrophysiology: From Cell to Bedside (Sixth Edition). Mathew D. Hutchinson, David J. Callans
Left Atrial Appendage (LAA) Visualization
Characterization of the LAA is routinely performed before LA ablation in patients with inadequate preoperative anticoagulation and/or persistent AF. The advent of LAA occlusion devices provides another potential niche for ICE imaging. The LAA can be viewed with ICE from several different imaging planes: (1) from the right atrium across the atrial septum; (2) from the left atrium; (3) from the coronary sinus; or (4) from the pulmonary artery.
The recent Intra-Cardiac Echocardiography–guided Cardioversion to Help Interventional Procedures (ICE-CHIP) study prospectively compared LAA imaging with transesophageal echo (TEE) versus phased array ICE; the study found incomplete LAA imaging with ICE in 15% of patients, as well as a lower sensitivity to detect LAA thrombus compared with TEE.3 The comparative image quality in ICE-CHIP was potentially biased by the exclusive use of a right atrial imaging plane with ICE. Other reports have suggested that the aforementioned alternative imaging planes allow imaging more proximate to the LAA, and thus provide enhanced tissue characterization.
Chapter: Masses. 2018, Intraoperative and Interventional Echocardiography (Second Edition). Donald C. Oxorn MD, Catherine M. Otto MD
CASE 11-1 Left atrial appendage
TEE is often requested before electrical cardioversion or catheter ablation for atrial fibrillation to evaluate the left atrial appendage for the presence of thrombus. Adequate visualization of the atrial appendage requires at least two orthogonal views, using a high-frequency (5 MHz or higher) transducer and with the image zoomed to show the appendage anatomy. This case shows normal views of the left atrial appendage in a patient undergoing coronary artery bypass grafting surgery.
Fig 11.1. In this midesophageal biplane view, the atrial appendage is visualized in the two-chamber plane (left) with the mitral valve obliquely cut. Note the normal curved triangular shape of the atrial appendage. Using biplane mode, the line for the second image plane is aligned in the center of the atrial appendage to show the orthogonal view (middle panel). In the far right image, the normal left atrial appendage has been opened.
White arrows (as well as black arrows in the middle TEE image) indicate normal pectinate muscles or trabeculations; it is important to recognize normal variation in the size and appearance of these so as not to mistake them for atrial thrombi.
Fig 11.2. With the probe rotated to the patient’s left side, the left upper pulmonary vein as well as a prominent ridge of tissue between the appendage and left upper pulmonary vein (red arrows) are seen. This ridge can be very prominent in some patients and may cause reverberation artifacts that might be mistaken for thrombus in the appendage. A small pericardial effusion (PE) around the lateral wall of the appendage is noted. In the right frame, the corresponding 3D image is seen; in the video, the appendage is noted to be fibrillating. Asterisk indicates pectinate muscle.
Chapter: Cardiac Anatomy and Pathology. 2017, Clinical Cardiac Pacing, Defibrillation and Resynchronization Therapy (Fifth Edition). Siew Yen Ho
The Left Atrium
As with the right atrium, the left atrium has three components and shares its septum. The atrial appendage is characteristically a small finger-like cul-de-sac in human hearts where thrombi may form.33 In most hearts the appendage extends from between the anterior and lateral walls of the left atrium, and its tip is directed anterosuperiorly, overlapping the left border of the right ventricular outflow tract or the pulmonary trunk and the main stem of the left coronary or the circumflex artery (Fig. 1-8A, B). It is not uncommon to find the tip of the appendage directed laterally and backward, although in a few hearts the tip portion passes behind the arterial pedicle to sit in the transverse pericardial sinus. The external appearance of the left atrial appendage is that of a slightly flattened tube with crenellations, often with one or more bends, and it often terminates in a pointed tip.
Due to its slightly flattened shape, the lower surface usually overlies the left ventricle, whereas the upper surface is beneath the fibrous pericardium. There is no terminal crest. The pectinate muscles are frond-like muscle bundles mostly confined to the endocardial surface of the atrial appendage. The junction of the appendage with the body of the left atrium, also described as the os or orifice, is usually oval-shaped. When viewed from within the left atrial chamber, the os is situated anterior to the orifices of the left pulmonary veins but is separated from them by a ridge-like structure (Fig. 1-8B, C). In reality, the lateral ridge is a slight infolding of the left atrial wall that is filled with epicardial tissues including the remnant of the vein of Marshall, nerve bundles, and in some hearts, the sinus node artery. In contrast to the right atrium, the body of the atrium including the septal component is fairly smooth-walled. The venous component receives the pulmonary veins and the vestibular component surrounds the mitral orifice, but there are no anatomic landmarks that mark the border between the two, although frequently a few pits or crevices are seen in the inferior wall at the border zone. Although mainly smooth on the endocardial surface, the atrial walls are composed of differently aligned myocardial bundles with marked regional variations in thickness.11,28,34 There are usually four pulmonary veins entering the left atrium, but there are also considerable variations.35 The transition between atrial wall and venous wall is smooth. When the veins are funnel-shaped as they enter the atrium, it is difficult to define the orifices precisely. It is common to find extensions of atrial muscle on the adventitial side of the veins, especially around the superior veins. 
The orifices of the right pulmonary veins are alongside the plane of the atrial septum, with the orifice of the right upper vein lying behind the entrance of the superior caval vein into the right atrium (see Fig. 1-8C). The course of the coronary sinus is related to the inferior aspect of the left atrial wall on its epicardial surface, but the sinus is not located immediately adjacent to the hinge line of the leaflets of the mitral valve (Fig. 1-8D). It has a variable relationship with the mitral annulus and with the circumflex artery. It runs along the epicardial aspect of the vestibular portion of the atrial wall but at varying distances from the mitral annulus along its course.36
Chapter: Heart and Neurologic Disease. 2021, Handbook of Clinical Neurology. Ashwin Bhirud, Smit Vasaiwala
Surgical left atrial appendage closure
The left atrial appendage (LAA) has been identified as the origin of greater than 90% of thrombi in patients with NVAF (Blackshear and Odell, 1996). Surgical closure of the LAA by excision or exclusion has become a standard component of the MAZE procedure to treat AF during concomitant cardiac surgery. Observational and small randomized studies suggest significant improvement in stroke risk without OAC (Cox et al., 1999; Tsai et al., 2014; Caliskan et al., 2017). However, incomplete surgical closure is a persistent issue that complicates study results and practical management. Up to 60% of surgical left atrial appendage closure (LAAC) with excision or exclusion results in incomplete closure, which has been associated with undiminished risk of stroke (Kanderian et al., 2008). An RCT of LAA amputation or ligation in AF patients undergoing cardiac surgery, the Left Atrial Appendage Occlusion Study III (LAAOS III) trial, is underway.
Chapter: Heterotaxy and Isomerism of the Atrial Appendages. 2011, Diagnosis and Management of Adult Congenital Heart Disease (Second Edition). Elisabeth Bédard, ... Hideki Uemura
The atrium
Morphologically right and morphologically left atria can be differentiated by studying the anatomy of their atrial appendages and the morphology of the atrial septum1:
• Anatomy of the atrial appendages (Fig. 53-1)
• The morphologically right atrial appendage is a broad structure, and the pectinate muscles extend around the muscular AV vestibules.3
• The morphologically left atrial appendage is a narrow finger-shaped structure to which the pectinate muscles are confined; there is continuity between the vestibule of the AV junction and the smooth-walled venous component of the atrium, uninterrupted by the presence of pectinate muscles.3
• Morphology of the atrial septum
• The morphologically right side of the atrial septum contains the rim of the oval fossa, whereas its flap is on the left side.
Chapter: LAA Line and Appendage Amputation/Exclusion. 2017, Surgical Treatment of Atrial Fibrillation. Jonathan M. Philpott, ... Ralph Damiano
Abstract
The importance of a routine and confident left atrial appendage (LAA) amputation and closure or exclusion cannot be overstated. It is a foundational step in any Maze-III/IV, and a critical element in any patient with a history of AF, as it yields a tremendous reduction in potential thromboembolic events and strokes for the life of each patient.
The electrical significance of the appendage and its potential to support reentry circuits in its body or base are illustrated, and the electrical goals of interrupting them with a line of scar that courses down the base of the appendage and anchors on the left PVI are reviewed. Step-by-step details are then presented, highlighting good technical tips for a solid and uncomplicated hand-sewn closure or the use of an FDA-approved LAA exclusion clip. These techniques should allow the learning surgeon to proceed with the creation of the LAA circuit interruption line and exclusion of the appendage base safely and routinely.
Chapter: Transcatheter Therapies for Structural Heart Diseases. 2018, Practical Cardiology. Hamidreza Sanati MD
Left Atrial Appendage Occlusion
Stroke is the third leading cause of death in developed countries and the number one reason for disability worldwide. A great proportion of cardioembolic strokes occur in patients with atrial fibrillation (AF), and AF-related strokes are associated with worse outcomes compared with non–AF-related strokes,11 signifying the importance of effective preventive strategies. The left atrial appendage (LAA) is thought to be the source of thrombus and emboli in the majority of patients with AF who have a stroke.12,13 Anticoagulation with warfarin is very effective, reducing the risk of stroke compared with aspirin and with the combination of aspirin and clopidogrel, and is therefore considered the standard anticoagulant agent in these patients. Unfortunately, warfarin-related bleeding complications are common and sometimes serious, including intracranial hemorrhage. In addition, lack of patient adherence is a common problem, and therapeutic levels cannot be appropriately maintained in many cases.
Novel oral anticoagulants (e.g., dabigatran, rivaroxaban, apixaban) have recently been proposed to overcome the shortcomings of conventional anticoagulant therapy. They are at least noninferior to warfarin in preventing stroke, but the risk of bleeding, especially major bleeding (other than intracranial hemorrhage), has not been reduced.14 LAA occlusion is now feasible and can be performed with a high success rate and a low rate of complications (Fig. 21.3). Several types of devices are now in use; of these, the Watchman device (Boston Scientific, Natick, MA) has been approved by the FDA. According to the latest guidelines, percutaneous LAA occlusion may be considered in patients with a high stroke risk and contraindications to long-term oral anticoagulation (class IIb).15 The categories of patients with nonvalvular AF in whom candidacy for LAA occlusion might be evaluated are summarized in Box 21.2.16
190686
https://byjus.com/physics/drag-coefficient/
Drag force is the resistance force of a fluid: the force that opposes the motion of a body moving through it. This article discusses the concept of the drag coefficient in detail.

What is Drag Force?

The resistance force of a fluid is known as drag force; it always acts opposite to the motion of an object moving submerged in the fluid. Drag force can therefore also be defined as the force that resists the motion of a body through a fluid, and its fundamental nature is to act in the direction opposite to the flow velocity. Air resistance is an example of drag force because it resists the motion of an object falling from a height, which is why the object approaches a terminal speed. Drag force is also reactive in nature, similar to kinetic friction: it points in the direction opposite to the motion of the object through the fluid, and it exists only while the object is moving.

Drag force can be divided into two types: form drag and skin drag. When fluid is pushed out of the way by an object in motion, the resistance produced is known as form drag. Skin drag, by contrast, is caused by the sliding of the fluid along the surface of the moving object, and it is a kinetic frictional force.

The drag force is directly proportional to the density of the fluid, the square of the velocity, the cross-sectional area and the drag coefficient.

The Formula for Drag Force

Mathematically,

D = (C_d ρ A V²)/2

where D denotes the drag force, C_d the drag coefficient, ρ the density of the medium in kg m⁻³, V the velocity of the body in m s⁻¹ and A the cross-sectional area in m².

What is the Drag Coefficient?
When an object moves through a fluid, the coefficient used to compute its resistance is known as the drag coefficient, denoted C_d. The drag coefficient is dimensionless, which makes it useful for calculating aerodynamic drag and for comparing the effects of shape, inclination and flow conditions in aerodynamics. Broadly, blunt and bulky objects have a high drag coefficient, while streamlined objects have a lower one. To see this, visualise a teardrop-shaped object: it has a low C_d, approximately 0.06, because the air flow remains attached as it passes around the body. By contrast, a flat surface perpendicular to the airflow creates a large region of turbulent air, resulting in a much higher C_d.

How are drag coefficients calculated?

Drag coefficients are used to calculate the aerodynamic or hydrodynamic force on an object via the drag equation:

F_d = (1/2) ρ v² C_d A

where:
F_d denotes the drag force (N)
ρ denotes the density (kg/m³)
v denotes the velocity (m/s)
C_d denotes the drag coefficient
A denotes the frontal area (m²)

If you know the drag force on an object at a certain speed, you can rearrange the drag equation to compute the drag coefficient. Once you have established the drag coefficient for a specific geometry, you can use the equation again to recalculate the drag force for different sizes and velocities; this method is particularly useful for sizing engines or battery capacities. However, it is important to remember that the drag coefficient varies with the Reynolds number. The Reynolds number, another dimensionless quantity, is the ratio between the inertial forces and the viscous forces in a fluid. It fundamentally describes how the behaviour of the flow changes with pressure, velocity, temperature and the type of fluid.
For instance, the Reynolds number changes as you move along a race car: the Reynolds number at the rearmost part of the car differs from the Reynolds number in a radiator duct.

How are drag coefficients used?

Drag coefficients allow aerodynamicists to compare the aerodynamic efficiency of objects irrespective of their size or velocity. This means you can compare the aerodynamics of a race car with those of a building: even though they are very different, both have a normalised drag coefficient. Throughout the design process, drag coefficients play a major role in determining which design has the highest performance; you can rank candidate designs in order of drag coefficient. Inspiration can also be drawn from other aerodynamic shapes with low drag coefficients, irrespective of the industry they come from: Mercedes once designed a road car inspired by the hydrodynamics of a fish. So whether you are helping a biker attain a top speed or developing a drone that can fly as far as possible on a single charge, your objective will be to reduce the drag coefficient.

Frequently Asked Questions – FAQs

What is the drag coefficient?
When an object moves through a fluid, the coefficient used to compute its resistance is known as the drag coefficient, denoted C_d.

What is drag force?
The resistance force of a fluid is known as drag force; it always acts opposite to the motion of an object moving submerged in the fluid.

What is kinetic friction?
The force acting between two surfaces moving relative to each other is known as kinetic friction.

What is static friction?
The friction present between two or more objects that are not moving with respect to each other is known as static friction.

What is the formula of drag force?
D = (C_d × ρ × V² × A)/2

where:
D denotes the drag force
C_d denotes the drag coefficient
ρ denotes the density of the medium in kg m⁻³
V denotes the velocity of the body in m s⁻¹
A denotes the cross-sectional area in m²
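As a quick sanity check on the drag equation above, here is a short Python sketch. The car-like numbers (C_d = 0.30, frontal area 2.2 m², speed 30 m/s, sea-level air density 1.225 kg/m³) are illustrative assumptions, not values from the article:

```python
# Drag equation from the article: F_d = (1/2) * rho * v^2 * C_d * A,
# plus the rearranged form used to recover C_d from a measured force.

def drag_force(c_d, rho, v, area):
    """Drag force (N) for drag coefficient c_d, fluid density rho (kg/m^3),
    speed v (m/s) and frontal area (m^2)."""
    return 0.5 * rho * v**2 * c_d * area

def drag_coefficient(f_d, rho, v, area):
    """Rearranged drag equation: C_d = 2 F_d / (rho v^2 A)."""
    return 2.0 * f_d / (rho * v**2 * area)

rho_air = 1.225  # assumed sea-level air density, kg/m^3
f = drag_force(c_d=0.30, rho=rho_air, v=30.0, area=2.2)
print(round(f, 1))                                        # drag force in N
print(round(drag_coefficient(f, rho_air, 30.0, 2.2), 2))  # recovers 0.30
```

As the article notes, once C_d is fixed for a geometry, the same function can be reused at other speeds and sizes.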
190687
https://math.stackexchange.com/questions/1337665/find-all-complex-numbers-z-abi-such-that-z3-8
Find all complex numbers z = a + bi such that z³ = 8. - Mathematics Stack Exchange
Find all complex numbers z = a + bi such that z³ = 8.

Asked 10 years, 3 months ago. Modified 10 years, 3 months ago. Viewed 5k times. Score: 1.

Find all complex numbers z = a + bi such that z³ = 8. I'll be happy if someone tells me what steps I should start with to solve this problem.

Tags: complex-numbers

asked Jun 24, 2015 at 14:13 by Музаффар Шакаров; edited Jun 24, 2015 at 14:21 by Henry

Comments:
vadim123 (Jun 24, 2015 at 14:15): Hint: write a = 2e^{iθ}.
Музаффар Шакаров (Jun 24, 2015 at 14:16): I don't understand it.
user190080 (Jun 24, 2015 at 14:17): z = a + bi, just a typo, and then z³ = 8; just check your exercise.
vadim123 (Jun 24, 2015 at 14:17): The problem is that you are using the letter a to mean two different things when you write a = a + bi.
Музаффар Шакаров (Jun 24, 2015 at 14:18): Oh, OK, my apologies.

4 Answers

Answer (score 2): One approach: You have z³ − 8 = 0.
Factor this as (z − 2)(z² + 2z + 4) = 0 and then solve in the usual ways. (Henry, answered Jun 24, 2015 at 14:20)

Answer (Ebearr, score 2, answered Jun 24, 2015 at 14:14):
Just compute (a + bi)³ and set the real part equal to 8 and the imaginary part equal to 0. Then you have two equations and two unknowns.
Henry (Jun 24, 2015 at 14:17): It helps that one of the solutions is obvious.
Ebearr (Jun 24, 2015 at 14:21): @Henry Agreed, in this case it does.

Answer (lhf, score 0, answered Jun 24, 2015 at 14:26):
Hint: Write z = 2w. Then z³ = 8 iff w³ = 1. Can you solve this?
Музаффар Шакаров (Jun 24, 2015 at 14:29): It seems that z = 2, but can we get another two answers from this?
lhf (Jun 24, 2015 at 14:30): I assumed you knew about roots of unity...
Answer (Sikander, score 0, answered Jun 24, 2015 at 14:28; edited Jun 24, 2015 at 15:02 by TravisJ):
z³ − 8 = 0
(z − 2)(z² + 2z + 4) = 0
z = 2 or z = −1 ± √(−3) = −1 ± i√3
The quadratic factor may be solved using the quadratic formula.
Ebearr (Jun 24, 2015 at 14:29): This is already an answer.
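The answers above (factoring, or writing z = 2w with w a cube root of unity) can be checked numerically. This standard-library Python sketch enumerates the three roots via the polar form z = 2e^{2πik/3}:

```python
# The three cube roots of 8 are 2 and -1 ± i*sqrt(3).
import cmath

# z = 2 * w, where w ranges over the three cube roots of unity
roots = [2 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

for z in roots:
    assert abs(z**3 - 8) < 1e-9  # each root satisfies z^3 = 8

print([complex(round(z.real, 6), round(z.imag, 6)) for z in roots])
# one root is real (z = 2); the other two are -1 ± i*sqrt(3)
```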
190688
https://brainly.com/question/18744889
Using the graph of f(x) and g(x), where g(x) = f(k·x), determine the value of k.

Graph of two lines:
f(x) passes through (3, 5) and (4, 10)
g(x) passes through (1/2, 0) and (1, 10)

A. 4
B. 1/4
C. −1/4
D. −4

Asked by richardtat1615, 10/28/2020

Community Answer

The value of k, where g(x) = f(k·x), is determined by comparing corresponding points on the graphs of both functions. Given that g(1) = f(4), we conclude that k·x = 4 when x = 1, leading to the calculation that k = 4.

Explanation

To determine the value of k where g(x) = f(k·x), we can look at the two points given for each function on their respective graphs. For f(x) we have the points (3, 5) and (4, 10), and for g(x) we have the points (1/2, 0) and (1, 10). First, note that for g(x), when x = 1, g(x) = 10, which corresponds to the second point of f(x), where x = 4 and f(x) = 10. Because g(1) = f(4), k must be such that k·x = 4 when x = 1. So we can set up the equation k·x = 4 and substitute x = 1 to find k:

1·k = 4

Solving for k:

k = 4

Therefore, the value of k is 4.
This means the graph of g(x) is a horizontal compression of the graph of f(x) by a factor of 4. (Answered by peter501)

Textbook & Expert-Verified Answer

The value of k in the equation g(x) = f(k·x) is 4. This is determined by matching the outputs of the two functions at corresponding inputs. Thus, the correct answer is A. 4.

Explanation

To find the value of k in the equation g(x) = f(k·x), we analyse the given points of the functions f(x) and g(x).

Step 1: Identify the points. For f(x), the points provided are (3, 5) and (4, 10). For g(x), the points are (1/2, 0) and (1, 10).

Step 2: Analyse corresponding values. When x = 1 in g(x), we have g(1) = 10. Now look at f(x): we need the value of x at which f(x) equals 10. From the points of f(x), at x = 4, f(4) = 10.

Step 3: Set up the equation. To relate the inputs of g(x) and f(x), we use g(1) = f(k·1). This gives k·1 = 4, which simplifies to k = 4.

Conclusion: The value of k is 4, indicating that the graph of g(x) is a horizontal compression of the graph of f(x) by a factor of 4.

Examples & Evidence

For example, if we consider another constant, say k = 2, then g(x) would equal f(2·x), which compresses the graph of f(x) differently than k = 4 does. Similarly, if k were 1/2, we would see a stretch rather than a compression in the graph of g(x). The relationship between g(x) and f(x) through the scaling constant k means their output values must correspond at matching inputs, which is confirmed by the specified points.
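A small sketch of the reasoning above: since each graph is described as a line through two given points, we can reconstruct f and g as linear functions (an assumption consistent with the plotted points) and confirm that g(x) = f(4x) holds everywhere, not just at x = 1:

```python
# Reconstruct the two lines from their given points and verify g(x) = f(4x).

def line_through(p, q):
    """Return the linear function through points p and q."""
    (x1, y1), (x2, y2) = p, q
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

f = line_through((3, 5), (4, 10))    # f(x) = 5x - 10
g = line_through((0.5, 0), (1, 10))  # g(x) = 20x - 10

k = 4
for x in [0.25, 0.5, 1.0, 2.0]:
    assert abs(g(x) - f(k * x)) < 1e-9  # g(x) = f(4x), so k = 4
print("k =", k)
```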
190689
https://www.cuemath.com/questions/if-a-polynomial-has-three-terms-x2-12x-36-which-factoring-method-can-be-considered/
If a polynomial has three terms, x² + 12x + 36, which factoring method can be considered?

Solution:

The perfect square trinomial formula helps in factoring this kind of trinomial. A perfect square trinomial is one that is obtained by squaring a binomial expression.

Given: the polynomial has three terms. Applying the perfect square trinomial method to the expression x² + 12x + 36, note that the first and last terms are perfect squares: x² is the square of x, and 36 is the square of 6. By the formula

a² + 2ab + b² = (a + b)²

the expression can be written as

x² + 12x + 36 = (x + 6)²

Other methods can also be used to factor the given polynomial, such as splitting the middle term. Therefore, by the perfect square trinomial method, the expression factors as (x + 6)².

Summary: If a polynomial has three terms, x² + 12x + 36, the perfect square trinomial method can be considered to factor the expression.
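As a check on the factorisation, this short Python sketch expands (x + 6)² by multiplying coefficient lists (a hand-rolled helper, not a library routine) and compares the result with x² + 12x + 36:

```python
# Verify that (x + 6)^2 expands to x^2 + 12x + 36.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [a0, a1, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x_plus_6 = [6, 1]                      # 6 + 1*x
square = poly_mul(x_plus_6, x_plus_6)  # (x + 6)^2
print(square)                          # [36, 12, 1]  ->  x^2 + 12x + 36
assert square == [36, 12, 1]
```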
190690
https://www.college-physics.com/book/mechanics/horizontally-launched-projectiles/
Horizontally Launched Projectiles - College Physics

Introduction
A horizontal launch is one in which the projectile is thrown parallel to the horizon, so that it moves with a horizontal takeoff speed only under the influence of its own weight.

Experiment
A projectile is launched horizontally from a hill (h0 = 80 m) with the initial velocity v0 = 40 m/s. It moves in the launch direction while falling faster and faster towards the ground.
Results
Launching a projectile horizontally results in a combination of a uniform motion along the x-axis and a uniformly accelerated motion along the y-axis. The movement (trajectory) can therefore be presented in a y(x) diagram.

Determining the trajectory
To derive the trajectory, the following laws are needed:

Uniform motion: x = v0·t
Uniformly accelerated motion: y = h0 − (g/2)·t²

The equation for the x-axis is solved for t and substituted into the equation for the y-axis:

x = v0·t  ⟹  t = x/v0

y(x) = h0 − (g/2)·t² = h0 − (g/2)·(x/v0)² = h0 − g·x²/(2·v0²)

Characteristics
From the trajectory we can determine some characteristics of the horizontal launch: the maximum range and the falling time.

Maximum range
The maximum range is reached when the body hits the ground, that is, when y(x) equals zero:

0 = h0 − g·x_max²/(2·v0²)
h0 = g·x_max²/(2·v0²)
x_max² = 2·v0²·h0/g
x_max = v0·√(2·h0/g) = v0·t_F

Falling time
The body falls until it hits the ground, that is, when y(t) equals zero:

0 = h0 − (g/2)·t_F²
h0 = (g/2)·t_F²
t_F² = 2·h0/g
t_F = √(2·h0/g)

Velocity-time
The velocity along the x-axis, v0, is constant, while the velocity along the y-axis increases uniformly because of the gravitational acceleration.
Uniform motion: v_x = v0 = const.
Uniformly accelerated motion: v_y = −g·t

The instantaneous velocity in the direction of flight is determined from the velocity components using the Pythagorean theorem:

v(t) = √(v_x² + v_y²) = √(v0² + g²·t²)
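Using the example values from the experiment (h0 = 80 m, v0 = 40 m/s), the formulas for the trajectory, falling time, maximum range and speed can be checked numerically. This is a small Python sketch; g = 9.81 m/s² is an assumption, since the page does not fix a numeric value:

```python
import math

# Horizontally launched projectile, using the example values from the text.
g = 9.81   # gravitational acceleration in m/s^2 (assumed value)
h0 = 80.0  # launch height in m
v0 = 40.0  # horizontal launch speed in m/s

t_f = math.sqrt(2 * h0 / g)  # falling time t_F = sqrt(2*h0/g)
x_max = v0 * t_f             # maximum range x_max = v0 * t_F

def y(x):
    """Trajectory y(x) = h0 - g*x^2 / (2*v0^2)."""
    return h0 - g * x**2 / (2 * v0**2)

def v(t):
    """Instantaneous speed v(t) = sqrt(v0^2 + g^2*t^2)."""
    return math.sqrt(v0**2 + (g * t)**2)

print(f"falling time t_F = {t_f:.2f} s")    # about 4.04 s
print(f"maximum range    = {x_max:.1f} m")  # about 161.5 m
print(f"impact speed     = {v(t_f):.1f} m/s")
assert abs(y(x_max)) < 1e-6  # the trajectory reaches the ground exactly at x_max
```

Note that the falling time depends only on h0 and g, not on v0: the horizontal and vertical motions are independent.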
190691
https://www.saem.org/about-saem/academies-interest-groups-affiliates2/cdem/for-students/online-education/m4-curriculum/group-m4-approach-to/headache
Headache

Author: Christopher Fowler, DO, University of Arkansas for Medical Sciences, Little Rock AR
Editor: Matthew Tews, DO, MS, Medical College of Georgia at Augusta University
Last Updated: September, 2019

Case Study
A 35 yo female with a past medical history of hypertension presents with two hours of headache that was gradual in onset but is now 10/10. She reports a sharp, pulsating pain in the front portion of her head. She has had several prior headaches, but none has ever been this bad. She denies any family medical history. She has tried Tylenol at home with little improvement in her symptoms. She denies fever or neck stiffness; however, she has been persistently nauseous and has had two episodes of vomiting. On exam she is resting quietly in a darkened room. Cranial nerves are grossly intact and she demonstrates no focal neurologic deficits. The remainder of her exam is unremarkable.

Objectives
By the end of this module, the student will be able to:
Understand the difference between primary and secondary headache disorders
List the emergent differential diagnosis of headache
Understand the "Red Flag" symptoms for headache
Explain the importance of a complete neurological exam in the evaluation of all headache patients
Recognize the causes of secondary headaches
Describe treatment regimens for primary headache disorders

Introduction
Headache is a common Emergency Department complaint. The causes of headache range from benign to life threatening, and these patients can deteriorate quickly, which makes thorough evaluation critical. Nearly 47% of adults report headaches at some time in their life, and headache accounts for approximately 2.1 million ED visits per year (2-4% of all ED visits).
While the differential for headache is large, a systematic approach to the history and physical exam allows for effective evaluation of these patients and determination of necessary diagnostic testing and therapeutic interventions.

Initial Actions and Primary Survey
As with all patients presenting to the Emergency Department, assessing the ABCs is the first priority. The majority of patients presenting to the ED with headache will not require immediate intervention for airway, breathing or circulation. The primary survey should include a brief assessment of gross neurological function and mental status. The Glasgow Coma Scale (GCS) can be an effective tool, as it provides a measurement that can be followed over time during re-evaluation. Patients with headache and abnormal mental status may require immediate intervention. The primary survey should also assess for signs or symptoms of CNS infection with sepsis. Additionally, all patients presenting with headache following trauma should undergo a full trauma assessment with cervical spine immobilization. These patients require frequent re-evaluation of their neurologic state and mental status, as well as of the effectiveness of any interventions.

Presentation
Patients with headache can present with a wide variety of complaints, associated symptoms, levels of pain and durations of symptoms. Obtaining a thorough history is crucial to differentiating causes of headache. Seek to understand the circumstances of the onset of the pain. Below are some key historical features that should be obtained during the encounter:
Was the headache sudden or gradual in onset?
Was it associated with activity/exertion, or did it begin at rest?
Are there aggravating or alleviating factors?
Is there a family history of headaches or vascular abnormalities?
Associated symptoms may include fever, neck pain/stiffness, photophobia, extremity or facial numbness/weakness, vision changes, speech or gait changes, and nausea/vomiting.

There are several high-risk clinical features, or "Red Flag Symptoms". Positive findings in any of these should prompt a more detailed evaluation:
New onset
Neurological findings
Sudden onset or worst at onset
Fever or immune compromise (HIV/AIDS, cancer)
Elderly
Progressive headache
Jaw claudication, muscle aches, temporal artery pain
Multiple patients with headache (CO toxicity)
Pregnancy or post pregnancy
Clotting disorder (primary or acquired)
Trauma
Eye pain
Cervical manipulation with facial pain or sudden onset headache
Dizziness with headache

Many patients will report a history of prior headaches and be able to explain whether and how their current headache differs from prior ones. Inquire about what treatments they have attempted at home, if any. Inquire whether others in the home have similar symptoms or whether they recently started using the heater or furnace, both of which suggest carbon monoxide poisoning. A thorough neurological exam is essential for all patients with headache. Include testing of motor and sensory function, cranial nerves, reflexes, pronator drift, rapid alternating movements, finger-to-nose and heel-to-shin testing, the Romberg test, gait assessment and a mini mental status evaluation. Perform a complete pupillary and fundoscopic exam to assess for asymmetric pupils, findings suggestive of acute angle closure glaucoma (minimally reactive mid-dilated pupils with ciliary flush), or findings suggestive of increased intracranial pressure (papilledema or loss of spontaneous venous pulsations). In patients with possible temporal arteritis, assess for tenderness in the temporal area.

Differential Diagnosis
When assessing patients with headache, it is important to consider both the most common etiologies of headache and the life-threatening etiologies.
Broadly, headaches can be classified into two general categories: primary and secondary.

Primary Headache
Of headaches that are classified as primary, a large majority (nearly 90%) are migraine, tension or cluster headaches. The exact pathophysiology of these types of headache is poorly understood.

Migraine headaches are frequently episodic and may have preceding auras, visual disturbances, photophobia/phonophobia or scotomas (visual field defects). Many patients will report a history of similar headaches and may report that the headache is similar to prior episodes. Occasionally patients will present with neurologic deficits. If these symptoms have never been present with prior migraines, assume they are new and consider the diagnosis of acute CVA.

Tension headaches are common and have a variety of presentations based on the age and sex of the patient. Tension-type headaches are more common in females, are bilateral, and frequently radiate from the back, neck or shoulders to the top and sides of the cranium. Symptoms may be worse with stress, lack of sleep, position or movement. Tension headaches are generally gradual in onset and reach maximal intensity over hours to days. Many patients report ineffective management with typical over-the-counter medications.

Cluster headaches are typically located behind the eye and are generally exquisitely sharp and intense. These patients may have neurologic signs related to the cranial nerves, such as lacrimation, ptosis, miosis or facial sweating. These headaches are typically short-lived and limited in progression.

Other etiologies of primary headache are:
Fever-associated headache
Sinusitis
Temporomandibular joint disease
Trigeminal neuralgia

Secondary or Emergent Headaches
Secondary headaches are the result of an intracranial process causing the development of the headache. Individuals will often have high-risk features in their history.
The rapid assessment and diagnosis of these conditions is crucial, as there is potential for deterioration of the patient. The differential for emergent headaches is extensive, and some causes are considered in other chapters:
Subarachnoid hemorrhage
Epidural hemorrhage
Subdural hemorrhage
Intracranial hemorrhage
Stroke (although ischemic stroke uncommonly presents with headache)
CNS infection (meningitis/encephalitis/abscess)
CNS mass/increased intracranial pressure
Idiopathic intracranial hypertension (aka pseudotumor cerebri)
Venous thrombosis
Carbon monoxide poisoning
Acute angle closure glaucoma
Temporal arteritis

Diagnostic Testing
As with all testing done in the Emergency Department, diagnostic tests should be determined based on history and physical exam findings. There are some general guidelines for the evaluation of patients with headache. Computed tomography (CT) is often the initial imaging test of choice for evaluating headache in the ED. A non-contrast CT of the head is a quick method to evaluate for possibly emergent causes of a headache. Patients with suspected SAH, SDH, intraparenchymal hemorrhage or epidural hematoma should be evaluated with a CT head for evidence of bleeding (acute blood will show up brightly on CT). A CT head with contrast is rarely used in the evaluation of headaches caused by potential intracranial bleeding, as contrast can obscure the presence of blood. CT head imaging with contrast is used in patients who require evaluation for headaches related to vascular compromise, infection or a space-occupying lesion. CT head imaging should be considered in patients who present with "Red Flag" symptoms, have new onset headache or report changes in the nature of their headache. Additional testing should correlate with the suspected diagnosis. For example, if meningitis/encephalitis is suspected, a lumbar puncture with associated cerebrospinal fluid studies would be indicated.
Routine blood work (CBC, BMP, etc.) is likely to be less useful in headache diagnosis unless an infectious source is suspected or the patient is on anticoagulants. For patients with a history of headache who present with typical headache features, additional labs or imaging may be of low utility. As a general guideline, if you have a high index of suspicion for an emergent cause of the headache, additional work-up is often required.

Treatment
Treatment of headaches can also be broken down based on the final diagnosis. We will focus on the initial treatment options for primary headaches. Simply because the pain improved after medications does not mean that a benign process is present: patients with emergent headaches may report improvement or resolution of their pain and symptoms with medication, and a complete workup should still be considered. The stepwise approach to the treatment of headaches follows. First-line management is typically undertaken with oral analgesic agents. Oral medications are typically the fastest way to administer analgesia and are often effective. However, many patients will have already attempted these medications prior to presenting to the Emergency Department, and a majority will require intravenous administration of medications to achieve a sufficient level of relief. A brief discussion of various classes of intravenous medications follows.

Non-steroidal anti-inflammatory drugs (NSAIDs)
These include ibuprofen, naproxen, meloxicam and ketorolac. These medications interrupt the production of inflammatory and pain-inducing prostaglandins. Ketorolac is typically administered by the intramuscular (IM) or intravenous (IV) route, with higher efficacy achieved through the IV route. While not technically an NSAID, acetaminophen is a very effective medication for treatment; its mechanism of action is not completely understood.
Dopamine antagonists
These are a widely utilized class of medications in treating headaches. Agents such as prochlorperazine (Compazine), metoclopramide (Reglan) and haloperidol (Haldol) are common in the ED. These medications do carry the risk of extrapyramidal symptoms (EPS), including akathisia, acute dyskinesia, dystonic reactions and tardive dyskinesia. Co-administration of an anticholinergic medication such as diphenhydramine (Benadryl) is used to manage extrapyramidal symptoms. Dopamine antagonists have the greatest efficacy when administered by the IV route. They can be administered IM, though efficacy tends to decrease with this route, and oral administration tends to have the lowest efficacy in treating headaches.

Triptans
Triptans are a class of medications often used in managing migraine headaches. Medications like sumatriptan (Imitrex), rizatriptan (Maxalt) and zolmitriptan (Zomig) are commonly used as outpatient treatments. These medications are serotonin receptor agonists in the brain. They are also known as abortive medications because, when taken at the earliest sign of migraine onset, they can stop progression to a full migraine. They often are not utilized in the ED, since many patients will have had ongoing headaches and may have already tried them at home.

Additional medications
Other medications that can be used in managing acute headache include steroids, anti-epileptics, narcotics and ergotamines. Dexamethasone (Decadron) can be effective in preventing recurrence of primary headaches. Narcotic medications can be used if other modalities have not been effective, but they tend to be less effective, place patients at risk for rebound headaches and can lead to dependency issues.
Sometimes patients with headaches from causes such as cluster headaches or persistent migraines need admission and intravenous medications, such as high-dose steroids (cluster headaches) or Depakote (persistent migraines). If the headache cannot be resolved, a neurology consult may be indicated for additional management. For emergent headaches from a secondary cause, additional treatments will be indicated depending on the initial diagnosis; these are covered in the specified sections.

Pearls and Pitfalls
Obtain a thorough history and physical examination to help determine the etiology of the headache
Ask about "Red Flag" symptoms in any patient presenting with a headache
Consider secondary causes of headache if there are any differences in the characteristics of the headache in patients with a previous history of migraine or regular headaches
Obtain non-contrast CT imaging of the head when secondary causes of headache are suspected
Utilize first-line headache medications for patients who present with a primary headache
Consider admission for patients with a primary headache whose symptoms do not resolve with typical headache medications

Case Resolution
The patient has no focal deficits on exam. She has never previously been imaged for her headaches. Given the change in intensity of her headache and the infrequency of her headaches, a CT head without contrast was ordered. An IV was established and the patient was given haloperidol IV with 1000 mL of normal saline. After 30 minutes the patient reported significant improvement in her headache and remained without neurologic symptoms. She was discharged home with PCP follow-up and return instructions for worsening of her headache or the development of any new neurologic symptoms.
190692
https://www.teacherspayteachers.com/Product/Newtons-Second-Law-of-Motion-Worksheet-Printable-PDF-Distance-Learning-6026653
Newton's Second Law of Motion - Worksheet | Printable PDF & Distance Learning

Description
This worksheet contains various questions to help your students learn (or review) basic concepts about Newton's Second Law of Motion. This engaging worksheet is great for science class practice, a quick assessment tool, a quiz, a science station, a homework assignment, morning work, early finishers, bell ringers, or lesson plan supplements. In this worksheet, students will answer questions about the following terms:
Newton's laws of motion
Force
Net Force
Mass
Acceleration
Newton
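The terms the worksheet covers fit together through Newton's second law, a = F_net / m, with force in newtons (N), mass in kg, and acceleration in m/s². A minimal Python sketch; the cart scenario is a made-up illustration, not taken from the worksheet:

```python
# Newton's second law: acceleration equals net force divided by mass.
def acceleration(net_force_n, mass_kg):
    """Return acceleration in m/s^2 from net force (N) and mass (kg)."""
    return net_force_n / mass_kg

# Hypothetical example: two opposing forces of 30 N and 10 N act on a 5 kg cart.
net_force = 30.0 - 10.0  # net force = 20 N
print(acceleration(net_force, 5.0))  # 4.0 m/s^2
```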
190693
https://www.coursera.org/courses?query=number%20theory
Number Theory Courses Online
Learn number theory for mathematical research and applications. Understand prime numbers, divisibility, and modular arithmetic.
Explore the Number Theory Course Catalog

Number Theory and Cryptography — University of California San Diego
Skills you'll gain: Cryptography, Key Management, Encryption, Public Key Cryptography Standards (PKCS), Cybersecurity, Arithmetic, Algorithms, Theoretical Computer Science, Computational Thinking, Algebra, Applied Mathematics, Python Programming
4.5 out of 5 stars (607 reviews) · Beginner · Course · 1-4 Weeks

Introduction to Mathematical Thinking — Stanford University
Skills you'll gain: Mathematical Theory & Analysis, Mathematics and Mathematical Modeling, Calculus, Deductive Reasoning, Logical Reasoning
4.8 out of 5 stars (3K reviews) · Intermediate · Course · 1-3 Months

Mathematics for Engineering — Birla Institute of Technology & Science, Pilani
Skills you'll gain: Engineering Calculations, Data Analysis, Trigonometry, Engineering Analysis, Probability & Statistics, Computational Logic, Statistical Analysis, Linear Algebra, Logical Reasoning, Deductive Reasoning, Calculus, Analytics, Bayesian Statistics, Statistical Modeling, Artificial Intelligence and Machine Learning (AI/ML), Differential Equations, Statistical Inference, Theoretical Computer Science, Programming Principles, Descriptive Analytics
4.6 out of 5 stars (154 reviews) · Beginner · Specialization · 3-6 Months

Mathematical Foundations for Data Science and Analytics — University of Pittsburgh
Skills you'll gain: Statistical Analysis, NumPy, Probability Distribution, Matplotlib, Statistics, Pandas (Python Package), Data Science, Probability & Statistics, Probability, Statistical Modeling, Predictive Modeling, Data Analysis, Linear Algebra, Predictive Analytics, Statistical Methods, Mathematics and Mathematical Modeling, Applied Mathematics, Python Programming, Machine Learning, Logical Reasoning
Build toward a degree · Beginner · Specialization · 1-3 Months

Introduction to Discrete Mathematics for Computer Science — University of California San Diego
Skills you'll gain: Graph Theory, Logical Reasoning, Combinatorics, Computational Logic, Deductive Reasoning, Cryptography, Probability, Key Management, Computational Thinking, Encryption, Network Analysis, Public Key Cryptography Standards (PKCS), Algorithms, Theoretical Computer Science, Python Programming, Data Structures, Cybersecurity, Arithmetic, Computer Programming, Network Routing
4.5 out of 5 stars (3.7K reviews) · Beginner · Specialization · 3-6 Months

Algebra: Elementary to Advanced — Johns Hopkins University
Skills you'll gain: Algebra, Mathematical Modeling, Graphing, Arithmetic, Advanced Mathematics, Applied Mathematics, General Mathematics, Mathematical Theory & Analysis, Analytical Skills, Probability & Statistics, Geometry
4.8 out of 5 stars (758 reviews) · Beginner · Specialization · 3-6 Months

Linear Algebra from Elementary to Advanced — Johns Hopkins University
Skills you'll gain: Linear Algebra, Algebra, Applied Mathematics, Artificial Intelligence and Machine Learning (AI/ML), Mathematical Modeling, Advanced Mathematics, Engineering Analysis, Mathematical Theory & Analysis, Numerical Analysis, Geometry, Graph Theory, Applied Machine Learning, Markov Model, Probability
4.7 out of 5 stars (197 reviews) · Beginner · Specialization · 3-6 Months

Introduction to Graph Theory — University of California San Diego
Skills you'll gain: Graph Theory, Combinatorics, Network Analysis, Data Structures, Network Routing, Algorithms, Mathematical Theory & Analysis, Theoretical Computer Science, Program Development
4.5 out of 5 stars (1.1K reviews) · Beginner · Course · 1-3 Months

Game Theory — Stanford University
Skills you'll gain: Game Theory, Strategic Decision-Making, Mathematical Modeling, Graph Theory, Bayesian Statistics, Behavioral Economics, Probability, Economics, Problem Solving, Algorithms
4.6 out of 5 stars (4.9K reviews) · Beginner · Course · 1-3 Months

Algebra: Elementary to Advanced - Functions & Applications — Johns Hopkins University
Skills you'll gain: Mathematical Modeling, Graphing, Algebra, Mathematical Theory & Analysis, Applied Mathematics, Arithmetic
4.8 out of 5 stars (188 reviews) · Beginner · Course · 1-4 Weeks

Understanding Einstein: The Special Theory of Relativity — Stanford University
Skills you'll gain: Physics, Timelines, Verification And Validation, Scientific Methods, Research, Mechanics, Mathematical Modeling, Systems Of Measurement
4.9 out of 5 stars (3.1K reviews) · Beginner · Course · 1-3 Months

Fibonacci Numbers and the Golden Ratio — The Hong Kong University of Science and Technology
Skills you'll gain: Arithmetic, Geometry, Mathematical Theory & Analysis, Advanced Mathematics, Combinatorics, Algebra, Mathematical Modeling, Applied Mathematics
4.8 out of 5 stars (1.2K reviews) · Beginner · Course · 1-4 Weeks

Frequently Asked Questions about Number Theory

Number theory is a branch of mathematics that deals with the properties and relationships of numbers, particularly integers. It focuses on studying patterns, properties, and relationships of numbers, including prime numbers, divisibility, modular arithmetic, and theorems such as the Fundamental Theorem of Arithmetic and Fermat's Last Theorem.
Number theory has applications in various areas such as cryptography, computer science, and physics, and is an essential foundation for higher-level mathematics.

To excel in Number Theory, it is crucial to acquire the following skills:

Mathematical Thinking: Develop a strong foundation in abstract mathematical reasoning and critical thinking. Number Theory involves solving complex problems and proofs, which require logical reasoning and analytical skills.

Number Systems: Familiarize yourself with different number systems such as natural numbers, integers, rational numbers, irrational numbers, real numbers, and complex numbers. Understanding their properties and relationships is fundamental to Number Theory.

Prime Numbers: Study prime numbers extensively. Learn how to identify prime numbers, calculate prime factors, understand divisibility rules, and explore various properties of prime numbers.

Modular Arithmetic: Learn about modular arithmetic and its applications in Number Theory. Understand concepts such as congruence, the Euclidean algorithm, modular inverses, and the Chinese remainder theorem.

Diophantine Equations: Gain proficiency in solving Diophantine equations, which involve finding integer solutions for polynomial equations. Acquaint yourself with techniques like factorization, modular arithmetic, and sometimes algebraic number theory.

Continued Fractions: Study the theory of continued fractions and their applications in Number Theory. Learn how to convert real numbers into continued fractions and perform operations on them.

Cryptography Concepts: Familiarize yourself with basic concepts of cryptography, such as encryption and decryption techniques using Number Theory. Understand how prime numbers play a significant role in modern encryption algorithms like RSA.

Analytical and Problem-Solving Skills: Strengthen your ability to analyze complex problems, identify patterns, and devise efficient solutions in Number Theory.
Develop your problem-solving skills through practice and exposure to a variety of problems.

Remember, Number Theory is a vast field, and each topic may have further subtopics or specialized areas for further exploration. It is vital to have a strong foundation in mathematics as a whole and a willingness to delve into rigorous analytical thinking to master Number Theory.

There are several job opportunities available for individuals with Number Theory skills. Some of the potential job roles include:

Cryptographer: Number theory plays a crucial role in cryptography, the study and creation of secure communication systems. Cryptographers use number theory concepts to develop encryption algorithms and safeguard sensitive data.

Data Scientist: Number Theory is fundamental to the field of Data Science. Professionals in this field use advanced mathematical techniques, including number theory, to derive meaningful insights from large datasets and make data-driven decisions.

Computer Programmer: Number theory is essential in developing algorithms and data structures used in computer programming. With number theory skills, one can excel in fields such as software development, algorithm design, and computational mathematics.

Financial Analyst: Number theory is applied in financial mathematics and asset valuation. As a financial analyst, one can use number theory skills to analyze complex financial models, study patterns in stock markets, and assess investment opportunities.

Operations Research Analyst: Number theory concepts like modular arithmetic and discrete mathematics are highly useful in operations research. Professionals in this role apply number theory to optimize efficiency, solve complex logistics problems, and make data-driven decisions for businesses.

Mathematician/Academic Researcher: Number Theory is a branch of pure mathematics, and individuals with expertise in this field can pursue careers in research and academia.
Mathematicians specializing in Number Theory work on unsolved problems, develop new theories, and contribute to mathematical advancements. Actuary: Actuaries work in the insurance industry and use number theory concepts to assess and mitigate financial risks. With number theory skills, actuaries can calculate probabilities, analyze complex data, and design insurance policies or pension plans. Statistician: Number theory provides the foundation for statistical analysis and modeling. Statisticians with number theory skills can work in various domains such as market research, public health, or social sciences to collect and interpret numerical data. These are just a few examples of the diverse range of job opportunities available for individuals with Number Theory skills.‎ People who are best suited for studying Number Theory are those who have a strong interest and aptitude in mathematics. They should have a solid foundation in algebra and number systems, as Number Theory involves the study of properties and relationships of numbers. Additionally, individuals who enjoy problem-solving, critical thinking, and logical reasoning would find Number Theory fascinating. This field of study is often pursued by mathematicians, computer scientists, and individuals interested in cryptography or advanced mathematics research.‎ There are several topics related to Number Theory that you can study. Some of these topics include: Prime numbers: This involves understanding and analyzing the properties and patterns of prime numbers, such as their distribution and factors. Modular arithmetic: This is a branch of Number Theory that focuses on the remainder when dividing one number by another. It has applications in encryption and digital cryptography. Diophantine equations: These are polynomial equations with integer solutions. Studying Diophantine equations involves analyzing patterns and finding solutions to these equations. 
Congruence relations: This topic explores the idea of equivalence between numbers based on their remainders when divided by a fixed number. Congruence relations are useful in solving problems related to divisibility and solving modular equations. Quadratic reciprocity: Quadratic reciprocity theorem is a fundamental result in Number Theory that establishes relationships between quadratic residues and non-residues. Cryptography: Number Theory has extensive applications in encryption and digital security. Studying Number Theory can help you understand the underlying principles behind cryptographic systems. Continued fractions: A continued fraction is an expression that represents a number as a sequence of fractions. Exploring continued fractions can help you understand irrational numbers and their approximations. Arithmetic functions: These functions assign values to numbers based on certain properties, such as the number of prime factors or their divisibility properties. Studying arithmetic functions can provide insights into behavior and patterns of numbers. These are just a few topics that are related to Number Theory. By delving deeper into these subjects, you can gain a comprehensive understanding of Number Theory as a whole.‎ Online Number Theory courses offer a convenient and flexible way to enhance your knowledge or learn new Number theory is a branch of mathematics that deals with the properties and relationships of numbers, particularly integers. It focuses on studying patterns, properties, and relationships of numbers, including prime numbers, divisibility, modular arithmetic, and theorems such as the Fundamental Theorem of Arithmetic and Fermat's Last Theorem. Number theory has applications in various areas such as cryptography, computer science, and physics, and is an essential foundation for higher-level mathematics. skills. 
Choose from a wide range of Number Theory courses offered by top universities and industry leaders tailored to various skill levels.‎ When looking to enhance your workforce's skills in Number Theory, it's crucial to select a course that aligns with their current abilities and learning objectives. Our Skills Dashboard is an invaluable tool for identifying skill gaps and choosing the most appropriate course for effective upskilling. For a comprehensive understanding of how our courses can benefit your employees, explore the enterprise solutions we offer. Discover more about our tailored programs at Coursera for Business here.‎ Other topics to explore Arts and Humanities 338 courses 1095 courses Computer Science 668 courses Data Science 425 courses Information Technology 145 courses 471 courses Math and Logic 70 courses Personal Development 137 courses Physical Science and Engineering 413 courses Social Sciences 401 courses Language Learning 150 courses
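To make the modular-arithmetic topics listed above a little more concrete (congruences, the Euclidean algorithm, modular inverses, and the Chinese remainder theorem), here is a minimal Python sketch. The function names are illustrative, not from any particular course:

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m; exists only when gcd(a, m) == 1."""
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse")
    return x % m

def crt(r1, m1, r2, m2):
    """Chinese remainder theorem for coprime moduli:
    the unique x mod m1*m2 with x ≡ r1 (mod m1) and x ≡ r2 (mod m2)."""
    return (r1 + m1 * ((r2 - r1) * mod_inverse(m1, m2) % m2)) % (m1 * m2)

print(mod_inverse(3, 7))  # 5, since 3*5 = 15 ≡ 1 (mod 7)
print(crt(2, 3, 3, 5))    # 8, since 8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5)
```

The extended Euclidean algorithm does double duty here: it computes the gcd and, as a by-product, the Bézout coefficients that give the modular inverse used by the CRT step.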
https://www.quora.com/The-three-digit-number-is-given-The-sum-of-all-different-three-digit-numbers-with-the-same-digits-as-the-given-number-including-the-given-number-is-1998-How-many-different-three-digit-numbers-satisfy-the-given
The three-digit number is given. The sum of all different three-digit numbers with the same digits as the given number (including the given number) is 1998. How many different three-digit numbers satisfy the given property?

To solve this problem, we start by understanding the properties of three-digit numbers formed by the same digits. Let's denote the three-digit number as abc, where a, b, and c are its digits. The possible three-digit numbers that can be formed with these digits are abc, acb, bac, bca, cab, and cba.

Step 1: Count the number of different permutations
For a three-digit number formed by distinct digits a, b, and c, there are 3! = 6 permutations. If any digits are repeated, the number of distinct permutations would be less.

Step 2: Calculate the sum of all permutations
The sum of all permutations of the digits a, b, and c can be calculated as follows. Each digit appears in each position (hundreds, tens, units) an equal number of times across all permutations: each digit appears in the hundreds place in 2! = 2 permutations, in the tens place in 2! = 2 permutations, and in the units place in 2! = 2 permutations.
Thus, the total contribution to the sum from each digit can be calculated as:

Sum = 2(a·100 + b·100 + c·100) + 2(a·10 + b·10 + c·10) + 2(a·1 + b·1 + c·1)

This simplifies to:

Sum = 222(a + b + c)

Step 3: Set up the equation
According to the problem, this sum equals 1998:

222(a + b + c) = 1998

Step 4: Solve for a + b + c

a + b + c = 1998 / 222 = 9

Step 5: Find valid three-digit combinations
Next, we need to find how many different combinations of digits a, b, and c can sum to 9, with the constraint that a (the hundreds place) cannot be 0.

Case analysis: Let a, b, and c be digits where a ≥ 1 (since a cannot be zero) and b and c range from 0 to 9. Substituting a′ = a − 1 (so that a′ ≥ 0), we have:

a′ + b + c = 8

By the stars and bars combinatorial method, the number of non-negative integer solutions is:

C(8 + 3 − 1, 3 − 1) = C(10, 2) = 45

Step 6: Filter out invalid combinations
However, we must ensure that all digits are unique, so we count only the cases where a, b, and c are distinct. Candidate combinations of a, b, and c that sum to 9 are:
- 1, 2, 6
- 1, 3, 5
- 1, 4, 4 (not allowed, since digits must be unique)
- 2, 3, 4

Final count
The valid sets of distinct digits that sum to 9 and can form three-digit numbers are 1, 2, 6; 1, 3, 5; and 2, 3, 4. Each of these sets can form 3! = 6 distinct three-digit numbers, so the total number of different three-digit numbers that satisfy the given property is 6 × 3 = 18.

Therefore, the answer is: 18
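The counting in Steps 5 and 6 can be reproduced with a short Python sketch (added here for illustration; it enumerates digit triples directly instead of applying the stars-and-bars formula):

```python
from itertools import combinations

# all digit triples (a, b, c) with a >= 1 and a + b + c == 9
comps = [(a, b, c)
         for a in range(1, 10)
         for b in range(10)
         for c in range(10)
         if a + b + c == 9]
print(len(comps))  # 45, matching C(10, 2)

# unordered sets of three distinct nonzero digits summing to 9
sets9 = [s for s in combinations(range(1, 10), 3) if sum(s) == 9]
print(sets9)       # [(1, 2, 6), (1, 3, 5), (2, 3, 4)]
```

Note that this sketch only enumerates the distinct-digit sets; the repeated-digit cases are taken up in the other answers.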
Anil Bapat · Lives in Mumbai, Maharashtra, India · Author has 2.8K answers and 3.8M answer views · 4y

Let's consider a 3-digit number abc. It has another 5 combinations, viz. acb, bac, bca, cab and cba. As given, when we add these combinations we get 1998:

abc + acb + bac + bca + cab + cba = 1998

We know that a number like abc is 100a + 10b + c, and thus, when we add abc + acb + bac + bca + cab + cba, what we get is 222a + 222b + 222c, i.e. 222(a + b + c). Equating this sum with 1998, we get 222(a + b + c) = 1998 and therefore a + b + c = 1998 / 222 = 9. So, we need to come up with digits a, b and c such that none of a, b and c is 0. Thus, the possible 7 combinations are 1, 1 and 7: 117+171+117+171+711+711 = 1998 ...
Mike Janney · Software Engineer at Electronic Arts (2014–present) · 4y

So you're looking for a set of three digits a, b, and c, such that the six numbers abc, acb, bac, bca, cab, cba sum to 1998; that is, 222a + 222b + 222c = 1998 (a, b, c all single digits). Divide through by 222, and you get a + b + c = 9, a far simpler question to answer. Discounting zero as a potential digit (because it would permute into the hundreds' position, leaving only a two-digit number), and disallowing repetition, the valid sets are:

{1, 2, 6} (six 3-digit numbers by permuting these digits)
{1, 3, 5} (six more)
{2, 3, 4} (six more)

Okay, great. But we disallowed repetition. Are any numbers WITH repetition suitable? Try one repeated digit, with the other different. Now you're looking at a different set of three numbers, aab, aba, baa, which sum to 1998:

222a + 111b = 1998

This one can only be divided through by 111, yielding 2a + b = 18. Still discounting zero, we can make:

{2, 8, 8} (three 3-digit numbers)
{4, 7, 7} (three more)
{5, 5, 8} (three more)

Nice. How about a number where all three digits are the same? You've only got one number now, though, because you can't permute it: aaa = 1998. Nope, doesn't really work, does it?

So I make the final answer (6 + 6 + 6 + 3 + 3 + 3) = 27.
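The full case analysis (distinct digits plus the repeated-digit cases) can be checked by brute force. This sketch, added for illustration, simply counts every three-digit number whose distinct digit-permutations sum to 1998:

```python
from itertools import permutations

def perm_sum(n):
    # sum of all distinct three-digit numbers that use exactly the digits of n
    # (permutations with a leading zero are excluded, as they are two-digit numbers)
    return sum({int(''.join(p)) for p in permutations(str(n)) if p[0] != '0'})

hits = [n for n in range(100, 1000) if perm_sum(n) == 1998]
print(len(hits))  # 27: the 18 distinct-digit numbers plus 9 repeated-digit ones
```

Because the permutations are collected into a set, numbers with a repeated digit (such as 882) contribute only their three distinct rearrangements, exactly as the case analysis requires.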
Ellis Cave · 40+ years as an Electrical Engineer · Author has 7.9K answers and 4.3M answer views · 4y

There are two main options:
1. All three digits must be different.
2. Digit duplications are allowed.

So we look at the first option: all digits are unique. Generate all the possible combinations of three different digits (not starting with zero) that have all three digits unique (no dups) and store them in n. Then generate all 6 three-permutations of each of those 84 three-digit integers, convert them to integers, and store them in m (the snippets appear to be in J):

```
   $m=.|:(perm 3){|:sep n=.(#~99&<)10#.3 comb 10
84 3 6
```

So there are 84 unique three-digit integers with all three digits different. List n, which contains all the unique three-digit combinations of integers with all three digits unique:

```
   n
123 124 125 126 127 128 129 134 135 136 137 138 139 145 146 147 148 149 156 157
158 159 167 168 169 178 179 189 234 235 236 237 238 239 245 246 247 248 249 256
257 258 259 267 268 269 278 279 289 345 346 347 348 349 356 357 358 359 367 368
369 378 379 389 456 457 458 459 467 468 469 478 479 489 567 568 569 578 579 589
678 679 689 789
```

Also show the first and last few sets of 6 permutations in m of each of those 84 integers.
```
   {10#.0 2 1 |:m
│123 132 213 231 312 321│124 142 214 241 412 421│125 152 215 251 512 521│126 162 216 261 612 621│127 172 217 271 712 721│128 182 218 281 812 821│129 192 219 291 912 921│134 143 314 341 413 431│135 153 315 351 513 531│136 163 316 361 613 631│
…….
…….
│567 576 657 675 756 765│568 586 658 685 856 865│569 596 659 695 956 965│578 587 758 785 857 875│579 597 759 795 957 975│589 598 859 895 958 985│678 687 768 786 867 876│679 697 769 796 967 976│689 698 869 896 968 986│789 798 879 897 978 987│
```

So now we have all the possible sets of 6 permutations of each of the 3-digit combinations. We just need to find those sets of 6 that sum to 1998, and list them:

```
   ]r=.c#~1998=+/"1 c=.10#.0 2 1|:m
126 162 216 261 612 621
135 153 315 351 513 531
234 243 324 342 423 432
```

So the answer is that there are 3 sets of 6 three-digit integers that meet all the criteria. Check: add each of the three rows of 6 integers:

```
   +/"1 r
1998 1998 1998
```

Correct. <<<>>> I'll save the second option for another day.
Atul Garg · Enjoy playing with numbers · Author has 326 answers and 108.1K answer views · 4y

Let the number be abc. The sum of all the different numbers formed from these 3 digits will be a multiple of 222, as it will be 222a + 222b + 222c. Dividing 1998 by 222 gives 9, so we have to find the combinations of digits satisfying this condition.
The 3 combinations satisfying this are (i) 1, 3, 5, (ii) 1, 2, 6 and (iii) 2, 3, 4.

Doug Dillon · Ph.D. Mathematics · Author has 12.4K answers and 11.4M answer views · 3y

Originally Answered: The three-digit number is given. The sum of all different three-digit numbers with the same digits as the given number (including the given number) is . How many different three-digit numbers satisfy the given property?

If the given number is 100a + 10b + c, then the sum of all six is 222(a + b + c). You claim that this sum is something. All three-digit numbers will produce something as the sum.
Frank Abbing · Former Pensioner at Philips (Electronics Company) (1965–1990) · Author has 1K answers and 226.5K answer views · Updated 4y

My computer thinks the answer is 55, but it did some re-thinking!
It now thinks the answer is 3 (2nd update: a tiny nicefication):

```c
#include <stdio.h>

int convert(int a, int b, int c) { return a * 100 + b * 10 + c; }

int main() {
    int a, b, c, n1, n2, n3, n4, n5, n6, err;
    for (a = 1; a < 10; a++)
        for (b = a; b < 10; b++)
            for (c = b; c < 10; c++) {
                n1 = convert(a, b, c);
                n2 = convert(a, c, b);
                n3 = convert(b, a, c);
                n4 = convert(b, c, a);
                n5 = convert(c, a, b);
                n6 = convert(c, b, a);
                err = 0;
                if ((n1 + n2 + n3 + n4 + n5 + n6) != 1998) err++;
                if (n1 == n2 || n1 == n3) err++;
                if (n1 == n4 || n1 == n5 || n1 == n6) err++;
                if (n2 == n3 || n2 == n4) err++;
                if (n2 == n5 || n2 == n6) err++;
                if (n3 == n4 || n3 == n5 || n3 == n6) err++;
                if (n4 == n5 || n4 == n6) err++;
                if (n5 == n6) err++;
                if (!err) printf("%d %d %d %d %d %d\n", n1, n2, n3, n4, n5, n6);
            }
}
```

It shows three numbers, 126, 135 and 234, and their permutations.

Richard Polunsky · I've been interested in math since I was very young. · Author has 8.6K answers and 2M answer views · 4y

I make it 28 numbers that satisfy these requirements. It's late, so I could be wrong. I've got my list to post after someone else either agrees or disputes that count, then we can compare our lists.
Virender Bishnoi · Mechanical Engineer, Dragon Ball & Naruto Fan · Author has 66 answers and 121.7K answer views · 5y

How many 3-digit numbers have an even digit sum?

As we know, the sum of two odd numbers is even, and the sum of two even numbers is also even. For odd I will use the symbol O and for even the symbol E. A three-digit number has an even digit sum in the following cases: OOE, OEO, EOO, EEE. Among the digits 0-9 there are 5 odd and 5 even digits, so for the above cases:

OOE: 5·5·5 = 125 numbers whose digit sum is even
OEO: 5·5·5 = 125 numbers
EOO: 4·5·5 = 100 numbers (a 0 in the hundreds place would make it a 2-digit number, so after excluding 0 we have 4 even digits)
EEE: 4·5·5 = 100 numbers (same explanation as the previous case)

So the total of such numbers = 125 + 125 + 100 + 100 = 450.
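The case count above (450 three-digit numbers with an even digit sum) can be double-checked by brute force; a minimal Python sketch, added here for verification and not part of the original answer:

```python
# Brute-force check: how many three-digit numbers have an even digit sum?
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

count = sum(1 for n in range(100, 1000) if digit_sum(n) % 2 == 0)
print(count)  # 450, matching 125 + 125 + 100 + 100
```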
190695
https://it.wikipedia.org/wiki/Grandezza_scalare
Grandezza scalare (scalar quantity) - Wikipedia

From Wikipedia, the free encyclopedia.

Contents: 1 Examples · 2 Notes · 3 Bibliography · 4 Related entries · 5 External links
[Figure: temperature is a scalar quantity]

In physics, a scalar quantity is a quantity that is described mathematically by a single real number, itself called a scalar, often associated with a unit of measurement. Unlike vector quantities, it is therefore insensitive to the dimensionality of space and to the particular reference frame or coordinate system used. The name "scalar" comes from the fact that multiplying a vector by a scalar changes the vector's magnitude: the vector is resized, "rescaled" by the scalar, which acts as a scale factor.

Examples

- Electric charge
- Mass
- Volume and surface area
- Density (the ratio of two scalar quantities: mass and volume)
- Pressure
- Temperature
- Energy and work (the latter is the scalar product of force and displacement)
- Wavelength (the ratio of the magnitude of a wave's velocity to its frequency, i.e. the ratio of two scalar quantities)

Some quantities can be understood as either scalar or vector depending on the context. For example, velocity is in general not a scalar quantity: to define it one needs, besides the numerical value of its intensity (its magnitude), also its direction and orientation. It is therefore a vector quantity, but it can be represented by the magnitude of the velocity vector when the direction is irrelevant or uniquely determined (for example in rectilinear motion). English has two different names for these two concepts: "speed" for the scalar quantity and "velocity" for the vector one. Length, understood as the norm of a vector, is also a scalar, since it is induced by a scalar product.

Notes

1. (EN) Vectors and Scalars, zahniser.net. Retrieved 4 August 2018.
2. Parodi et al., vol. 3, p. 6.
3. Parodi et al., vol. 1, p. 89.
4. Parodi et al., vol. 1, p. 439.
5. Walker, p. 200.
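The "rescaling" described in the article above can be made concrete numerically; a small Python illustration (an addition for this edit, not part of the Wikipedia entry): multiplying a vector by a scalar k multiplies its norm by |k| without changing anything else about the space it lives in.

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a list of components."""
    return math.sqrt(sum(x * x for x in v))

def scale(k, v):
    """Multiply vector v by the scalar k, component by component."""
    return [k * x for x in v]

v = [3.0, 4.0]           # |v| = 5
w = scale(2.5, v)        # the scalar 2.5 "rescales" v
print(norm(v), norm(w))  # 5.0 12.5 -> the norm is scaled by the factor 2.5
```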
Bibliography

- Gian Paolo Parodi, Marco Ostili and Guglielmo Mochi Onori, L'Evoluzione della Fisica, vol. 1, Paravia, 2006, ISBN 978-88-395-1609-1.
- Gian Paolo Parodi, Marco Ostili and Guglielmo Mochi Onori, L'Evoluzione della Fisica, vol. 3, Paravia, 2006, ISBN 88-395-1611-5.
- James S. Walker, Corso di Fisica, Volume 1: Meccanica, Linx, 2010, ISBN 978-88-6364-036-6.
- Antonio Caforio and Aldo Ferilli, Fisica!, Le Monnier, 2010, ISBN 978-88-00-20945-8.

Related entries

- Vector quantity (Grandezza vettoriale)

External links

- (EN) scalar, in Encyclopædia Britannica, Encyclopædia Britannica, Inc.

This page was last modified on 26 August 2025 at 04:01. Text is available under the Creative Commons Attribution-ShareAlike license; additional conditions may apply.
190696
https://people.umass.edu/wqd/strobel/angmom/angmom.htm
Angular Momentum

This page was copied from Nick Strobel's Astronomy Notes. Go to his site at www.astronomynotes.com for the updated and corrected version.

Definition

To describe how things move we often use the basic quantities of length, mass, and time. Quantities such as velocity, acceleration, force and energy are very powerful ones that help us understand how an object's position will change over time and how it will interact with other things in the universe. Momentum and its cousin angular momentum are other very powerful quantities.

Ordinary momentum is a measure of an object's tendency to move at constant speed along a straight path. Momentum depends on speed and mass. A train moving at 20 mph has more momentum than a bicyclist moving at the same speed. A car colliding at 5 mph does not cause as much damage as that same car colliding at 60 mph. For things moving in straight lines, momentum is simply mass × speed. In astronomy most things move in curved paths, so we generalize the idea of momentum and have angular momentum. Angular momentum measures an object's tendency to continue to spin. An "object" can be either a single body or two or more bodies acting together as a single group.

angular momentum = mass × velocity × distance (from the point the object is spinning or orbiting around)

Very often in astronomy, the object (or group of objects) we're observing has no outside forces acting on it in a way to produce "torques" that would disturb the angular motion of the object (or group of objects). A "torque" is simply a force acting along a line that is off the object's spin axis. In these cases, we have conservation of angular momentum:

conservation of angular momentum --- the total amount of angular momentum does not change with time no matter how the objects interact with one another.
A planet's velocity and distance from the Sun will change, but the combination speed × distance will not change unless another planet or star passes close by and provides an extra gravity force.

Applications

1) Kepler's Second Law of orbital motion

The area swept out by a line connecting an orbiting object and the central point is the same for any two equal periods of time. That line is called a radius vector in the following discussion. The rate of change of the swept-out area does NOT change with time. The line along which gravity acts is parallel to the radius vector. This means that there are no torques disturbing the angular motion and, therefore, angular momentum is conserved.

The part of the orbital velocity (v-orbit) perpendicular (at a right angle) to the radius vector (r) is v_t. The rate of change of the swept-out area = r × v_t / 2. To calculate the orbital angular momentum, use v_t for the velocity. So, the angular momentum = mass × v_t × r = mass × 2 × (rate of change of area). That value does not change over time. So if r decreases, v-orbit (and v_t) must increase! If r increases, v-orbit (and v_t) must decrease. This is just what Kepler observed for the planets!

2) Earth-Moon system

The total angular momentum = spin angular momentum + orbital angular momentum. The total angular momentum is CONSTANT. To find the spin angular momentum, subdivide the object into small pieces of mass and find the angular momentum for each of the small pieces. Then add up the angular momentum for all of the pieces. The Earth's spin speed is decreasing, so its spin angular momentum is DEcreasing. Therefore, the Moon's orbital angular momentum must compensate by INcreasing. It does this by increasing the Earth-Moon distance.

3) Rapidly spinning neutron stars

Originally, a big star has a core 10,000's - 100,000's km in radius (the whole star is even bigger!).
Here the radius is used instead of the diameter, because what is important is how far each piece of the core is from the spin axis that goes through the exact center. The core spins at 2 - 10 km/sec at the core's equator. If no external forces produce torques, the angular momentum is constant. During a supernova the outer layers are blown off and the core shrinks to only 10 kilometers in radius! The core angular momentum is approximately = 0.4 × M × V × R, and the mass M has stayed approximately the same. When the radius R shrinks by factors of 10,000's, the spin speed V must increase by 10,000's of times. Sometimes the neutron star suddenly shrinks slightly (by a millimeter or so) and it spins faster. Over time, though, the neutron star has been producing radiation from its strong magnetic field. This radiation is produced at the expense of the rotational energy, and the angular momentum is not strictly conserved --- it slowly decreases. Therefore, the neutron star spin speed slowly decreases.

4) Accretion disk in a binary system

Gas flowing from one star falls toward its compact companion into an orbit around it. The orbital angular momentum is conserved, so as the gas' distance from the compact companion DEcreases, its orbital speed must INcrease. It forms a rapidly rotating disk-like whirlpool called an accretion disk. Over time some of the gas in the disk can give torques to other parts of the disk's orbital motions through friction. This causes their angular momentum to decrease. Some of that gas, then, eventually falls onto the compact companion.

5) Forming Galaxy

A huge slowly spinning gas cloud collapses. Parts of the roughly spherical gas cloud break up into small chunks to form stars and globular clusters. As the rest of the gas cloud collapses, the inner denser parts collapse more rapidly than the less dense parts. Stars form in the inner denser parts before they form in the outer less dense parts.
All the time as the cloud collapses, the spin speed must increase. Since no outside forces produce torques, the angular momentum is conserved. The rapidly spinning part of the gas cloud eventually forms a disk. This is because the cloud can collapse more easily in a direction parallel to the spin axis. The gas that is orbiting perpendicular to the spin axis has enough inertia to resist the inward pull of gravity (the gas feels a "centrifugal force"). The most dense parts of the disk will form stars.

Go to Astronomy Notes beginning. Last updated: 19 August 1997. Is this page a copy of Strobel's Astronomy Notes? Author of original content: Nick Strobel
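The conservation rule used throughout these applications --- mass × velocity × distance held constant --- can be sketched numerically; the values below are illustrative, not taken from the notes:

```python
# Conservation of angular momentum: L = m * v_t * r stays fixed, so v_t ~ 1/r.

def vt_after(vt1: float, r1: float, r2: float) -> float:
    """New tangential speed when the distance changes from r1 to r2,
    with m * v_t * r held constant (m unchanged)."""
    return vt1 * r1 / r2

# Kepler's second law: halve the distance, the tangential speed doubles.
v_near = vt_after(20.0, 2.0, 1.0)  # 40.0

# Neutron-star spin-up: L = 0.4*M*V*R with M fixed, so V2 = V1 * R1 / R2.
# Illustrative core: radius 10,000 km spinning at 5 km/s collapses to 10 km.
v_spun_up = vt_after(5.0, 10_000.0, 10.0)  # 5000.0 km/s, a factor of 1000

print(v_near, v_spun_up)
```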
190697
https://math.stackexchange.com/questions/1540994/let-a-2-3-4-and-b-a-b-list-elements-of-a-times-b
elementary set theory - Let $A = {2,3,4}$ and $B = {a,b}$. List elements of $A\times B$. - Mathematics Stack Exchange
Let A={2,3,4} and B={a,b}. List elements of A×B. [closed]

Asked 9 years, 10 months ago · Modified 9 years, 10 months ago · Viewed 520 times · Score: -2

Closed. This question is off-topic. It is not currently accepting answers. This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level. Closed 9 years ago.

Checking for revision! Is the correct answer to this:

A×B = {(2 a), (3 a), (4 a), (2 b), (3 b), (4 b)} ?

Thanks

Tags: elementary-set-theory, proof-verification

edited Nov 22, 2015 at 20:41 by user26857 · asked Nov 22, 2015 at 12:45 by Adam Jarvis

Comments:

Given the sets A and B, how is A∗B defined? – João Victor Bateli Romão, Nov 22, 2015 at 12:48

@JoãoVictorBateliRomão the title of this question is all I am provided with in the test paper I am working on – Adam Jarvis, Nov 22, 2015 at 12:50

Isn't the notation A×B? – João Victor Bateli Romão, Nov 22, 2015 at 12:52

Yes, apologies.
My head stays in programming language arithmetic operators. – Adam Jarvis, Nov 22, 2015 at 12:53

For one thing, the elements of A×B are tuples, not products; e.g. (2,a), not (2 a). – Michael Grant, Nov 22, 2015 at 20:44

2 Answers

Answer (score 3):

The terms of the ordered pairs must be separated by a comma. The right answer is {(2,a), (3,a), (4,a), (2,b), (3,b), (4,b)}.

edited Nov 23, 2015 at 1:09 · answered Nov 22, 2015 at 12:59 by João Victor Bateli Romão

Answer (score 1):

From comments I see you meant ×, which is the standard notation for the Cartesian product, defined for two (or more) sets. So for sets A, B:

A×B = {(a,b) ∣ a∈A, b∈B}.

So, your answer is missing the commas. I've added two notes about the Cartesian product, since it seems to me like this concept is new to you. Hopefully it will make it rather clear than confusing for you.

Note A

To be more precise, what does (a,b) mean? It means that the pair a,b is ordered, that is, if a≠b, then (a,b)≠(b,a). But what does "ordered" mean in set theory, where e.g. {1,2,3}={3,1,2}? There are probably many (infinitely many, I suppose) ways to define (a,b) so that it satisfies the given condition; the standard way I met (called Kuratowski, thanks wiki) is the following:

(a,b) := {{a},{a,b}}.

And it is true, if a≠b, that {{a},{a,b}} = (a,b) ≠ (b,a) = {{b},{a,b}}.

Note B

The second note is about the associativity of ×.
The question is whether it is true that A×(B×C) = (A×B)×C for sets A, B, C. Suppose now a∈A, b∈B, c∈C, and look at both sides. To be precise, on the left we get (a,(b,c)), because first we form B×C, obtaining (b,c), and then (a,(b,c)). However, on the right side we get ((a,b),c). We see that (a,(b,c)) ≠ ((a,b),c), not even for a=b=c. But we see that there is a natural correspondence between those two, meaning that there is a natural bijection between A×(B×C) and (A×B)×C, which is given by (x,(y,z)) ↦ ((x,y),z). And therefore we can see A×(B×C) and (A×B)×C as the same thing, and instead of writing (a,(b,c)) or ((a,b),c) we write (a,b,c). (We used three sets A, B, C, but this can be done for infinitely many.)

edited Nov 22, 2015 at 20:51 · answered Nov 22, 2015 at 13:23 by quapka
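The corrected answer can be checked mechanically; a short Python sketch using itertools.product, added here for illustration and not part of either answer:

```python
from itertools import product

A = [2, 3, 4]
B = ["a", "b"]

# Cartesian product: all ordered pairs (x, y) with x in A and y in B.
AxB = set(product(A, B))
print(sorted(AxB))  # [(2, 'a'), (2, 'b'), (3, 'a'), (3, 'b'), (4, 'a'), (4, 'b')]

assert len(AxB) == len(A) * len(B)  # |A×B| = |A|·|B| = 6
```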
190698
https://www.youtube.com/watch?v=NYapxKLdUlo
Three boxes numbered I, II, III contain balls; one box is selected at random and a ball is drawn from it. If the ball is red, find the probability that it is from box II.

E-SHTAM · 3490 subscribers · 31 likes · 1952 views · Posted: 16 Feb 2023

Description

Probability is a fascinating subject that has practical applications in numerous fields, including science, engineering, economics, and many others. It is the branch of mathematics that deals with the study of random events, and it has many real-world applications. In this video, we will explore the concept of conditional probability, which is the probability of an event occurring given that another event has already occurred. Specifically, we will look at the following problem: Three boxes numbered I, II, III contain the balls as follows. Box I has two white balls and one red ball, Box II has two red balls and one white ball, and Box III has three white balls. One box is randomly selected, and a ball is drawn from it. If the ball is red, then find the probability that it is from Box II. We will use a variety of probability concepts and tools to solve this problem, and we will explain each step of the process in detail.

Problem Statement: The problem we are trying to solve can be stated as follows: We have three boxes labeled I, II, and III, which contain a certain number of balls. Box I has two white balls and one red ball, Box II has two red balls and one white ball, and Box III has three white balls. We randomly select one of the boxes, and then we draw a ball from that box. If the ball we draw is red, what is the probability that it came from Box II?

Solution: To solve this problem, we will need to use a few probability concepts and tools. First, we need to understand the concept of conditional probability, which is the probability of an event occurring given that another event has already occurred. We will also need to use Bayes' theorem, which is a formula that helps us calculate conditional probabilities.
Step 1: Determine the Probability of Selecting Each Box

The first step in solving this problem is to determine the probability of selecting each box. Since we are randomly selecting one of the three boxes, the probability of selecting each box is equal to 1/3. Therefore, the probability of selecting Box I is 1/3, the probability of selecting Box II is 1/3, and the probability of selecting Box III is 1/3.

Step 2: Determine the Probability of Drawing a Red Ball

The second step is to determine the probability of drawing a red ball. To do this, we need to calculate the probability of drawing a red ball from each box and then use the law of total probability to find the overall probability of drawing a red ball. The probability of drawing a red ball from Box I is 1/3, since there is one red ball out of a total of three balls in the box. The probability of drawing a red ball from Box II is 2/3, since there are two red balls out of a total of three balls in the box. The probability of drawing a red ball from Box III is 0, since there are no red balls in the box. To find the overall probability of drawing a red ball, we use the law of total probability, which states that the probability of an event occurring is equal to the sum of the probabilities of the event occurring given each possible condition. In this case, the event we are interested in is drawing a red ball, and the possible conditions are selecting each of the three boxes.
Therefore, the overall probability of drawing a red ball is:

P(Red) = P(Red|Box I)·P(Box I) + P(Red|Box II)·P(Box II) + P(Red|Box III)·P(Box III)
       = (1/3)·(1/3) + (2/3)·(1/3) + 0·(1/3)
       = 1/9 + 2/9
       = 1/3

8 comments

Transcript: welcome to my online email classes so before watching this video I request all of you subscribe this channel give a like there are three boxes numbered one two three contains the following words one box one law white balls one black balls to red balls
Box II contains 2 white, 1 black, and 1 red ball, and Box III contains 4 white, 5 black, and 3 red balls. One box is selected at random and a ball is drawn from it. Find the probability that the ball is red, and the probability that it came from Box II given that it is red.

Solution. Let B1, B2, B3 be the events of selecting boxes I, II, and III respectively. Since a box is chosen at random, each box is equally likely, so P(B1) = P(B2) = P(B3) = 1/3.

Let R be the event that the ball drawn is red. Box I holds 1 + 2 + 3 = 6 balls, of which 3 are red, so P(R | B1) = 3/6 = 1/2. Box II holds 4 balls, of which 1 is red, so P(R | B2) = 1/4. Box III holds 12 balls, of which 3 are red, so P(R | B3) = 3/12 = 1/4.

By the law of total probability,

P(R) = P(R | B1) P(B1) + P(R | B2) P(B2) + P(R | B3) P(B3)
     = (1/2)(1/3) + (1/4)(1/3) + (1/4)(1/3) = 1/3.

By Bayes' theorem, the probability that the red ball came from Box II is

P(B2 | R) = P(R | B2) P(B2) / P(R) = (1/4)(1/3) / (1/3) = 1/4.

The video then sets up a second example in which the box is chosen by throwing a die: Box I is chosen if 1 or 2 turns up, Box II if 3 or 4, and Box III if 5 or 6, so again each box has probability 2/6 = 1/3. A ball is drawn and found to be red, and Bayes' theorem is applied in the same way; the conditional probabilities of drawing red that the video reads off are 2/5 (a box of 2 + 2 + 1 = 5 balls with 2 red) and 2/9 (a box of 4 + 3 + 2 = 9 balls with 2 red).
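The box-and-ball computation worked in the transcript can be checked with a few lines of Python. The ball counts below (Box I: 1 white, 2 black, 3 red; Box II: 2 white, 1 black, 1 red; Box III: 4 white, 5 black, 3 red) are as read off the transcript, so treat them as an assumption about the original problem statement; exact fractions avoid floating-point noise:

```python
from fractions import Fraction as F

# Ball counts per box (assumed from the transcript).
boxes = {
    "I":   {"white": 1, "black": 2, "red": 3},
    "II":  {"white": 2, "black": 1, "red": 1},
    "III": {"white": 4, "black": 5, "red": 3},
}

prior = F(1, 3)  # each box is equally likely to be selected

# P(red | box) for each box: red balls over total balls
p_red_given = {name: F(counts["red"], sum(counts.values()))
               for name, counts in boxes.items()}

# Law of total probability: P(red) = sum over boxes of P(red | box) P(box)
p_red = sum(p_red_given[name] * prior for name in boxes)

# Bayes' theorem: P(Box II | red) = P(red | Box II) P(Box II) / P(red)
p_box2_given_red = p_red_given["II"] * prior / p_red

print(p_red)             # 1/3
print(p_box2_given_red)  # 1/4
```

Because the prior is uniform, the posterior P(Box II | red) reduces to P(red | Box II) divided by the sum of the three conditional probabilities, which is why the answer equals 1/4 here.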
190699
https://federalism.org/encyclopedia/no-topic/new-jersey-plan/
New Jersey Plan

August 9, 2023
Author: Charles Grapski

On June 14, 1787, William Paterson, delegate from New Jersey, rose in the Convention on behalf of a coalition of delegates who desired to offer a “purely federal” plan as an alternative to the “supreme” “national government” proposed by the “Virginia Plan.” Paterson spoke for the majority of his own state delegation as well as the delegations of Connecticut, Delaware, and New York (the coalition was thus not simply composed of the “small states”) and one delegate from Maryland. His plan, offered the following day in the form of nine resolutions, is commonly referred to as the “New Jersey Plan.”

Paterson’s proposal was more faithful to the instructions of Congress establishing the Convention for “the sole and express purpose of revising the Articles of Confederation.” Whereas the Virginia Plan proposed the effective replacement of the Articles with a new form of “national” government, the New Jersey Plan sought merely to amend the Articles, retaining their “federal” principle. Under the existing Articles, each state had entered into a treaty, or a “firm league of friendship,” with the others, retaining its “sovereignty, freedom and independence, and every Power, Jurisdiction and Right, which is not by this confederation expressly delegated” (Article II).

There was a general consensus that the powers of the Congress under the Articles were insufficient to provide for the “exigencies of government, and the preservation of the Union.” Accordingly, the New Jersey Plan proposed to amend, rather than replace, the Articles, granting additional powers to Congress, particularly for the raising of revenue and the regulation of trade.
It also sought to strengthen Congress’s ability to enforce its enactments with the addition of an executive, elected by Congress, and a judicial branch, and by declaring that the acts and treaties of Congress were to be “the supreme law of the respective States.” Significantly, the existing representation in Congress, based upon an equality of states rather than proportionality, was to be retained.

By the time Paterson and his allies had organized and offered their plan, however, Virginia’s proposal had already formed the organizing thread around which the Convention’s deliberations had taken shape. For the next three days the Convention considered both plans, during which time Alexander Hamilton, rising in objection to both, offered a plan of his own that strengthened the central, national government further than even the Virginia Plan had proposed. The Convention never formally considered Hamilton’s alternative and on June 19 voted, with Paterson’s coalition in the minority, to proceed with the deliberations based upon the Virginia Plan. The advocates of the more radical plan had prevailed in establishing a new, national foundation for the American government.

Although the Convention effectively rejected the New Jersey Plan with this vote, the proposal was forwarded to the committees that drafted the final language of the Constitution. However, its proponents were not without some significant success in shaping the final product. Through what is commonly referred to as the Great Compromise, the proponents of the New Jersey Plan prevailed in establishing equality of representation for the states in one of the two branches of Congress (the Senate).

SEE ALSO: Connecticut Compromise; Constitutional Convention of 1787; Virginia Plan

Bibliography

Max Farrand, ed., Records of the Federal Convention, 4 vols. (New Haven, CT: Yale University Press, 1937); and James H. Hutson, ed., Supplement to the Records of the Federal Convention (New Haven, CT: Yale University Press, 1987).