One of the best things about taking physics classes is that the equations you learn are directly applicable to the real world. Every so often, while reading a book or watching a movie, I’m seized by the sudden urge to check it for plausibility. A few scratches on a piece of paper later and I will generally know one way or the other.
One of the most amusing things I’ve found doing this is that the people who come up with the statistics for Pokémon definitely don’t have any sort of education in physics.
Take Onix. Onix is a rock/ground Pokémon renowned for its large size and sturdiness. Its physical statistics reflect this: it’s 8.8 metres (28′) long and weighs 210kg (463lbs).
Surely such a large and tough Pokémon should be very, very dense, right? Density is such an important tactile cue for us. Don’t believe me? Pick up a large piece of solid metal. Its surprising weight will make you take it seriously.
Let’s check if Onix would be taken seriously, shall we? Density is equal to mass divided by volume. We use the symbol ρ to represent density, which gives us the following equation:

ρ = m / V
We already know Onix’s mass. Now we just need to calculate its volume. Luckily Onix is pretty cylindrical, so we can approximate it with a cylinder. The equation for the volume of a cylinder is pretty simple:

V = πr²h
Where π is the ratio between the circumference of a circle and its diameter (approximately 3.1415…, no matter what Indiana says), r is the radius of the circle (always one half the diameter), and h is the height of the cylinder.
Given that we know Onix’s height, we just need its diameter. Luckily the Pokémon TV show gives us a sense of scale.
Judging by the image, Onix probably has an average diameter somewhere around a metre (3 feet for the Americans). This means Onix has a radius of 0.5 metres and a height of 8.8 metres. When we put these into our equation, we get:

V = π × (0.5m)² × 8.8m
For a volume of approximately 6.9m³. To get a comparison, I turned to Wolfram Alpha, which told me that this is about 40% of the volume of a gray whale or a freight container (which incidentally implies that gray whales are about the size of standard freight containers).
Armed with a volume, we can calculate a density:

ρ = 210kg / 6.9m³ ≈ 30.4kg/m³
Okay, so we know that Onix is 30.4kg/m³, but what does that mean?
Well it’s currently hard to compare. I’m much more used to seeing densities of sturdy materials expressed in tonnes per cubic metre or grams per cubic centimetre than I am seeing them expressed in kilograms per cubic metre. Luckily, it’s easy to convert between these.
There are 1,000 kilograms in a tonne. If we divide our density by a thousand, we can calculate a new density for Onix of 0.0304t/m³.
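Before we compare, it’s worth sanity-checking the arithmetic. A few lines of Python reproduce every number so far (the one metre diameter is the same eyeballed guess as above; nothing here is official Pokémon data beyond the height and weight already quoted):

```python
import math

# Onix's published stats
mass_kg = 210.0
height_m = 8.8   # treating Onix as a cylinder of this height
radius_m = 0.5   # eyeballed from the TV show

# Volume of a cylinder: V = pi * r^2 * h
volume_m3 = math.pi * radius_m**2 * height_m
density_kg_m3 = mass_kg / volume_m3

print(f"Volume:  {volume_m3:.1f} m^3")          # ~6.9 m^3
print(f"Density: {density_kg_m3:.1f} kg/m^3")   # ~30.4 kg/m^3
print(f"Density: {density_kg_m3 / 1000} t/m^3") # ~0.0304 t/m^3
```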
How does this fit in with common materials, like wood, Styrofoam, water, stone, and metal?
From this chart, you can see that Onix’s density is eerily close to that of Styrofoam. Even notoriously light balsa wood is five times denser. Actual rock is about 85 times denser. If Onix were made of granite, it would weigh 18 tonnes, much heavier than even Snorlax (the heaviest of the original Pokémon at 460kg).
While most people wouldn’t be able to pick Onix up (it may not be dense, but it is big), it wouldn’t be impossible to drag it. Picking up part of it would feel disconcertingly light, like picking up an aluminum ladder or carbon fibre bike, only more so.
How did the creators of Pokémon accidentally bestow one of the most famous of their creations with a hilariously unrealistic density?
I have a pet theory.
I went to school for nanotechnology engineering. One of the most important things we looked into was how equations scaled with size.
Humans are really good at intuiting linear scaling. When something scales linearly, every twofold change in one quantity brings about a twofold change in another. Time and speed scale linearly (albeit inversely). Double your speed and the trip takes half the time. This is so simple that it rarely requires explanation.
Unfortunately for our intuitions, many physical quantities don’t scale linearly. These were the cases that were important for me and my classmates to learn, because until we internalized them, our intuitions were useless on the nanoscale. Many forces, for example, scale such that they become incredibly strong incredibly quickly at small distances. This leads to nanoscale systems exhibiting a stickiness that is hard on our intuitions.
It isn’t just forces that have weird scaling though. Geometry often trips people up too.
In geometry, perimeter is the only quantity I can think of that scales linearly with size. Double the side length of a square and the perimeter doubles. The area, however, does not. Area is quadratically related to side length: double the side length of a square and you’ll find the area quadruples. Triple it and the area increases nine times. Area varies with the square of the length, a property that isn’t just true of squares. The area of a circle is just as tied to the square of its radius as a square is to the square of its side length.
Volume is even trickier than area. It scales with the third power of the size. Double the side length of a cube and its volume increases eight-fold. Triple it, and you’ll see 27 times the volume. Volume increases with the cube of the length (which, again, works for shapes other than cubes).
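These scaling laws are easy to verify numerically. This is pure geometry, no assumptions:

```python
# Perimeter, area, and volume of a square/cube as side length scales
def perimeter(side):
    return 4 * side

def area(side):
    return side ** 2

def volume(side):
    return side ** 3

for k in (2, 3):
    print(f"scale x{k}: perimeter x{perimeter(k) / perimeter(1):.0f}, "
          f"area x{area(k) / area(1):.0f}, "
          f"volume x{volume(k) / volume(1):.0f}")
# scale x2: perimeter x2, area x4, volume x8
# scale x3: perimeter x3, area x9, volume x27
```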
If you look at the weights of Pokémon, you’ll see that the ones that are the size of humans have fairly realistic weights. Sandslash is the size of a child (it stands 1m/3′ high) and weighs a fairly reasonable 29.5kg.
(This only works for Pokémon really close to human size. I’d hoped that Snorlax would be about as dense as marshmallows so I could do a fun comparison, but it turns out that marshmallows are four times as dense as Snorlax – despite marshmallows only having a density of ~0.5t/m³.)
Beyond these touchstones, you’ll see that the designers of Pokémon increased their weight linearly with size. Onix is a bit more than eight times as long as Sandslash and weighs seven times as much.
Unfortunately for realism, weight is just density times volume and, as I just said, volume increases with the cube of length. Onix shouldn’t weigh seven or even eight times as much as Sandslash. At a minimum, its weight should be eight times eight times eight Sandslash weights: a full 512 times more.
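Here’s the linear-vs-cubic comparison in code. Treating Sandslash as the human-scale touchstone and assuming (roughly, since the two have very different shapes) that a realistically scaled Onix would keep Sandslash’s density:

```python
# Sandslash: the human-scale touchstone
sandslash_height = 1.0    # metres
sandslash_weight = 29.5   # kg

onix_length = 8.8         # metres
scale = onix_length / sandslash_height  # 8.8x bigger

linear_guess = sandslash_weight * scale       # what the designers seem to have done
cubic_guess = sandslash_weight * scale ** 3   # what constant density would demand

print(f"Linear scaling: {linear_guess:.0f} kg")            # ~260 kg, close to Onix's actual 210 kg
print(f"Cubic scaling:  {cubic_guess / 1000:.1f} tonnes")  # ~20 tonnes
```

The linear guess lands near Onix’s listed 210kg; the cubic guess lands near the 18-tonne granite figure from earlier, which is a hint that weight-scales-linearly really is the pattern the designers followed.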
Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.
But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?
I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.
Atmospheric Radionuclide Monitoring
Nuclear explosions generate radionuclides, either by dispersing unreacted fuel, as direct products of fission, or by interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed from any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.
For the purposes of minimizing fallout, underground tests are obviously preferred. But because they emit only small amounts of a few xenon radionuclides, they are much harder for radionuclide monitoring to detect.
Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.
Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.
Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon gets through to analysis.
Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or by looking directly at gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which affects the frequency with which it emits beta- and gamma-rays) and a unique method of decay (which determines if the decay products are primarily alpha-, beta-, or gamma-rays). Comparing the observed decay events to these “fingerprints” allows the relative abundance of xenon nuclides to be estimated.
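To see why half-lives act like fingerprints, here’s a toy sketch of how the activity ratio of two xenon radionuclides drifts over time. The half-lives are standard published values, rounded; the starting inventory is made up, and real analysis involves far more than this:

```python
import math

# Approximate published half-lives, in hours
HALF_LIFE_H = {"Xe-133": 5.25 * 24, "Xe-135": 9.14}

def activity(n_atoms, half_life_h, t_h):
    """Decays per hour of a sample after t_h hours (A = lambda * N)."""
    lam = math.log(2) / half_life_h
    return lam * n_atoms * math.exp(-lam * t_h)

# Hypothetical equal starting inventories of each nuclide
n0 = 1e9
for t in (0, 24, 48):
    a133 = activity(n0, HALF_LIFE_H["Xe-133"], t)
    a135 = activity(n0, HALF_LIFE_H["Xe-135"], t)
    print(f"t = {t:>2} h: Xe-135/Xe-133 activity ratio = {a135 / a133:.2f}")
```

Because short-lived Xe-135 burns itself off so much faster, the ratio of the two activities pins down how old the sample is and what mix it started with, which is exactly the kind of inference that separates a fresh test from reactor background.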
There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?
Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.
Seismic Monitoring

Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.
That isn’t the case.
A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.
Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 appears to have been registered as equivalent to a magnitude 4.1 earthquake. A quick survey of ongoing earthquakes will probably show you dozens of detected quakes less powerful than even that small North Korean test.
This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”
It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.
First, the waveform is different. Imagine you’re holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there’s a perspective from which these waves will look like the letter “s”.
The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.
These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).
Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates. This slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side to side motion. The single, sharp impact from them on the surrounding rocks is equivalent to the wave you’d get if you thrust your arm forward while holding a slinky. It’s almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).
There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than kilometers of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.
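A toy version of how multiple stations pinpoint a point source: pick the map location whose predicted P-wave arrival times best match the observed ones. The station layout, wave speed, and source below are all invented for illustration; real location methods are far more sophisticated:

```python
import math

P_SPEED = 6.0  # km/s, a typical crustal P-wave speed

# Made-up station positions (km) and a made-up source
stations = [(0, 0), (100, 0), (0, 100), (80, 90)]
true_source = (40, 25)

def travel_time(src, stn):
    return math.dist(src, stn) / P_SPEED

observed = [travel_time(true_source, s) for s in stations]

# Grid-search the location that minimizes squared arrival-time error
best, best_err = None, float("inf")
for x in range(0, 101):
    for y in range(0, 101):
        err = sum((travel_time((x, y), s) - t) ** 2
                  for s, t in zip(stations, observed))
        if err < best_err:
            best, best_err = (x, y), err

print(best)  # -> (40, 25)
```

With enough stations, the residuals for a point source collapse to nearly zero at a single cell; for an earthquake along a fault, no single point fits all the arrival times that cleanly.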
The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through this data for any events that are overwhelmingly P-Waves and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on nuclear tests.
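Those yield estimates typically come from empirical magnitude–yield relations. The coefficients below are one commonly quoted form; they vary considerably with the geology of the test site, so treat this as an order-of-magnitude sketch rather than the formula any particular agency uses:

```python
import math

def magnitude_from_yield(yield_kt, a=4.45, b=0.75):
    """One commonly quoted form: m_b = a + b * log10(yield in kt).
    The coefficients a and b are site-dependent; these are illustrative."""
    return a + b * math.log10(yield_kt)

def yield_from_magnitude(m_b, a=4.45, b=0.75):
    """Invert the relation to estimate yield from body-wave magnitude."""
    return 10 ** ((m_b - a) / b)

# The magnitude 4.1 event mentioned above comes out sub-kiloton,
# consistent with estimates of the 2006 North Korean test
print(f"{yield_from_magnitude(4.1):.2f} kt")
```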
Space-Based Monitoring
This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.
The United States began its space-based monitoring program in response to the Partial Test Ban Treaty (also known as the Limited Test Ban Treaty), which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not fully in force yet (hence why the organization ensuring compliance with it is called the “Preparatory Commission”).
The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, electromagnetic pulse detectors (which can detect the electro-magnetic pulse from high-altitude nuclear detonations) and an optical sensor called a bhangmeter.
Bhangmeters (the name is a reference to a strain of marijuana, with the implied subtext that you’d have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear “double flash”, caused when the air compressed in a nuclear blast briefly obscures the central fireball.
The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.
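Glasstone and Dolan’s *The Effects of Nuclear Weapons* gives a rule of thumb for this timing at sea level: the second thermal maximum arrives at roughly 0.032 × W^0.49 seconds for a yield of W kilotons. The sketch below inverts that rule to estimate yield from the flash timing; the exact exponent and constant depend on burst conditions, so take it as illustrative:

```python
def time_to_second_maximum(yield_kt):
    """Approximate time to the second thermal maximum for a sea-level
    burst: t ~ 0.032 * W^0.49 seconds, W in kilotons (Glasstone & Dolan)."""
    return 0.032 * yield_kt ** 0.49

def yield_from_flash(t_seconds):
    """What a bhangmeter's timer effectively computes: invert the rule."""
    return (t_seconds / 0.032) ** (1 / 0.49)

# A 20 kt burst reaches its second thermal maximum after roughly...
t = time_to_second_maximum(20)
print(f"{t * 1000:.0f} ms")
print(f"Estimated yield: {yield_from_flash(t):.0f} kt")  # inverts back to 20
```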
The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.
There’s only been one possible false positive since the bhangmeters went live in 1967; a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proven to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.
By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.
The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.
One last piece of historical trivia: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovas and merging binary stars.
Hydroacoustic Monitoring

Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.
There are actually two types of hydroacoustic monitoring. There are six stations that use true underwater monitoring with triplets of hydrophones (so that signal direction can be determined via triangulation), which are very sensitive but also very expensive (as hydrophones must be installed at a depth of approximately one kilometre, where sound transmission is best). There are also five land-based stations, which use seismographs on steeply sloped islands to detect the seismic waves underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanos, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn’t lack for techniques to refine its estimates of events.
Infrasound Monitoring

Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and a host of other things, like volcanos, planes, and mining.
A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and blocked from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are), more infrasound arrays are needed.
The components of the infrasound arrays look very weird.
What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.
Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.
Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.
The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.
This post, like many of the posts in my nuclear weapon series came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.
It can be hard to grasp that radio waves, deadly radiation, and the light we can see are all the same thing. How can electromagnetic (EM) radiation – photons – sometimes penetrate walls and sometimes not? How can some forms of EM radiation be perfectly safe and others damage our DNA? How can radio waves travel so much further than gamma rays in air, but no further through concrete?
It all comes down to wavelength. But before we get into that, we should at least take a glance at what EM radiation really is.
Electromagnetic radiation takes the form of two orthogonal waves. In one direction, you have an oscillating magnetic field. In the other, an oscillating electric field. Both of these fields are orthogonal to the direction of travel.
These oscillations take a certain amount of time to complete, which you can measure by watching the peak value of one of the fields and timing how long the field takes to return to that value. Luckily, we only need to do this once, because the time an oscillation takes (called the period) will stay the same unless acted on by something external. You can invert the period to get the frequency – the number of oscillations that occur in a second. Frequency uses the unit Hertz, which is just inverted seconds. If something has a frequency of 60Hz, it happens 60 times per second.
EM radiation has another nifty property: it always travels at the same speed, a speed commonly called “the speed of light” (even when applied to EM radiation that isn’t light). When you know the speed of an oscillating wave and the amount of time it takes for the wave to oscillate, you can calculate the wavelength. Scientists like to do this because the wavelength gives us a lot of information about how radiation will interact with the world. It is common practice to represent wavelength with the Greek letter lambda (λ).
Put in a more mathy way: if you have an event that occurs with frequency f to something travelling at velocity v, the event will have a spatial periodicity λ (our trusty wavelength) equal to v / f. For example, if you have a sound that oscillates at 34Hz (equivalent to the lowest C♯ on a standard piano) travelling at 340m/s (the speed of sound in air), it will have a wavelength of (340m/s)/(34s⁻¹) = 10m. I’m using sound here so we can use reasonably sized numbers, but the results are equally applicable to light or other forms of EM radiation.
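The relation is a one-liner, and it works the same for sound and for EM radiation (just swap in the speed of light, roughly 3×10⁸ m/s):

```python
def wavelength(speed, frequency):
    """Spatial periodicity: lambda = v / f."""
    return speed / frequency

# The sound example from the text: 34 Hz at 340 m/s
print(wavelength(340, 34))       # -> 10.0 (metres)

# The same relation for EM radiation: a 100 MHz FM radio wave
print(wavelength(3e8, 100e6))    # -> 3.0 (metres)
```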
Wavelength and frequency are inversely related to each other. The higher the frequency of something, the smaller its wavelength. The longer the wavelength, the lower the frequency. I’m used to people describing EM radiation in terms of frequency when they’re talking about energy (the quicker something is vibrating, the more energy it has) and wavelength when talking about what it will interact with (the subject of the rest of this post).
With all that background out of the way, we can actually “look” at electromagnetic radiation and understand what we’re seeing.
Wavelength is very important. You know those big TV antennas houses used to have?
Turns out that they’re about the same size as the wavelength of television signals. The antenna on a car? About the same size as the radio waves it picks up. Those big radio telescopes in the desert? Same size as the extrasolar radio waves they hope to pick up.
Even things we don’t normally think of as antennas can act like them. The rod and cone cells in your eyes act as antennas for the light of this very blog post. Chains of protein or water molecules act as antennas for microwave radiation, often with delicious results. The bases in your DNA act as antennas for UV light, often with disastrous results.
These are just a few examples, not an exhaustive list. For something to be able to interact with EM radiation, you just need an appropriately sized system of electrons (or electrical system; the two terms imply each other). You get this system of electrons more or less for free with metal. In a metal, all of the electrons are delocalized, making the whole length of a metal object one big electrical system. This is why the antennas in our phones or on our houses are made of metal. It isn’t just metal that can have this property though. Organic substances can have appropriately sized systems of delocalized electrons via double bonding.
EM radiation can’t really interact with things that aren’t the same size as its wavelength. Interaction with EM radiation takes the form of the electric or magnetic field of a photon altering the electric or magnetic field of the substance being interacted with. This happens much more readily when the fields are of approximately similar size. When the fields are the same size, you get an opportunity for resonance, which dramatically decreases the loss in the interaction. Losses for dissimilarly sized electric fields are so high that you can assume (as a first approximation) that they don’t really interact.
In practical terms, this means that a long metal rod might heat up if exposed to a lot of radio waves (wavelengths for radio waves vary from 1mm to 100km; many are a few metres long due to the ease of making antennas in that size) because it has a single electrical system that is the right size to absorb energy from the radio waves. A similarly sized person will not heat up, because there is no single part of them that is a unified electrical system the same size as the radio waves.
Microwaves (with wavelengths on the order of centimetres) might heat up your food, but they won’t damage your DNA (nanometres in width). They’re much larger than individual DNA molecules. Microwaves are no more capable of interacting with your DNA than a giant would be of picking up a single grain of rice. Microwaves can hurt cells or tissues, but they’re incapable of hurting your DNA and leaving the rest of the cell intact. They’re just too big. Because of this, there is no cancer risk from microwave exposure (whatever paranoid hippies might say).
Gamma rays do present a cancer risk. They have a wavelength (about 10 picometres) that is similar in size to electrons. This means that they can be absorbed by the electrons in your DNA, which kick these electrons out of their homes, leading to chemical reactions that change your DNA and can ultimately lead to cancer.
Wavelength explains how gamma rays can penetrate concrete (they’re actually so small that they miss most of the mass of concrete, only occasionally hitting electrons and stopping) and how radio waves penetrate concrete (they’re so large that you need a large amount of concrete before they’re able to interact with it and be stopped). Gamma rays are stopped by the air because air contains electrons (albeit sparsely) that they can hit and be stopped by. Radio waves are much too large for this to be a possibility.
When you’re worried about a certain type of EM radiation causing cancer, all you have to do is look at its wavelength. Any wavelength smaller than that of ultraviolet light (about 400nm) is small enough to interact with DNA in a meaningful way. Anything larger is unable to really interact with DNA and is therefore safe.
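The rule of thumb in this section fits in one function. The 400nm cutoff is the one from the text, so this is the post’s simplified model, not medical advice:

```python
UV_CUTOFF_M = 400e-9  # ~400 nm, the long-wavelength edge of ultraviolet

def can_interact_with_dna(wavelength_m):
    """The post's heuristic: only wavelengths at or below the UV cutoff
    are small enough to interact with DNA in a meaningful way."""
    return wavelength_m <= UV_CUTOFF_M

print(can_interact_with_dna(12.2e-2))  # microwave oven (~12 cm) -> False
print(can_interact_with_dna(500e-9))   # green light (~500 nm)   -> False
print(can_interact_with_dna(10e-12))   # gamma ray (~10 pm)      -> True
```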
Epistemic Status: Model. Looking at everything as an antenna will help you understand why EM radiation interacts with the physical world the way it does, but there is a lot of hidden complexity here. For example, eyes are far from directly analogous to antennas in their mechanism of action, even if they are sized appropriately to be antennas for light. It’s also true that at the extreme ends of photon energy, interactions are based more on energy than on size. I’ve omitted this in order to write something that isn’t entirely caveats, but be aware that it occurs.
You may have heard that the speed of light changes in different substances. Tables will tell you that the speed of light in water is only about ¾ of the speed of light in air or vacuum and that the speed of light in glass is slower still. This isn’t technically true. The speed of light is (as far as we know) cosmically invariant – light travels the same speed everywhere in the galaxy. That said, the amount of time light takes to travel between two points can vary based on how many collisions and redirections it is likely to get into between those points. It’s the difference between how long it takes a pinball to make its way across a pinball table when it hits nothing and how long it takes when it hits every single bumper and obstacle.
This is a first approximation of what is going on. Eyes can be modelled as antennas for the right wavelength of EM radiation, but this ignores a whole lot of chemistry and biophysics.
The smaller the wavelength, the easier it is to find an appropriately sized system of electrons. When your wavelength is the size of a double bond (0.133nm), you’ll be able to interact with anything that has a double bond. Even smaller wavelengths have even more options for interactions – a wavelength that is well sized for an electron will interact with anything that has an electron (approximately everything).
This interaction is actually governed by quantum mechanical tunneling. Whenever a form of EM radiation “tries” to cross a barrier larger than its wavelength, it will be attenuated by the barrier. The equation that describes the probability distribution of a particle (the photons that make up EM radiation are both waves and particles, so we can use particle equations for them) is approximately ψ(x) = e^(ikx) (I say approximately because I’ve simplified all the constants into a single term, k), which becomes ψ(x) = e^(−k₁x) (here I’m using k₁ to imply that the constant will be different), the equation for exponential decay, when the energy (to a first approximation, length) of the substance is higher than the energy (read: size of wavelength) of the light.
This equation shows that there can be some probability – occasionally even a high probability – of the particle existing on the other side of a barrier. All you need for a particle to traverse a barrier is an appropriately small barrier.
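The standard textbook version of this footnote’s exponential decay is the square-barrier transmission estimate T ≈ e^(−2κL), with κ = √(2m(V − E))/ħ. Here it is for an electron (the barrier heights and widths below are arbitrary illustrative numbers):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def transmission(barrier_ev, energy_ev, width_m, mass=M_E):
    """Rough tunneling probability T ~ exp(-2*kappa*L) through a square
    barrier -- the footnote's exponential-decay regime."""
    if energy_ev >= barrier_ev:
        return 1.0  # classically allowed; no tunneling needed
    kappa = math.sqrt(2 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron 1 eV below the top of a barrier:
print(transmission(5.0, 4.0, 1e-10))  # 0.1 nm barrier: decent odds of getting through
print(transmission(5.0, 4.0, 1e-9))   # 10x wider: dramatically suppressed
```

The tenfold increase in width costs many orders of magnitude of transmission probability, which is the “appropriately small barrier” requirement in numbers.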