« Posts by MadRocketSci

Random Observation

Not intended to be entirely novel or surprising, but has anyone noticed the fairly stark difference between a little human girl with her cute fuzzy toys and a girl dog (like my parents’ dog Honey) with her cute fuzzy toys?

Little Girl: It’s cute and fuzzy and I want to nurture it.
Little Dog: It’s cute and fuzzy and IT NEEDS TO DIE! (shake-shake-riiiip-waggle-waggle)

Distinguishability Quibble

(Disclaimer – there may be some better reason for this that I just haven’t come across yet. I am still learning this stuff, and may end up being wrong. Even so … )

The issue of explanations of distinguishability with respect to particles is something that bugs me to no end. I cannot recommend enough the following paper by Jaynes [1], which as far as I can tell is one of the only things I’ve read that gets this *right*.

[1] Jaynes, Edwin T. “The Gibbs Paradox.” Maximum Entropy and Bayesian Methods, 1992, 1–22.

Eliezer Yudkowsky has a series explaining the basics of what must be going on with quantum physics from a wavefunction-realist perspective (which is my own perspective as well), but I’ve been seriously wondering lately whether quantum indistinguishability and its supposed proofs suffer from the same problems/sloppiness as classical indistinguishability.

Here is his post: http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/

Here is my comment/question/proposition:

I have a counter-hypothesis: If the universe did distinguish between photons, but we didn’t have any tests which could distinguish between photons, what this physically means is that our measuring devices, in their quantum-to-classical transitions (yes, I know this is a perception thing in MWI), are what is adding the amplitudes before taking the squared modulus. Our measurers can’t distinguish, which is why we can get away with representing the hidden “true wavefunction” (or object carrying similar information) with a symmetric wavefunction. If we invented a measurement device which was capable of distinguishing photons, this would mean that photon A and photon B striking it would dump amplitude into distinct states in the device rather than the same state, and we would no longer be able to represent the photon field with a symmetric wavefunction if we wanted to make predictions.
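A toy calculation may make the distinction concrete. This is an illustrative sketch (the amplitudes here are made up): if the device cannot distinguish which photon arrived, the amplitudes into the shared detector state add before the squared modulus is taken; if it can, the photons dump amplitude into distinct device states and the squared moduli add instead.

```python
import cmath

# Two indistinguishable paths into the SAME detector state: amplitudes add
# before taking the squared modulus, so they can interfere. If the detector
# could tell the photons apart, they would land in distinct device states
# and the probabilities (squared moduli) would add instead.
# The amplitudes below are illustrative, chosen to interfere destructively.
psi_a = cmath.exp(1j * 0.0) / 2 ** 0.5       # amplitude via photon A
psi_b = cmath.exp(1j * cmath.pi) / 2 ** 0.5  # amplitude via photon B

p_indistinguishable = abs(psi_a + psi_b) ** 2          # interference -> ~0
p_distinguishable = abs(psi_a) ** 2 + abs(psi_b) ** 2  # no interference -> 1

print(p_indistinguishable, p_distinguishable)
```

Whether the amplitudes or the probabilities add is exactly the empirical difference between a measuring device that can and cannot distinguish the two photons.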

I think quantum physicists here are making the same mistake that led to the Gibbs paradox in classical physics. Of course, my textbook on classical thermodynamics tried to sweep the Gibbs paradox under the quantum rug, and completely missed the point of what it was telling us about the subjective nature of classical entropy. Quantum physics is another deterministic, reversible state machine, so I don’t see why it is different in principle from a “classical world”.

While it is true that a wavefunction or something very much like it must be what the universe is using to do its thing (is the territory), it isn’t necessarily true that our wavefunction (the one in our heads that we are using to explain the set of measurements we can make) contains the same information. It could be a projection of some sort, limited by what our devices can distinguish. This is a not-in-principle-complete map of the territory.

PS – not that I’m holding my breath that we’ll invent a device that can distinguish between “electron isotopes” or other particles (their properties are very regular so far), but it’s important to understand what is in principle possible so your mind doesn’t break if we someday end up doing just that.

PPS – I really like the comment by Dmytry:
Well, in principle, it can happen that two particles would obey this statistics, and be different in some subtle way, and the statistics would be broken if that subtle difference is allowed to interact with the environment, but not before. I think you can see how it can happen under MWI. Statistics is affected not by whenever particles are ‘truly identical’ but by whenever they would have interacted with you in identical way so far (including interactions with environment – you don’t have to actually measure this – hitting the wall works just fine).

Furthermore, two electrons are not identical because they are in different positions and/or have different spins (‘are in different states’). One got to very carefully define what ‘two electrons’ mean. The language is made for discussing real world items, and has a lot of built in assumptions, that do not hold in QM.

edit: QFT is a good way to see it. A particle is a set of coupled excitations in fields. Particle can be coupled interaction of excitations in fields A B C D … and the other can be A B C D E where the E makes very little difference. E.g. protons and neutrons, are very similar except for the charge. Under interactions that don’t distinguish E, the particles behave as if they got statistics as if they were identical.

Random thoughts about non-existence proofs

A placeholder for when I have time to write about this…

Quantum Telescope Update

So, I was thinking a bit about how this is supposed to work. Also, I started reading the academic paper that Kellerer wrote on the topic:

http://www.aanda.org/articles/aa/pdf/2014/01/aa22665-13.pdf

She’s not messing with the diffraction limit of the photons, cloned or otherwise. However, if the photons come in one at a time, then you can see each Airy disc produced by the cloned photons independently of any other photons from the star – and an image of the entire disc (N photons drawn from the disc distribution), not just a single point drawn at random from it. This deconvolves everything in the image. You can then take some average or moment of each disc to recover its center with greater accuracy, improving your angular resolution.
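The accuracy gain from averaging can be sketched numerically. This is an illustrative toy model of my own (a Gaussian stands in for the Airy disc, and all the numbers are arbitrary): the centroid of N draws from the spot distribution recovers the center with error roughly sigma/sqrt(N), finer than the single-photon spot size.

```python
import random
import statistics

# Toy centroiding model: each cloned-photon bundle samples the (here
# Gaussian-approximated) spot distribution around the star's true
# direction. Averaging N samples recovers the centre with RMS error
# ~ sigma/sqrt(N). All numbers are illustrative.
random.seed(0)
sigma = 1.0        # spot size set by the diffraction limit
true_center = 0.0

def centroid_error(n_photons, n_trials=2000):
    """RMS error of the mean of n_photons draws from the spot distribution."""
    sq_errs = []
    for _ in range(n_trials):
        mean = statistics.fmean(
            random.gauss(true_center, sigma) for _ in range(n_photons))
        sq_errs.append(mean ** 2)
    return statistics.fmean(sq_errs) ** 0.5

err_1 = centroid_error(1)      # single photon: error ~ sigma
err_100 = centroid_error(100)  # 100 photons: error ~ sigma / 10
print(err_1, err_100)
```

This is only the statistical part of the argument; whether the cloned photons actually deliver independent draws from the same disc is the physics question the paper addresses.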

This could be a very big advance in astronomy. For years, what we could resolve was limited by the aperture of the optical system. To resolve something like an extrasolar planet, baselines or primary mirror diameters of hundreds of miles are needed – the sort of baselines that were planned for telescope constellations such as Terrestrial Planet Finder.

If we could increase the angular resolution of a telescope by some multiplicative factor, we might someday be able to image extrasolar planets with mirrors that aren’t too big to be physically feasible, or with single telescopes, without the need to orchestrate large distributed-aperture constellations.

Quantum Telescope

This is an interesting idea:

http://physicsworld.com/cws/article/news/2014/apr/29/quantum-telescope-could-make-giant-mirrors-obsolete

However, I’m having a hard time figuring out how it is supposed to work. I understand the resolution limit of telescopes in classical terms: given a wavefront that gets windowed by an aperture, you’ve cut out the longest wavelengths in the Fourier transform of the wavefront passing through the aperture (or reflecting off the primary mirror), which limits the spot size that it can focus down to at the focal plane.

The same sort of logic should apply to single photons. If you have a photon from a distant star, it’s spread out into a giant wavefront by the time it gets to your primary mirror; the portion of the photon that reflects off the primary mirror is windowed, and can only focus down to a spot of finite size on your detector (which pixels/entangled superposition of pixels it ends up exciting then becomes a matter of $INTERPRETATION), but N such photons will light up an Airy disc of finite size on your detector.
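The windowing argument can be checked with a toy 1-D calculation (my own sketch; the grid size and units are arbitrary): the far-field pattern is the Fourier transform of the aperture window, and for a slit of L samples on an N-sample grid the first null of the diffraction pattern sits at spatial-frequency index k = N/L, so a wider aperture focuses to a narrower spot.

```python
import cmath

# 1-D Fraunhofer toy model: the focused spot is the Fourier transform of
# the aperture window, so widening the aperture narrows the spot.
N = 512  # samples across the wavefront (illustrative)

def first_null(aperture_samples):
    """Smallest spatial-frequency index where the slit's diffraction
    pattern (the DFT of a rect window) has a null; analytically k = N/L."""
    for k in range(1, N // 2):
        amp = sum(cmath.exp(-2j * cmath.pi * k * i / N)
                  for i in range(aperture_samples))
        if abs(amp) ** 2 < 1e-9:   # numerically zero -> a null of the pattern
            return k
    return N // 2

narrow_aperture_null = first_null(64)   # small aperture -> wide spot, null at k = 8
wide_aperture_null = first_null(256)    # large aperture -> narrow spot, null at k = 2
print(narrow_aperture_null, wide_aperture_null)
```

The same transform applies to each single-photon wavefront, which is why it isn’t obvious that adding cloned photons (which share that wavefront) changes the spot size at all.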

So if you pass the photons through a gain medium (such as for a laser) prior to them hitting your primary mirror, you get N photons with the same wavefront description as in the classical case passing through your detector. I can understand why you would get a brighter image from this (more photon counts hitting the detector within the Airy disc), but not why the image should have a finer resolution.

Paramagnetism Visualization

The reason why most substances are weakly paramagnetic (tend to strengthen the applied magnetic field) is that their electrons, being bound up in current loops about massive atoms, experience a torque when exposed to an externally applied magnetic field. This torque aligns the atomic magnetic moment with the magnetic field.

In contrast, conductors and plasmas are Larmor-diamagnetic. The charge carriers (electrons in metals, electrons and ions in plasmas) can move freely, so the magnetic field exerting forces on them bends them into current loops with oppositely directed magnetic moments. (There is also Langevin diamagnetism, where the applied field induces a precession of the orientation of a bound electron orbital, leading to a current, and hence to a magnetic moment against the applied field.)

(Also, there is ferromagnetism, in which a substance is very strongly and nonlinearly paramagnetic. I don’t have an extremely solid grasp on how this works yet, but it seems the bound charge carriers in the atoms (of which many are unpaired) must be overcoming the diamagnetism induced by the free charge carriers in the conduction band.)

One of the things about paramagnetism that bugged me was this: if you have a paramagnetic gas like oxygen, where the atomic electron shells are widely separated, or any atomic system where the size of the relevant current loops is far smaller than the interatomic spacing, then it isn’t obvious that a bunch of magnetic dipoles aligned with the B field would produce induced fields which reinforce the applied field inside the substance.

One argument you could make is that:
1. far from the dipoles, outside the boundaries of the paramagnetic substance, the net action of all the magnetic dipoles produces a field opposing the applied field.
2. B fields are solenoidal (unless you have magnetic monopoles floating around in your substance, in which case there’s a bunch of guys in Sweden who might be looking for you….)
3. Therefore the average induced field in the substance must reinforce the applied field.

Or, you could, like me, want to see it. I like being able to see things, with my mind’s eye or otherwise. This MATLAB script produces a visualization of the induced field of a bunch of randomly distributed dipoles aligned along the +y axis, and of the regions in which the +y component is greater than 0.

It turns out that the fields of individual dipoles tend to link up in the body of the substance, pushing the region where they oppose the applied field outside of the boundaries of the substance.

main script: Dipolefield.m
subroutine: infdipole.m

[Figure: Bygt0 – regions where the induced B_y component is greater than 0]

[Figure: internalfield1 – the induced field inside the dipole distribution]

Best explanation of Bitcoin that I have read

This is an outstanding explanation of modern cryptocurrency in general and Bitcoin in particular. If you are curious about this absolutely brilliant invention that will probably end up changing the financial world, read about it here:


http://www.michaelnielsen.org/ddi/how-the-bitcoin-protocol-actually-works/

Subaru Telescope discovers water vapor atmosphere around super-earth type planet, 40 LY distant

The Subaru Telescope, a ground-based 8.2-meter optical-infrared telescope at Mauna Kea, has made observations of the “super-Earth” planet GJ 1214 b, orbiting Gliese 1214, a red dwarf star approximately 40 LY distant.

The telescope has observed how the planet’s atmosphere scatters the light of the parent star during its transit across the stellar disc. From this, details about the atmosphere of the planet can be inferred, based on the Rayleigh scattering expected for atmospheres of different composition. The Rayleigh scattering observed is more characteristic of a dense atmosphere of water vapor than of a thick hydrogen atmosphere.

In the solar system, planets divide neatly into two categories. Terrestrial planets are small, dense, and solid, with thin atmospheres; they never had the mass needed to retain hydrogen during planetary formation, so the thick hydrogen atmosphere characteristic of the gas giants was blown off by the solar wind, or lost (or never condensed) due to thermal motion early in their history. The gas giants had enough mass, relative to their position, to retain hydrogen and built up extremely deep hydrogen atmospheres; they are large, less dense overall, and mostly light gas.

Observations of exoplanets, though, show that there are many planets with masses and diameters between those of Earth, the largest terrestrial planet of the Solar System, and the smallest gas giants (the ice giants of the outer solar system). It is unknown what these planets are like – whether they are more like terrestrial planets or like very small gas giants. Other than diameter and orbital characteristics, it is hard to learn anything about these exoplanets directly with our current detection methods.

Gliese 1214 b is one such super-Earth, with 2.4 times the radius and an estimated 6.5 times the mass of our planet. Depending on how much light is reflected, it could have equilibrium temperatures between 120 and 282 °C. While this planet is probably too hot for Earth life, it is not so hot that it is impossible for some form of molecular biology to exist. Hydrothermal vent creatures have been known to live at temperatures up to 80 °C.

http://subarutelescope.org/Pressrelease/2013/09/03/index.html
http://en.wikipedia.org/wiki/GJ_1214_b

New Nova

For those of you who can see the sky, (as opposed to a bunch of clouds), this might be something to look for:

Apparently a star has gone nova.

Bright New Nova In Delphinus — You can See it Tonight With Binoculars

Privacy Tools: How to use the GnuPG encryption software

Here is a tutorial that I wrote for a particularly useful piece of encryption software. GNU Privacy Guard (GPG) allows you to use asymmetric encryption to communicate securely.

Privacy Tools: How to use the GnuPG encryption software

One thing about modern cryptography that many people don’t realize is that it is possible to encrypt communications in such a way that it is literally impossible to decrypt them without a key. Information-theoretically secure schemes such as the one-time pad provide encryption that is quite literally unbreakable: without the key, the plaintext is uncorrelated with the ciphertext – the information literally does not exist without both the key and the ciphertext together.
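A minimal sketch of the one-time pad illustrates the point (the messages here are just examples): XORing with a truly random key of the same length makes every plaintext of that length equally consistent with a given ciphertext.

```python
import secrets

# One-time pad sketch: XOR the plaintext with a truly random key of equal
# length. Without the key the ciphertext carries no information about the
# plaintext: for ANY candidate plaintext there exists a key that produces
# this same ciphertext. Example messages are illustrative.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(plaintext, key)
recovered = xor_bytes(ciphertext, key)   # the real key decrypts exactly

# An adversary can "decrypt" the same ciphertext to any message of the
# same length by picking a different key -- which is exactly why the
# ciphertext alone proves nothing about the plaintext.
fake = b"retreat at dusk"[:len(plaintext)]
fake_key = xor_bytes(ciphertext, fake)
print(recovered, xor_bytes(ciphertext, fake_key) == fake)
```

The catch, of course, is that the key must be as long as the message, truly random, and never reused – which is why more convenient computationally secure schemes exist.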

There are other, more convenient encryption schemes that are also very secure – in these cases, the amount of computational work required to break the encryption, barring mathematical breakthroughs, is outrageously large. While breaking it is possible in principle, no earthly amount of computing capacity is likely to recover the plaintext without the key. One of these is RSA – a means of sending messages securely using asymmetric-key encryption.

Asymmetric-key encryption works by generating two keys: a public key, which is published for others to use, and a private key, which is kept securely by the owner. The public key can be used by anyone to encrypt a message, which can only be decrypted using the private key. So users, on obtaining your public key, can securely send you communications; if you obtain their public keys, you can then send them messages. All of this can happen in the open, without the need to exchange keys via another secure channel or a private meeting.
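As an illustration of the underlying math (textbook RSA with tiny primes, not the actual GPG implementation, which uses much larger keys plus padding), the asymmetry comes from modular exponentiation:

```python
# Toy RSA: the public key (n, e) encrypts; only the private exponent d
# decrypts. These primes are far too small to be secure -- real keys use
# primes of 1024+ bits. All values here are illustrative.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: d*e = 1 (mod phi)

message = 65                       # must be an integer < n
ciphertext = pow(message, e, n)    # anyone holding (n, e) can do this
decrypted = pow(ciphertext, d, n)  # only the holder of d can undo it
print(ciphertext, decrypted)
```

Publishing (n, e) reveals nothing practical about d, because recovering d requires factoring n – easy here, infeasible at real key sizes.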

Individual average everyday computer owners and users actually have a great deal of power to secure the privacy of their communications if they want to take the time to do so. It is the intent of this how-to to help educate users on how to use some of these freely available tools to secure their communications and data.

In fact, one of the fascinating things about modern computing is that almost all of the supposedly special high-power encryption/communication/anonymizing tools that are imagined to be the province of l33t hackers and super spies are actually freely available, thanks to the broader open-source community. All it takes to use them is a little time to figure out how they work. The unit cost of software is zero, and all computers are Turing complete. There is nothing that a super-empowered agency can do with their computers that you, in principle, cannot with yours.

(click the link above for more information)