Detecting sound from images

So, now I can start contemplating the analysis of my data.

The whole point of this study is to detect oscillations on the Sun. Translated: I’m trying to extract sound from images.

It sounds complicated, but in essence all I need to do is detect tiny changes in a series of pictures, and those changes will give me information about the hidden waveform. Because sound is a wave.

Imagine there is a wave hidden in a series of pictures. When you analyze a pixel where that wave is hidden, the intensity of the pixel will change over time, giving you information about the hidden wave.

[Figure: a simple hand-drawn wave]

Let us assume this is the wave you’re trying to detect. The wave is traveling toward the camera, making a pixel of the image change.
So when you take a series of images of the area that carries the sound, you will have the necessary information to reconstruct it.
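In practice (a minimal sketch with made-up names, nothing like my actual pipeline yet), if the images are stacked into a 3-D array, the record of one pixel through time is just a single slice:

import numpy as np

# hypothetical stack of frames: (n_frames, height, width)
stack = np.random.rand(100, 512, 512)  # stand-in for real solar images

# the intensity of the pixel at row y, column x, across all frames;
# this 1-D series is where a hidden wave would show itself
y, x = 256, 256
light_curve = stack[:, y, x]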


[Figure: the wave crossed by green vertical lines, each line one camera frame]

And the principle is simple. In this image, the green vertical lines are pictures taken with the camera over some period of time. Yes, I know that a series of images in time is a video, but since I use a time resolution that does not match standard video time resolution, I will not call it a video. (Sorry for the clumsy drawing, I’m not really good at it.)


[Figure: sampled points at the intersections of the wave and the frames]

Then at each intersection of the wave and a frame, we get information about the wave. The more points you sample along the wave, the more accurately you can reconstruct it; to recover a wave at all, you have to sample it at least twice per period (the Nyquist criterion). That’s why different time resolutions are used. See, different waves oscillate at different frequencies, meaning you need a different number of images, taken at a different cadence, to detect waves of different frequencies.

In the end, you’ll have a set of data points describing how the pixel changes along the wave. And from there it is easy to reconstruct the original wave, thanks to some wonderful math. There are well-developed methods for this, methods that I will use later.
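One of those methods (and, I assume, the kind of wonderful math that will come up later) is the Fourier transform, which turns a series of samples into the frequencies hiding inside it. A toy sketch, assuming a 12-second cadence like AIA’s and a made-up 5-minute wave:

import numpy as np

cadence = 12.0                 # seconds between frames, like AIA
t = np.arange(300) * cadence   # 300 frames = one hour of observations

# toy 'light curve': a 5-minute (300 s) oscillation buried in noise
signal = np.sin(2 * np.pi * t / 300.0) + 0.3 * np.random.randn(t.size)

# the Fourier transform turns the time series into a spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=cadence)

# the strongest non-zero frequency reveals the hidden wave
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(1.0 / peak, "s period")  # ~300 s, the wave we buried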

[Figure: a typical, more complicated sound waveform]

This principle works even when the wave is far more complicated, as sound waveforms usually are. The only thing you need to make sure of is that your time resolution gives you enough points on the wave to end up with complete information.

Math has tools to get the shape of that wave too.

Anyway, the biggest problem is making sure I detect a true wave and not some false signal.
Since the whole point of this detection is to measure tiny changes at the same location in the image series, the images have to be perfectly co-aligned. I absolutely have to make sure that I observe the same location in the same pixel all through my time series of images.

The Sun does not move much, and when you point the telescope at it and take a bunch of images for hours at a temporal resolution of 12 seconds (that’s what AIA on board SDO does), the images look more or less the same. Especially if you observe the quiet Sun.

But the Sun rotates, and it does not rotate the way Earth does. The Sun is a fluid, which means its surface rotates like a slow-moving river: the edges (the poles) go more slowly, the middle (the equator) goes faster.
So I have to move all images to the same spot. SDO’s instruments observe the whole Sun, meaning that river-like flow of plasma is caught in the data. That’s why you can see sunspots slowly crossing the solar disk.
Well, I cannot allow that movement. And because the rotation speed changes with latitude, I cannot use a single approximate rotation rate, the one students learn in school as ‘the rate of the Sun’s rotation.’ That approach is not precise enough: I might shift pixels too little in one region and too much in another, creating an artificial signal in the process.
I have to use more precise tools that shift each image by the appropriate number of pixels in the direction opposite to the rotation.
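For context, the latitude dependence of the rotation is usually written as ω(φ) = A + B·sin²φ + C·sin⁴φ. A rough sketch with commonly quoted coefficients (an illustration of the effect, not the precise tool I will use):

import numpy as np

def rotation_rate(latitude_deg, A=14.713, B=-2.396, C=-1.787):
    # degrees per day; A, B, C are commonly quoted values
    # (Snodgrass & Ulrich 1990), my assumption for this sketch
    s2 = np.sin(np.radians(latitude_deg)) ** 2
    return A + B * s2 + C * s2 ** 2

print(rotation_rate(0.0))   # equator: ~14.7 deg/day
print(rotation_rate(60.0))  # high latitude: ~11.9 deg/day

A single average rate applied everywhere would be off by a couple of degrees per day at high latitudes, which is exactly the kind of error that smears pixels and fakes a signal.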
As I mentioned in a previous blog post (getting the data), there is a whole Python module developed for analyzing the Sun. And that module has a function that should co-align images by correlating them. But when I tried to run it on my MacBook Pro, it did not work.
I went through the code (publicly shared on GitHub) and found out that the basis of the shift calculation is a function called match_template from the Python module skimage (scikit-image).

So I tried this particular function on just two of my images to see what happens. Unfortunately, because I fed it two images of the same size, I got a 1×1 array as a result: match_template returns an array whose shape is the image size minus the template size plus one along each axis, so the template has to be smaller than the image.
This is a problem. The result should be more than one pixel, because the position of the peak in that resulting little image is what is used to calculate the shift between the input images. Calculating a shift from just one pixel is impossible. In essence, run this way, the function ‘claims’ that the images do not move with respect to each other. And that is not true; I know the Sun moves.
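To show what I mean, here is a sketch of how the function is meant to be used, with synthetic data rather than my real images: the template is cut smaller than the frame it is matched against, and then the correlation surface actually has a peak to find.

import numpy as np
from skimage.feature import match_template

# two synthetic 'frames': the second is the first shifted by (3, 5) pixels
frame1 = np.random.rand(200, 200)
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))

# cut a smaller template out of the middle of the first frame...
template = frame1[50:150, 50:150]

# ...and correlate it with the second frame; the result is 101x101, not 1x1
result = match_template(frame2, template)

# the peak of the correlation surface marks the template's new position
peak = np.unravel_index(np.argmax(result), result.shape)
print(np.array(peak) - np.array([50, 50]))  # ~[3 5], the shift we applied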

My time with computer scientists taught me that the next step should be asking a question on Stack Overflow. I did, on Feb 21. Until today there was no answer, so today I posted my own.

So the next step is to write my own code that will do what I need.
