Wednesday, January 29, 2014

Lunar Photography on the Cheap II (gratis/free)

Yesterday I showed how to take a lunar shot with a simple point&shoot camera with an 8x zoom lens. Of course, the trick was that this inexpensive camera is able to shoot in RAW (thanks to CHDK).

Towards the end of the post, I mentioned that I used commercial software, namely Photomatix, to deal with the "develop" part of things. Further, I stated that this process would potentially be possible with "free" software.

Today, I gave it a try with Luminance HDR (version 2.3.1), which is free software, to my knowledge.


Preparation

Luminance HDR asks for bracketed frames. Well, with my single shot, I only have one frame. So I used a very common HDR trick, namely creating more frames, differently "exposed", in software. Mind you, I am shooting DNG (digital negative).
Here is how this trick is done (using free software, by the way):
  • open the shot in RawTherapee
  • go to the EXPOSURE menu
  • click on 'Neutral' (this should bring everything to default values)
  • export the image using TIFF-16bit (this is your 0Ev shot)
  • drop the exposure to -2Ev
  • export the image using TIFF-16bit (this is obviously your -2Ev shot)
  • raise the exposure to +2Ev
  • export the image using TIFF-16bit (this is obviously your +2Ev shot)
Done! You now have 3 frames of identical size and format with different exposure values. That's what HDR software likes! (If you prefer to script this step, see the little sketch below.)
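Here is a minimal Python sketch of the same exposure trick; it assumes the rawpy and tifffile packages and a made-up file name moon.dng, and it is not what RawTherapee does internally:

    import rawpy
    import tifffile

    # Develop one DNG three times with different exposure shifts (-2, 0, +2 Ev).
    # In rawpy, exp_shift is a linear factor: 0.25 = -2 Ev, 1.0 = 0 Ev, 4.0 = +2 Ev.
    for ev in (-2, 0, 2):
        with rawpy.imread("moon.dng") as raw:
            rgb = raw.postprocess(exp_shift=2.0 ** ev,
                                  no_auto_bright=True,   # keep the three frames comparable
                                  output_bps=16)         # 16-bit output, like TIFF-16bit
        tifffile.imwrite(f"moon_{ev:+d}ev.tif", rgb)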

The first results in Luminance HDR were not that great. Actually, they were so bad (a lot of grain and noise, over-exposed bits and what not) that I decided not to show them. The software is not easy to use, so I will give it a second try (see below).


Gratis

There is another bit of HDR software available "for free", i.e. gratis. So we are not dealing with free software here; however, one can use this program without paying for a license.
The program is called FDRTools Basic.

Having loaded the 3 frames into FDRTools, the results were better, but still not really satisfying. It turned out that the +2Ev frame was not serving any reasonable purpose, so I excluded it from the process (this is a very nice feature of FDRTools; it is like making a layer invisible in GIMP).
And guess what, the result was instantly much better than yesterday's Photomatix result!
Here is the re-sized output, converted into PNG:
2 frames pseudo HDR using FDRTools Basic


To me, that was a stunning result, coming from gratis software! A lot less noise than in yesterday's attempt.

Still, this photo could be slightly improved in GIMP, using the masking technique I explained in the previous post, i.e. the sky was treated with 'wavelet denoise' and the moon with 'wavelet sharpen'.
after GIMP
In the original 16-megapixel image, the difference is somewhat visible. The scaled-down versions shown above look almost identical to me.


Free

And here is the promised text about the entirely free solution.
Having learned from my experience with FDRTools, I only loaded the -2Ev and the 0Ev frames into Luminance HDR. So far so good, but now the hard part.
Luminance HDR offers a lot of different algorithms to combine the frames. I went for "Profile 1".
Luminance HDR offers even more algorithms to tone-map the image. And this is where it went wrong in the earlier attempts. Having tried all the different options, I selected "Reinhard '02" and pulled 'Key Value' down to 0.01 (none of the other tone-mapping parameters has any effect at this stage). In order to darken the image, I used the 'Adjust Levels' histogram. And voilà, we have a presentable result created by free software.
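For the curious: as far as I understand it, the 'Key Value' slider corresponds to the key parameter of Reinhard's 2002 operator. Here is a minimal numpy sketch of the global version of that operator, my own reading of the paper rather than Luminance HDR's actual code:

    import numpy as np

    def reinhard02(lum, key=0.18, white=None):
        """Global Reinhard 2002 tone mapping of a luminance array (values > 0)."""
        log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average scene luminance
        scaled = key / log_avg * lum                     # 'key' steers the overall brightness
        if white is None:
            white = scaled.max()                         # burn out only the very brightest pixel
        return scaled * (1.0 + scaled / white ** 2) / (1.0 + scaled)

With a key of 0.01 the scaled luminance stays tiny, so hardly anything gets brightened, which is exactly the 'keep it dark' behavior I was after here.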
2 frames pseudo HDR using Luminance HDR
There is still more noise in the image than in the image created by FDRTools. Again, this calls for the GIMP.
Here we go, same technique as described previously... and here is the result:
denoise / sharpen by the GIMP
Yep, the differences are getting really subtle now, which of course speaks for the use of free software!

I hope you enjoyed this little journey from commercial to gratis to free. Personally, I am not sure whether gratis or free won the contest. But certainly the commercial product produced the least favorable result in this particular case. But then, using software of this nature like this is somewhat of an unfair abuse, isn't it?

Tuesday, January 28, 2014

Lunar Photography on the Cheap

Well, this is not the usual thing I would do. However, since those things are possible, I will show 'em to you... and also explain how I did it.

First of all, when doing astro-photography, I would usually use a decent telescope, e.g. an APO refractor w/ a fluorite-glass lens... or a decent reflector with some decent optics.

However, today I show you how you can achieve an acceptable shot of the moon using an inexpensive Canon P&S (point 'n shoot). In my case, it is my trusty IXUS 140 (ELPH 130). Of course, we need to use CHDK in order to shoot in RAW.

The shot was taken in the blue hour (aka. magic hour or golden hour). According to the exif data, the shot was taken at f/6.9, 1/125s, 40.0mm, ISO400 (date: 10.01.2014 @ 16:40).

Here is what the camera thought I was aiming for (JPEG done by the camera, in the hope that blogger does not tweak the image too much):
JPEG as recorded by the camera, scaled and saved as PNG

The same shot, recorded in RAW (DNG) by CHDK, was taken into Photomatix and the GIMP with the following steps performed on the image.

Photomatix:
  • playing with single frame pseudo HDR parameters (tone mapping)
  • tweaking highlights and shadows
  • dropping exposure by some stops
  • increasing contrast
  • etc. 
Actually, for the fun of it, this is the image before I used the GIMP:
as exported from Photomatix, no noise-reduction yet
The GIMP:
  • duplicating the image, creating a second layer
  • creating a layer mask for the first layer (100% opacity)
  • masking out the moon (the first layer now contains clouds only!)
  • reducing the noise in the clouds using wavelet denoise
  • on the lower layer (luna!), increasing sharpness using wavelet sharpen
  • flattening the image
  • exporting to PNG
the result of the process laid out above

Yep, this is the same shot... compare the cloud pattern...
After some tweaks, the daylight shot looks like a night-time photo. Also, some detail (noise!) was added to the moon, while noise (detail) was selectively removed from the clouds.
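For those who would rather script this last step, here is a rough Python/OpenCV equivalent of the layer-mask trick: denoise the sky, sharpen the moon, and blend the two through a soft mask. The file names are made up, and a Gaussian blur / unsharp mask stand in for the wavelet plug-ins, so take it as the idea, not the exact GIMP recipe:

    import cv2
    import numpy as np

    img  = cv2.imread("moon.png").astype(np.float32) / 255.0
    mask = cv2.imread("moon_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # white = moon
    mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]      # soften the edge, like a soft brush

    denoised  = cv2.GaussianBlur(img, (5, 5), 0)                               # stand-in for 'wavelet denoise'
    sharpened = np.clip(img + (img - cv2.GaussianBlur(img, (0, 0), 3)), 0, 1)  # unsharp mask ~ 'wavelet sharpen'

    result = denoised * (1 - mask) + sharpened * mask          # clouds denoised, moon sharpened
    cv2.imwrite("moon_gimped.png", (result * 255).astype(np.uint8))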

There you have it: it does not take pro gear to create a cool(ish) shot of the moon. I happen to have access to Photomatix; however, I feel that you might be able to use some free software to obtain the same effect... maybe even the GIMP!

In comparison, on screen you might like the image before GIMP better... however, I believe that in a print, the GIMPed shot might have an edge.

Tweak your photos and enjoy!

Sunday, January 26, 2014

Shoot - Develop - Enlarge (the modern times)

The title might be misleading. We are not talking old-skool chemical photography here. However, more and more analogies reveal themselves to me.
Let's have a look what is possible today!

You may have seen that many photo-studios show near-live images to their clients. This is called tethering. A raw image is presented on a large screen, just a moment after the shot has been taken.
This is very cool; however, not every amateur setup is able to do that... not every, but some are!

Mine does! And here are the specs which enable me to tether to either my workstation or my laptop.

First of all, the camera. I am using a Canon EOS 350D (aka Rebel XT), a camera I really love! I have thought of replacing it over the years; however, up to now, I have been unable to find any reason to retire this fine piece of technology.

On the computer side of things, 3 main players are to be mentioned. First of all, my trusty GA-Z77N-WiFi i5-3570k running OS-X 10.9.1, which serves as my everyday image-processing machine.
On a secondary basis, I am using an HP ProBook 4540s (i3 SandyBridge), running OS-X 10.8.5.
Speaking "Apple", we are looking at Mavericks and Mountain Lion in terms of operating systems.
And last but not least, my AMD FX-6100 based Debian machine.

The topic was tethering... hence, it would be about time to look into some software allowing tethering of DSLRs.
Of course, the first look is at free software. To be found under the GPL is a product called darktable. Very obviously, the name of this particular software reflects a "darkroom" and a "light table", both being tools of photographers of the chemical age. The name is also reminiscent of "Lightroom" (the name of non-free software by Adobe).

I love darktable! However, it is not as easy to use as one may have hoped. On the positive side, it is free and it enables tethering!

On all OSes I experienced that, as soon as the camera is connected, some sort of software takes over the connection. Fair enough, there might be shots on the memory card which we may want to transfer to the workstation.
However, we are looking into tethering here. So, how do we get darktable to accept the camera for tethering purposes?

Before we start, we have to set the camera into the correct mode. In the settings (one of the wrench menus) it says "PC connection"... that's what we want!

Having connected the camera, we first need to close all the programs which opened when the camera was plugged in. Now, let's wait for a bit! Really, this is mandatory!
As soon as everything settled down a bit, we want to open darktable.
In the "light table" mode, click "scan for devices". The camera should show up, in my case as "Canon Rebel XT". If not, try again!
As soon as the camera is detected correctly, there is an option to "tether". Click on this to start a new tethering session.

This may not work the first time, hence, be patient!
For me, this works perfectly under OS-X 10.9.1, OS-X 10.8.5 and Debian-Linux.
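As a side note, darktable does its tethering through libgphoto2 (as far as I know). If you want to check the camera link outside of darktable, a few lines of Python with the python-gphoto2 bindings will trigger the shutter and pull the file; this is a rough sketch from memory, so treat the exact calls with some suspicion:

    import gphoto2 as gp

    camera = gp.Camera()
    camera.init()                                  # grab the first camera found on USB
    path = camera.capture(gp.GP_CAPTURE_IMAGE)     # fire the shutter
    photo = camera.file_get(path.folder, path.name, gp.GP_FILE_TYPE_NORMAL)
    photo.save(path.name)                          # copy the shot onto the computer
    camera.exit()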

As to the title of this post, this would be the "shoot" part. The part that gets the initial data.

From here on, we need to focus on "post-processing", i.e. developing and "enlarging".

As a side remark, the AMD FX-6100 box, under Debian, packs a punch in post-production.

We are done with the shooting part of the photo. Now, let's move into the develop part of things. Of course, there are some options to do "development work" in darktable; however, I would strongly recommend using RawTherapee for further purposes. Tweak your raws as you would tweak your film in the good old days!

For wrapping things up, the tool of my choice is the GIMP. The functionality of the GIMP is very close to what Photoshop can offer.

This post is not about developing or enlarging; it is about the moment you are shooting, in a studio, with a talent... Tethering will ensure that the talent understands what you, as a photographer, are doing. That is what this post is all about.

=> Set up your studio such that the shots can be seen on a huge screen just after they are taken.



Friday, January 17, 2014

GIMP Your Photos!

To me, photography has several steps. I learned those steps at the beginning of the 1980s, when I was learning B&W photography the classical way.
  • taking the shot by
    - measuring your lighting
    - deciding on aperture and shutter speed
    - framing the picture
  • developing the film, according to the "exposure compensation" (A.A.'s Zone System)
  • enlarging the print by
    - taking exposure test strips
    - thinking of effects and cutting out masks from paper
    - exposing the print and potentially using said paper masks
    - developing & fixing
    - drying the prints
Yes, that was a lot of work to get a photo at that time!

Today we have computers and software like Adobe Photoshop. And there is also GIMP, a pretty powerful replacement for Photoshop. The advantage of GIMP is that not only is it entirely free, it is also available for all popular platforms. I have to admit, it is not the easiest program to use, and the user manual is pretty massive.

Occasionally, I learn a trick, e.g. when people demonstrate the use of Photoshop, which can be transferred to GIMP.

Here is one; it is pretty simple and allows for some artistic improvement of photos. I hope that blogger does not "improve" the images too much (although it ugly-fied the plain image pretty well).

plain shot

after the little trick
Here is the workflow:
  1. duplicate the layer (drag&drop layer icon to the duplicate icon)
  2. set the layer mode to "multiply" (pull down menu on top)
  3. add a layer mask to the top layer (right-click on the top layer icon)
    => select "white"
  4. activate the layer mask (by clicking on the white icon)
  5. select the airbrush tool with a soft brush and adjust the size
  6. be sure the color selected is black (!)
  7. selectively paint your talent with the airbrush 

In the above example, blogger helped me to make the unmodified photo look even more terrible and flat than it originally was.

I decided to add more depth to the statue by bringing the forehead and the near shoulder more to the attention of the viewer by lightening them. This is done by masking out parts of the top layer, removing its darkening influence there.
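If you like to see the arithmetic behind that layer stack, here is a small numpy sketch of what 'multiply plus a painted mask' boils down to; a sketch of the math, not GIMP's code:

    import numpy as np

    def multiply_with_mask(base, mask):
        """base: image as floats in [0, 1]; mask: 1.0 where the multiply layer applies (darken),
        0.0 where it has been painted away (keep the original brightness)."""
        multiplied = base * base          # 'multiply' mode of a layer with its own duplicate
        return base * (1.0 - mask) + multiplied * mask

    # painting with black on the GIMP layer mask corresponds to pushing 'mask' towards 0 there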

You will see, this little trick will help you to make dull photos a lot more interesting.

BTW, in the good old wet-darkroom times, I would have done more or less the same! I would have screened portions of an image which I wanted brighter with a little mask, so that said portions would be less exposed (the test strips are your friends!), creating the effect I was after.

Sunday, January 12, 2014

What Makes a Great Shot?

First of all, I would like to point out that no really great shot is shown here, at least as far as my own contributions are concerned.

Photography can be a form of art. It is not always, we are aware of that, but sometimes it is.

A good photo is certainly defined by good framing, i.e. the choice of scene, which is the artsy part of photography.
In addition to framing, the technical parts play a major role. Is my exposure right (ISO)? What about the depth of field (aperture)? Do I need motion blur (shutter speed)?

However, there is more to a good photo than the above mentioned aspects...

Art is often defined by its period... let's have a look at this great overview by Marco Marilungo (I hope I don't have to pay for linking his work into my blog):
History of Art by Marco Marilungo
Very obviously, the effect of the individual paintings is provided not only by the painting itself, but also by the style (period) and the title given.

In photography, we can't really tweak the style. If you photograph a building, very obviously that would be architectural photography - take an image of a scene in a busy market, and that would be street photography... etc. etc.

However, we photographers still have the title and maybe subtitle to set the mind of the viewer.

Let's have a look at a scene I shot today:
CRW_0388-1.jpg
Yep, the file was named CRW_0388-1.jpg... why is that? I shot in RAW (DNG) with CHDK, and it was the 388th image taken. Further, the "-1" means that I cropped the file in RawTherapee. Interesting, isn't it?! Duh!

Well, let's have a look at the same shot with a different caption:
Bridges between buildings
Does it make a difference? I hear you saying yes. This is somewhat descriptive, and hence we are still not telling a story, cf. "realism" above.

Now let's take this a step further and play with your imagination... (do you know Latin? if not, google is your friend). Same photo:
Quo vadis?
Due to the title (caption), the photo now tells a story! From which floor do I have to start to get to which floor of the other building?! Is there any way I can tell where a particular passage gets me? What am I actually doing in such a building? Where is the restaurant at the end of the universe?!


Besides the psychological part of photography, there is another thing to add: choose a crop that works. The original of this photo was shot in RAW (DNG) at 4:3, which matches the sensor's aspect ratio. Fair enough. However, sometimes the dimensions of your sensor just won't fit the scene in terms of composition.
Have a look at the scaled and jpg converted original:
CRW_388.DNG (not really DNG, since scaled and converted)
See what the problem with this frame is? There is a cut-off statue at the lower left corner, some red sign behind the pillars at the lower right corner... and this ugly purple thing in the lower portion right of the center... this all had to go.
Moving to the upper part of the image: there is a really stupid-looking rectangular thing on top of the building, which has to go. Also, the sky, which was just cloudy, does not show any texture and therefore needed to be minimized.
Further, to emphasize the stretch of the bridges, a wider format came in handy. Hence, I went for a 16:9 format for the final image.

Actually, when I took the shot, I already knew that I wanted to call it "Quo vadis?". Sometimes such ideas come in handy... sometimes the shot does not work out as hoped (this particular one worked out for me).
It may occur that the idea of how to name a cool photograph comes late. Take your time and use your imagination to think about a good title.
In the end, it is the title you give a photograph that influences the mind of the viewer.

Saturday, January 11, 2014

Ghetto HDR-Photography

Why am I calling this "ghetto"? Well, simply said, this is as cheap and as low-profile as you can get... still automated, without any scripts.

Let me first show you my gear:

Canon IXUS 140 on a JOBY GorillaPod


Concerning the original GorillaPod, I figure that this camera, the Canon IXUS 140, is somewhere near the limit of what this tripod can handle easily. However, there are stronger versions available.

The camera is actually not just as it came from the maker; it has CHDK installed.
One can take advantage of CHDK in many ways, scripts and all... cool stuff! One can, however, also go for the very simple and obvious. That's what this post is all about.

Frame a shot as you imagine it from the ground, put the setup on the ground, press the shutter button and wait... that's the whole trick.

To achieve this, the camera and CHDK need to be set into certain modes.

Let's start with CHDK. Set your camera to shoot in RAW or DNG first. Further, in "Advanced Photo Operations", select the menu "Bracketing in Continuous Mode". Here, you want to go for "Tv Bracketing", let's say 2 Ev. Another very important setting in this menu is "Disable Bracketing on Startup"... which for my taste needs to be de-selected (more about this later).

Now to the camera. Put it into record mode by half-pressing the shutter release. Now what you want to do is press the "Func.Set" button and go into the "self timer" menu. Select "Custom Timer". You will be able to enter a timer delay and a number of shots to be taken. For a regular HDR, i.e. 3 shots at -/0/+ 2 Ev, select "3". As for the timer delay, 2 secs will be OK; select more if you want to be sure that your GorillaPod settles down before the shutter opens.

To shoot, just press the shutter release button. Some blinking action will go on at the front of the camera. When the shots according to your settings have been taken, the display will light up and show an image.

Of course you can change the parameters to shoot more than 3 frames in different Ev steps. CHDK and Canon's custom timer leave you a huge playground!
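If you wonder what such a bracket actually does to the shutter, here is a tiny Python sketch; I am assuming that CHDK starts at the metered exposure and then alternates under/over, so check your own camera:

    def bracket_shutter_speeds(base=1/125, n_frames=7, step_ev=1.0):
        """Shutter times for a Tv bracket: 0, -1, +1, -2, +2, ... Ev around 'base'."""
        offsets, k = [0.0], 1
        while len(offsets) < n_frames:
            offsets += [-k * step_ev, +k * step_ev]
            k += 1
        return [base * 2.0 ** ev for ev in offsets[:n_frames]]

    # 7 frames, 1 Ev steps around 1/125 s -> roughly 1/125, 1/250, 1/60, 1/500, 1/30, 1/1000, 1/15
    print(bracket_shutter_speeds())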

In the example, I have taken matters to the extreme. The IXUS 140 was set to shoot 7 frames in 1Ev increments, held by the GorillaPod on the handrail of the little bridge I was standing on.
7 frames HDR, 1Ev increments, DNG
This is the original middle image of the HDR image above.
Regular exposure, DNG
Note the cropping / lens corrections done by the HDR processing. The rightmost tree in the "regularly" exposed frame has been cropped out completely.

3 frames HDR, Canon IXUS 140 (CHDK) on the original GorillaPod
The above shots are probably not the best photos ever created. The purpose of the exercise was to show how cheap you can go with your gear while still being able to achieve decent-quality HDR photos.

BTW: If you own a GorillaPod, whichever model, go have some fun out there!

Summary: I used a Canon IXUS 140 w/ CHDK to shoot in DNG (Digital Negative) on a GorillaPod using the built-in timer. I doubt that there is any way to shoot decent HDR-photos any cheaper than that! Feel free to prove me wrong!


Wednesday, January 8, 2014

New Bag

My pleasure in photography is pretty much based on the fact that, for certain types of photography, you will be outside. Be it architecture, street or landscape photography; even sports will pull you outside most of the time.
However, there is a problem: how to carry the gear w/o being noticed? I struggled a while before buying a bag, since most of the designs are pretty much in-your-face photo stuff. That I don't want for myself.

Finally, I found a relatively inconspicuous bag by Lowepro, the Passport Sling II. The bag, being a little bit asymmetric, gives the look of being just one of those up-to-date day-pack replacements. To me, that was a perfect option for covertly carrying a camera.

Interestingly enough, when I bought the bag I was hoping for exactly this: the asymmetric shape of the bag allows for a small tripod to be inserted.
"A small tripod...", I hear you saying, "... what is that good for?" Well, sometimes size does not matter. Actually, my "small" tripod is a GorillaPod SLR-Zoom; I figure that even the beefiest of the models, the Focus, will fit in the bag.

Are there downsides to the bag? Yes, there are!
- The lower half of the inner bag is essentially made from liner material. It would have been nice if Cordura-like outer material had been used for the entire inner bag. (More seams = more problems, in particular with weaker material.)
- There are 2 little compartments inside the inner bag which have a Velcro latch. Those compartments look pretty useless... I would have wished for a single bigger compartment with a zipper, e.g. for keeping additional SD cards and batteries from falling out.

All in all, it is the bag I was looking for. I hope it will serve me for a while.

Tuesday, January 7, 2014

Shutter Release Cable for Canon Cameras

Some weird coincidence got me into buying a Canon DSLR years ago. The EOS 350D, aka Digital Rebel XT, was top notch at that time... I still love it today.
Today, I am happily choosing Canon cameras for CHDK, Magic Lantern and BackyardEOS, but that is an entirely different story.

Luck had it that Canon made it very easy to build a remote shutter release cable. The internet is full of decent descriptions. Yet again, I would like to share my solution with you.

First of all, the materials I used were all in my possession; nothing needed to be bought. Actually, all the stuff had been lying about for years, unused, and you will see why in the next few lines.

When you search the internet for shutter release cables, you will come across many square or rectangular boxes, which not only look DIY, but also do not follow the latest ergonomic standards.
About 20 years ago, I bought an analogue joystick for my PC. I never really got into gaming, so the thing just went into a box and was moved about only when I changed cities.

The same can be said about a pair of headphones that once came with a first-generation digital camera featuring an MP3 player. What were people thinking?! An MP3 player in a camera?! The player was crap; the camera, for the time, was OK. The developers of this great design had a brain-wave: why not prevent people from using their decent headphones (having a 3.5mm TRS jack)? Hence, the developers added a 2.5mm TRS jack to that particular MP3-player-camera gadget, just what I needed for the Canon cameras.

And here we have it: 2 useless gadgets can be put together into 1 useful accessory.

the parts, taken apart, and the complex schematics diagram
I used a piece of perforated PCB to create a stable transition between the flimsy headphone cable and the very sturdy joystick cabling. A knot reduces the length of the latter.
It might be somewhat obvious: I used the blue cables for focus and the white ones for exposure.
In order to prevent the flimsy cable from being pulled off the PCB (not shown), another knot secures the external cable to one of the posts of the joystick.
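For reference, the common pinout of Canon's 2.5mm remote jack, as far as I recall it (please double-check before soldering): tip = shutter release, ring = focus, sleeve = ground. Shorting ring to sleeve focuses, shorting tip to sleeve fires the shutter.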

Although I took a picture of the 'not yet closed but functioning' device, I won't post it here, due to the potentially offensive shape of said object.

Concluding, a remote shutter release which feels like it was built to be held in a fist (the joystick actually was) is a great addition to my bag.
Needless to say, this device works great with either of the two cameras, the EOS 350D and the PowerShot G15.

Saturday, January 4, 2014

Legacy Series, General Thoughts on Imaging

General Thoughts on Imaging

CCDs are sensitive at visual, ultra-violet and infra-red wavelengths. Let's forget about UV for now! IR, quite interesting, has a very different focal plane with refractive optics than visible light (it's all about the Snellius stuff, i.e. dispersion and hence chromatic aberration), thereby opening two possibilities:

  1. Block the visual part completely (a waste of light, since IR is pretty well attenuated by our atmosphere)
  2. Use purely reflective optics (i.e. Newtonians).

I am still toying with the thought of the second option for the future. The problem here: the available telescopes of this kind with a reasonable size and a not-too-long focal length (about 70mm aperture and not more than about 500mm focal length) are usually of extremely cheap quality. To get decent images, the mount of the telescope (and its tracking) has to be rather stable; usually, though, a telescope on a quality mount is again much bigger (i.e. focal lengths of about 2m), making the field of view rather small, and the f-number usually shifts to „darker" (aka slower) values. All these things are supposed to be teleSCOPEs, usually optimised for visual applications.
Still, one option to go for: a „cheap" scope on a good mount. But remember, focal reducers are no option here; these would introduce „chromatic" aberrations again. Thus, a purely reflective telescope (i.e. a Newtonian design) with absolutely no refractive element, the greatest possible aperture and the shortest possible focal length would be the instrument to go for, preferably with a parabolic mirror (most of the cheapos have spherical primary mirrors).

Presently I am using two basic setups for imaging, both including an IR-cut filter. The first setup is the relatively cheap, computerised refracting telescope ETX-70 by Meade, with an aperture of 70mm and a focal length of 350mm (making it f/5, a rather fast setup). The ETX-70, meant to be a beginner-level scope, has quite inaccurate tracking; thus exposure times are limited to about half a minute (still recording stars fainter than 14th magnitude!), and the field of view (FOV) is less than about 1°. The second setup consists of a webcam and a photographic lens (have a look at the Bellow-Cam MK-II page for details). The setup is tracked by standard "cheap" hobby material, namely an EQ-2 mount (usually provided with very, very simple telescopes) and the appropriate right ascension (RA) motor. Compared to the focal length (mostly 50mm) of the system, this mount tracks well enough to expose for quite some time. Drawback of this system: the camera that fits best mechanically (QC4000pro) is not as good as the one used together with the ETX-70. Advantage though: even faster optics; the 50mm lens, for example, is f/1.8, and an IR-cut filter (not the best, but better than nothing) is present in the base of the CCD... Even faster lenses are available (e.g. on ebay) and, besides the webcam and the motor, everything in this setup was obtained via ebay for a real bargain total amount of money.
The alternative setup to the ETX-70 is an SK8035 (SkyWatcher 80mm/350mm f/4.4 achromatic refractor). The newest addition to the family is an SK15075 (SkyWatcher 150mm/750mm f/5 achromatic refractor), which performs really nicely; more starry nights needed!
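By the way, the FOV numbers quoted here (and in the Bellow-Cam posts) follow from a one-line formula; here is a tiny Python sketch, where the ~3.6 mm sensor width is my assumption for a typical 1/4-inch webcam chip:

    import math

    def fov_deg(sensor_mm, focal_mm):
        """Linear field of view for one sensor dimension and a given focal length."""
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

    print(fov_deg(3.6, 350))   # ETX-70: ~0.6 deg, i.e. "less than about 1 degree"
    print(fov_deg(3.6, 50))    # 50 mm SLR lens on the same chip: ~4 deg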

More on filters? Yes, there is still something to mention: I would recommend filters of any kind that cut out the Na (sodium) and Hg (mercury) lines. The visual impression might be disturbed, but the photographic result will be fine.


Legacy Series, Another Kind of HDR Photography

M42


Finally, some time and sky to work on long-exposure webcam astronomy... The remaining clouds allowed an open view of Orion. M42, certainly one of the more prominent objects, easy to see and easy to record, provides a perfect light source for experimenting, as it includes a very bright open cluster as well as a dark cloud and a bright nebula.


  • Instrument: ETX-70 in equatorial mode, #494 Autostar
  • Camera: ToUCam pro PCVC-740, Baader IR filter
  • Data acquisition: K3CCDTools, 10s exposures
  • Data registration: RegiStax 2, K3CCDTools
  • Frame stacking: RegiStax 2, K3CCDTools
  • Postprocessing: RegiStax 2 & IrfanView, iMerge

Some resulting images (based on the same recorded AVI file - have a look at a compressed WMV version) were obtained using different sets of postprocessing parameters (contrast, brightness, saturation, gamma curve, etc.) in order to respect the different aspects of this complex object. All image-tuning steps were carried out on all pixels equally (no area-selective tinkering).



The Results



Image resulting from registration and stacking, no postprocessing



The above "raw" image postprocessed using IrfanView



Light postprocessing using RegiStax


Massive fiddling with RegiStax and IrfanView postprocessing


Registration using K3CCDTools, linear scale


Registration using K3CCDTools, logarithmic scale


All above combined using iMerge


The Observatory

Have a look at my hyper-professional setup for catching the hole in the clouds (the greyish stuff above the roofs in the background). Even though the mechanical setup looks pretty solid, the RA motor of the fork occasionally produces some jitter. Consequently, I performed a manual selection of the frames to be registered, resulting in a loss of about 30% of the 10-sec frames.



Legacy Series, how to analyse long exposure AZ data




Analysing Data Recorded with Azimuthal Setups

It is quite obvious that an azimuthal mount causes field rotation when it is not used at the North or South Pole. For really long exposures (i.e. minutes to hours) this is utterly devastating for every image taken.
Here the abilities and power of CCDs come into play. With exposure times of a couple of seconds, field rotation does not play any role within a single frame. As long as the CCD amplifier ensures that the charge in a single pixel is high enough to result in a signal greater than the detection threshold, we can integrate over several different frames. Field rotation will, however, return if it is not properly taken care of.
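To put a rough number on "a couple of seconds", here is the usual back-of-the-envelope formula for the field rotation rate of an alt-az mount, sketched in Python; the latitude and pointing direction are example values, not a measurement:

    import math

    def field_rotation_deg_per_s(lat_deg, az_deg, alt_deg):
        """Approximate field rotation rate of an alt-azimuth mount."""
        k = 15.04 / 3600.0   # sidereal rate in degrees per second
        return (k * math.cos(math.radians(lat_deg)) * math.cos(math.radians(az_deg))
                / math.cos(math.radians(alt_deg)))

    # e.g. 52 deg north, object in the south-east at 40 deg altitude, 10 s sub-frame:
    print(abs(field_rotation_deg_per_s(52, 135, 40)) * 10)   # a few hundredths of a degree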

IRIS, however, is capable of compensating for rotation between individual frames during alignment. Let's see what is to be done, on a step-by-step basis, assuming the raw FITS series (after conversion) is called r#.fits (for red), g#.fits (for green) and b#.fits (for blue). Furthermore, assume that the data set contains 50 frames.
  • Get a “Display commands window” first.
  • If you are not using IRIS in the first place, you most likely have to convert your AVI file into a plurality of FITS files. When going for color, every frame will be present as a red, a green and a blue channel in separate FITS files. That means that you will have to perform all the following steps on the respective color files individually. Conversion is done by: [click: File -> AVI conversion... ].
  • Now you will have to select two suitable objects (stars) by marking them.
    For this you have to get one frame into the main window, preferably the first of the series: [type: load r1].
    Now mark two medium-bright stars which are not too close to each other: [click: Analysis -> Select Objects => with the funny mouse pointer, click on two medium-bright stars].
  • If everything went fine so far, you are now ready to register the frames: [type: rregister r rr 100 50].
    A new file set with the name rr will be created. Forget about the third parameter for the moment; you will learn to use it with some practice.
    With time it will become clearer which registration objects give better results and which combinations are not so good.
  • Stacking the registered frames, nothing easier than this: [type: add2 rr 50].
  • Now you certainly want to save the result: [type: save red].
  • Do the same with the other two colors...
  • To combine the three images, you could do all sorts of tricks with IRIS. A simple first glance could be: [type: trichro red green blue].
    Save your result with: [type: savebmp myimage].
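IRIS does all of this internally, but if you would like to see the geometry spelled out, here is a small Python/OpenCV sketch of what a two-star registration boils down to; this is my own reconstruction (frames and star coordinates are placeholders), the IRIS internals may well differ:

    import numpy as np
    import cv2

    def align_on_two_stars(frame, stars_ref, stars_cur):
        """Rotate/scale/shift 'frame' so that its two marked stars land where they are
        in the reference frame."""
        (x1r, y1r), (x2r, y2r) = stars_ref        # the two stars in the reference frame
        (x1c, y1c), (x2c, y2c) = stars_cur        # the same two stars in the current frame
        ang = np.arctan2(y2r - y1r, x2r - x1r) - np.arctan2(y2c - y1c, x2c - x1c)
        scale = np.hypot(x2r - x1r, y2r - y1r) / np.hypot(x2c - x1c, y2c - y1c)
        c, s = scale * np.cos(ang), scale * np.sin(ang)
        # similarity transform: rotate/scale, then shift star 1 onto its reference position
        M = np.array([[c, -s, x1r - (c * x1c - s * y1c)],
                      [s,  c, y1r - (s * x1c + c * y1c)]], dtype=np.float32)
        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))

    # stacking the aligned frames is then a plain average:
    # stack = np.mean([align_on_two_stars(f, ref_stars, s) for f, s in zip(frames, star_lists)], axis=0)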
Have a look at an example analysis done on M52 data. Both results came from the same data set, analysed in different ways.





M52, ETX-70, analysed with K3CCDTools, dark frame subtracted
M52, ETX-70, analysed as described above, no dark frame subtraction



Legacy Series, long-exposure ToUCam PCVC-740K

ToUCam PCVC-740K long-exposure modification


Background Story

Please have a look at the superb pages available all over the globe. Many thanks to Steve Chambers for sharing his findings with the internet community! Please have a look at Steve's page to understand that it is not allowed to use the information provided for purposes of profit.
Three pages to be mentioned as very good sources of rock-solid information:
(1)   http://www.pmdo.com/wintro.htm
(2)   http://members.bellatlantic.net/~vze29wzh/toucam740mod.htm
(3)   http://www.aludobson.de/CCD/umbau_einer_toucam_pro.htm


Please note that whatever you do to your camera is up to you. What I disclose on my pages is based on personal experience. Other setups (OS, software, soldering skills) might lead to other results; don't blame it on me.

Schematics

This is just a tiny little part of it. For convenience, I decided to use different NAND gates than shown in (2). The reason is that the ugly-style / dead-bug / Manhattan method appeared to me to be the appropriate design choice for an uncomplicated circuit like this one. In dead-bug style it is easiest to bend pins together... pin 1 and pin 2, pin 3 and pin 12. Half the work done without any effort.
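For orientation, if I read the 4011 pinout correctly, pins 1 and 2 are the inputs and pin 3 the output of the first gate, while pin 12 is an input of the fourth gate; so tying 1 to 2 turns the first gate into an inverter, and bending 3 onto 12 feeds its output straight into the next gate.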



Warning

If this is your first SMD-sized project, please do not start directly... get some practice first. A webcam is certainly ruined very, very quickly!


Photographs...








Overview of the battle-field side of the ToUCam's PCB, cuts already applied







Practice cut.... get rid of additional light produced by the green SMD-LED






These cuts are essential (see link (2) mentioned above). Cutting away too much of the ground plane does no harm, but... watch out for the other leads! I used a sphere-type diamond tool to mill away the leads to be disconnected.



Wires do not need to go around the PCB in every case. Here are two vias which can be used to solder [Pad8] to pin 1 of the 4011 (lower wire) and [Pad10] to the potential switch (upper wire). I used 0.2mm "magnet wire" (enameled copper wire)...
In these places, the wires are soldered to the PCB vias. BTW: in the end I did not add a switch to the cam... what for, anyway?





The leads (0.2mm magnet wire) connecting "Pin8" to pin 11 (4011) and [Pin10] to the 100k resistor. Both wires use "foreign" vias to get through to the other side of the PCB (as long as the enamel varnish is not scratched at the tunneled portion of the wire, no problem will be encountered).






Both sides of the story... the blue wire is the power supply for the 4011. (Sure, I could have searched for another place to get 5V from... but time is money, isn't it?)





And... on the other side of life... the dead-bug "mounted" (cyanoacrylate) 4011 with the 100k resistor. In the upper quarter of this image, the connection between [Pad8] and pins 1 and 2 of the 4011 is visible. The lower quarter shows the "tunneled" leads to pin 11 (4011) and to the 100k resistor.







Dead-bug 4011... the two blue wires are the link to the parallel-port connector.



Switch?

You will need the switch only if you intend to take still images with the camera in the common way. For video, the camera will still work fine with [Pin10] of the 16510 connected to the 100k resistor.
Personally, I omitted the switch...



Have a look at an observation using this camera to record the Great Orion Nebula (M42) in combination with an ETX-70.


Legacy Series, the Bellow Cam Mk 2 SC GT



Bellow-Cam Mk 2 GT

Being used to GOTO telescopes, I also wanted my wide-angle camera to be equipped with this feature. OK, it is not a big deal at all to handle right ascension and declination setting circles, pointing the scope/camera by α and δ looked up from catalog or star chart data, but goto is sooo convenient! After some looking around, I got myself a used MEADE DS-127 mount with an AutoStar #957 computer. Perfect!
Fitting the Bellow-Cam Mk 2 to the mount is so simple that I would not like to waste words on this issue. It took a piece of hardwood, two bolts and two nuts to get it all sorted.

Bellow-Cam Mk 2 GT, the wide-angle camera setup on the “autostared” DS-mount, in operation.

Analysis

It is quite obvious that this is an azimuthal mount, resulting in field rotation. For really long exposures (i.e. minutes to hours) this is utterly devastating for every image taken.
Here the abilities and power of CCDs come into play. With exposure times of a couple of seconds, field rotation does not play any role within a single frame. As long as the CCD amplifier ensures that the charge in a single pixel is high enough to result in a signal greater than the detection threshold, we can integrate over several different frames. IRIS offers a very good way of registering and stacking frames which are slightly rotated with respect to one another.
A very, very short introduction to what an analysis like this could look like can be found here.

Result

The image of M45 was taken on January 9th, 2005. Again, I was too lazy to subtract a dark frame. Note the image tilt; that's the sacrifice for compensating field rotation when using an azimuthal mount. Also to be noted in the image is amplifier glow in the upper left corner. The QuickCam appears not to be the best camera; even with the use of the non-raw patch, ear-like artifacts still occur.
It is amazing, I think, that the nebulosity of M45 can be recorded using an f=50mm SLR lens and a webcam from a light-polluted place like South Holland, on a night with light overcast and quite poor transparency...

Legacy Series, the Bellow-Cam Mk 2 SC



QC4000pro LX (SC2)

The QuickCam 4000 pro had to go through a long-exposure modification following the "SC2" approach as described by Martin Burri. There is not much beyond Martin's remarks to be mentioned. The modification was pretty straightforward. No pins are to be lifted in this way of modifying; cutting PCB traces does the deal. I soldered magnet wire (I have tons of this around for my radio projects) to the pins of the ICs. The individual leads are connected to a kind of "patch panel", the tiny bit of breadboard PCB glued to the camera opposite the USB and microphone connector. Blue wires make the final connection to the 4011 hidden below the whole mess. The red and the yellow wire (stereo phono cable) serve as the connection to the parallel port of a computer.
BTW, in contrast to the ToUCam, this camera holds an infrared cut filter inside the lens holder, so no additional IR filter is needed in a setup like the one shown here.










Camera front-side, no lens mounted.
For reasons of experimentation, the camera is built in tilted...


Additionally, I spent money and obtained an RA motor for the EQ2. Decent tracking is a necessity for the long exposure times the camera was modified for.


LX-astro images

You are invited to compare with the images recorded using Bellow-Cam Mk 1 and Bellow-Cam Mk 2. Messier 44 gives some impression of the benefits of longer exposures. Exposure time was chosen to be 10s/frame. Conditions on March 13th, 2004, when I was acquiring data for the following image, were fair (for this site); with the unaided eye, I could see Orion's sword.


M44, 50mm f/1.8

Legacy Series, the Bellow-Cam Mk 2

Bellow-Cam Mk 2

Another Camera

The Vesta fits quite OK, but not really well... Philips, for whatever reason, appears to design funky shapes for webcams... Logitech obviously shows more heart for tinkerers: they designed a ball, long ago, and stuck to that design. Certainly a ball fits better into the Bellow-Cam... Have another look... you might recognize a QC4000pro in unmodified state.
The webcam is held by self-adhesive window-frame insulation rubber; the circular openings of the bellows are perfect for fitting the QuickCam ball in...
At this stage I chose to mount the camera on an EQ2 mount. The RA and Dec axes allow easy aiming when the coordinates of the object of interest are known; furthermore, the mount enables manual tracking.


side view with 50mm lens
resulting field of view about 4°

side view, 135mm lens with "dew cap"
resulting field of view about 1°30'

front view with 50mm lens

back view

front mount detail, no lens

rear mount detail

the "victim"

the "supporter"


M45 (Pleiades) recorded with the above setup