If you are ever able to make it to the KSC Visitor Complex at Cape Canaveral, they have mock-ups of both the Gemini and the earlier Mercury capsules you can get in as a size reference. They are both incredibly tight. It's amazing that during Gemini 7 they spent 14 days crammed in the capsule, testing systems alongside normal human activity (eating, sleeping, bodily functions), all while being seconds from death at any time if things went wrong. These early astronauts were men of a different caliber.
The Todd Douglas Miller Apollo 11 documentary (which is phenomenal; it's entirely archival footage, but assembled better than most blockbusters) has these bits during launch/landing where they overlay the three astronauts' heart rates on the footage. My big takeaway from that was that they were incredibly unflappable, almost to an absurd degree.
No amount of computational smartphone photography can match, in my eyes, the clarity and contrast and intensity of whatever analogue medium these were captured on.
This looks gorgeous. I'm extremely tempted to splurge on this, and the Apollo, books...
Medium format film (70mm, roughly 6x6 frames) in modified Hasselblad cameras. I personally think we're barely starting to match the quality of medium format film with modern medium format sensors.
Depends on how you define quality. While medium and large format photography are extremely high resolution, that's not the only factor. Space-age lenses resolved significantly less than the film. Modern mirrorless lenses are starting to come close to being able to out-resolve film but still aren't there, meaning you get more functional resolution out of modern digital. Digital also beats the pants off film for dynamic range and low light. That said, the noise (grain) and dynamic-range falloff in film are more pleasing than digital to most eyes. So it's not all about technical specs.
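The interplay between lens and medium resolution can be sketched with a common rule of thumb (my numbers and the approximation are illustrative, not from this thread): each stage limits the system, and the combined figure is lower than either alone.

```python
# Rule-of-thumb combined resolution: 1/R_sys^2 = 1/R_lens^2 + 1/R_medium^2.
# This is a standard approximation, not a measured figure for any camera here.
def system_resolution(r_lens_lpmm: float, r_medium_lpmm: float) -> float:
    """Combined resolution in line pairs/mm given lens and film/sensor limits."""
    return (r_lens_lpmm ** -2 + r_medium_lpmm ** -2) ** -0.5

# Illustrative (assumed) numbers: a 1960s lens at 50 lp/mm on film resolving
# 120 lp/mm, vs. a modern lens at 100 lp/mm on a sensor resolving 90 lp/mm.
old_system = system_resolution(50, 120)   # lens is the bottleneck
new_system = system_resolution(100, 90)   # better balanced, higher overall
```

Under this approximation the old lens throws away most of the film's resolution, which is the "functional resolution" point above.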
> Digital also beats the pants off film for dynamic range and low light.
While this is true now, it took a surprisingly long time to get there. The dynamic range of professional medium format negative film is still respectable. Perhaps not so much in low light, but it's very immune to overexposure.

Also, you can buy a cheap medium-format camera in good condition and experience that "huge sensor" effect, but unfortunately there are no inexpensive 6x6 digital cameras.
> Space age lenses were significantly lower resolution than the film.
Can you say a little more about this? Modern lenses boast about seven elements or aspherics, but does that actually matter in prime lenses? You can get an achromat with two lenses and an apochromat with three. There have definitely been some advances in glass since the space program, like fluorite versus BK7, but I'm wholly in the dark on the nuances.
I find modern primes much sharper than their older counterparts not because of the elements or the optical design, but from the glass directly.
Sony's "run of the mill" 28mm F2 can take stunning pictures, for example. The 55mm F1.8 ZA is still from another world, but that thing is made to be sharp from the get-go.
The same thing is happening in corrective glasses too. My eye prescription is not changing, but the lenses I get are much higher resolution than the set they replace, every time, to the point that I forget I'm wearing corrective glasses.
The lenses also have to be better to compensate for the smaller sensors. All lens defects get more "magnified" the smaller the sensor is. So a straight comparison isn't fair unless the sensor is the same size as the film was.
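The "magnification" point can be put in numbers (my illustrative figures, not from the comment): to reach the same detail in the final image, a smaller sensor needs the lens to resolve proportionally more line pairs per millimetre, because its image is enlarged more.

```python
# Sketch with assumed numbers: lens resolution needed to hit a target
# measured in line widths per picture height (2 line widths = 1 line pair).
def required_lpmm(target_lw_ph: float, sensor_height_mm: float) -> float:
    """lp/mm the lens must resolve for a given target and sensor height."""
    return target_lw_ph / 2 / sensor_height_mm

full_frame = required_lpmm(3000, 24.0)   # 24 mm tall sensor -> 62.5 lp/mm
half_size  = required_lpmm(3000, 12.0)   # half the height -> 125 lp/mm
```

Halving the sensor height doubles the lp/mm the lens must deliver, which is why comparisons are only fair at equal format size.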
Computational photography is about to get really good when it can combine hundreds or thousands of frames into one. 1000 frames effectively combined gather 1000x the light, roughly equivalent to a lens and sensor of 1000x the surface area, i.e. exceeding a single frame from a DSLR.
Current methods use optical flow and gyroscopes to align images, but I imagine future methods will use AI to understand movement that doesn't work well with optical flow (e.g. where a specular reflection 'moves' on a wine glass).
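The benefit of stacking can be shown with a toy simulation (my sketch, not any phone's actual pipeline): averaging N aligned noisy frames cuts the noise by roughly sqrt(N).

```python
import numpy as np

# Simulate 1000 aligned frames of a flat "true" scene with Gaussian noise.
rng = np.random.default_rng(0)
signal = 100.0                                             # true pixel value
frames = signal + rng.normal(0, 10, size=(1000, 64, 64))   # noise sigma = 10

single_noise  = frames[0].std()       # one frame: noise ~10
stacked       = frames.mean(axis=0)   # combine all 1000 frames
stacked_noise = stacked.std()         # ~10 / sqrt(1000) ~= 0.32
```

The stacked result keeps the signal but drops the noise about 30x, which is where combined phone shots get their apparent clarity.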
I like this, it's really cool - especially the stack images from 16 mm film. The first image (first selfie in space on Gemini 12) is very artistic but I like the original better in that example - just look at the specular highlight before and after.
I understood it less as an attempt of improvement and more as an alternate version of the same shot.
Where you see more of Aldrin and other smaller bits that were less visible in the original and rightfully iconic shot.
The headline before/after image is astonishing, almost in-credible. I can't see how the left image was restored into the right. It looks like there is substantial new detail on the right that I can't see anywhere on the left.
I can only assume that the image on the left is a low resolution scan produced for this web article, and that there must be a much better scan somewhere else.
It must be. The amount of detail is incredible, and even trying to extract data from the before picture, it doesn't come close to what you see in the newly processed image.
At the very least, they rescanned the original film with a newer type of scanning process/device into a higher-resolution, higher-bit-depth "digital negative". You cannot replicate that from a low-quality JPEG.
But it is connected to the Internet protocol of the same name, which uses 1965 as its well-known port number (although IANA has not heard about this yet).
The name comes from Latin "gemini" meaning "twins," referring to the mythological Dioscuri, Castor and Pollux, sons of Leda. In mythology, Pollux was immortal, Castor mortal; their story is connected with themes of brotherhood and sacrifice.
Project Gemini was NASA's second human spaceflight program (1961-1966), preceding Apollo. It developed spaceflight techniques such as orbital rendezvous and docking, essential for the Moon landing.
Gemini is also a lightweight internet protocol and associated ecosystem (the Gemini Protocol), designed as a middle ground between Gopher and the modern web (HTTP/HTTPS), emphasizing simplicity and privacy.
It is also the name of Google's multimodal AI model, successor to Bard (announced 2023).
https://airandspace.si.edu/collection-objects/camera-hasselb...
So what improved is probably our digitization tools, and with some post-processing you can reveal a lot of detail.
My attempt: https://i.imgur.com/QZDDEB5.png
Imagine:
1. Film -> Method 1 -> Photo #1
2. Film -> Method 2 -> Photo #2
Instead you tried:
3. Photo #1 -> Method 3 -> Photo #2
Which instead gives you a badly edited Photo #1. You don't have the source code, so to speak.
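The "no source code" point is just information loss, which a toy example can show (my illustration, not the article's method): once detail is discarded, no edit of Photo #1 can bring it back; only going back to the film can.

```python
import numpy as np

# Stand-in "negative" full of fine detail (random values as a proxy).
rng = np.random.default_rng(1)
film = rng.integers(0, 256, size=(64, 64)).astype(float)

# "Method 1": a lossy 4x downsample (each 4x4 block collapses to its mean).
photo1 = film.reshape(16, 4, 16, 4).mean(axis=(1, 3))

# "Method 3": the best naive re-edit of Photo #1, upscaling it back to size.
upscaled = np.repeat(np.repeat(photo1, 4, axis=0), 4, axis=1)

# The residual is large: the discarded detail is simply gone.
err = np.abs(upscaled - film).mean()
```

A rescan of the film ("Method 2") starts from the full-detail source instead, which is why it can look so much better than anything derived from the small JPEG.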
1965 was two weeks ago?