How-To Geek

HTG Explains: Everything You Know About Resolution Is Probably Wrong


“Resolution” is a term people often throw around—sometimes incorrectly—when talking about images. This concept is not as black and white as “the number of pixels in an image.” Keep reading to find out what you don’t know.

As with most things, when you dissect a popular term like “resolution” to an academic (or geeky) level, you find that it’s not as simple as you might have been led to believe. Today we’re going to see just how far the concept of “resolution” goes, briefly talk about the implications of the term, and cover a little bit about what higher resolution means in graphics, printing, and photography.

So, Duh, Images Are Made of Pixels, Right?


Here’s the way you’ve probably had resolution explained to you: images are an array of pixels in rows and columns, images have a pre-defined number of pixels, and bigger images with more pixels have better resolution… right? That’s why you’re so tempted by that 16-megapixel digital camera, because lots of pixels is the same as high resolution, right? Well, not exactly, because resolution is a little bit murkier than that. When you talk about an image like it’s only a bucket of pixels, you ignore all the other things that go into making an image better in the first place. But, without a doubt, one part of what makes an image “high resolution” is having a lot of pixels to create a recognizable image.

It can be convenient (but sometimes wrong) to call images with lots of megapixels “high resolution.” Because resolution goes beyond the number of pixels in an image, it would be more accurate to call it an image with high pixel resolution, or high pixel density. Pixel density is measured in pixels per inch (PPI), or sometimes dots per inch (DPI). Because pixel density is a measure of dots relative to an inch, one inch can have ten pixels in it or a million. And the images with higher pixel density will be able to resolve detail better—at least to a point.
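The arithmetic behind pixel density is simple division. Here’s a quick sketch in Python (the `ppi` helper is ours, purely for illustration) showing how the same pixel count yields very different densities depending on physical size:

```python
# Pixel density is pixel count divided by physical size along one dimension.
def ppi(pixels: int, inches: float) -> float:
    """Pixels per inch along one dimension of an image."""
    return pixels / inches

# A 3000-pixel-wide image printed 10 inches wide:
print(ppi(3000, 10))  # 300.0 -- a dense, detailed print
# The very same pixels stretched across 30 inches:
print(ppi(3000, 30))  # 100.0 -- same data, one third the density
```

Note that nothing about the image data changes between the two calls; only the physical size it’s spread across does.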


The somewhat misguided idea of “high megapixel = high resolution” is a carryover from the days when digital images simply couldn’t display enough image detail because there weren’t enough of the little building blocks to make up a decent image. So as digital displays started to have more picture elements (also known as pixels), these images were able to resolve more detail and give a clearer picture of what was going on. At a certain point, though, adding millions and millions more picture elements stops being helpful, because it runs into the upper limits of the other ways that detail in an image is resolved. Intrigued? Let’s take a look.

Optics, Details, and Resolving Image Data


Another important part of the resolution of an image relates directly to the way it is captured. Some device has to parse and record image data from a source; this is the way most kinds of images are created. It applies to most digital imaging devices (digital SLR cameras, scanners, webcams, etc.) as well as analog methods of imaging (like film-based cameras). Without getting into too much technical gobbledygook about how cameras work, we can talk about something called “optical resolution.”

Simply said, resolution, in regard to any kind of imaging, means “ability to resolve detail.” Here’s a hypothetical situation: you buy a fancy-pants, super high-megapixel camera, but have trouble taking sharp pictures because the lens is terrible. You just can’t focus it, and it takes blurry shots that lack detail. Can you call your image high resolution? You might be tempted to, but you can’t. This is what optical resolution means. Lenses or other means of gathering optical data have upper limits to the amount of detail they can capture. A lens can only capture so much light based on its form factor and style (a wide-angle lens versus a telephoto, for example), which allows in more or less light.


Light also has a tendency to diffract and/or create distortions of light waves called aberrations. Both create distortions of image details by keeping light from focusing accurately to create sharp pictures. The best lenses are formed to limit diffraction and therefore provide a higher upper limit of detail, whether the target image file has the megapixel density to record the detail or not. Chromatic aberration, illustrated above, occurs when different wavelengths of light (colors) travel at different speeds through a lens and converge on different points. This means that colors are distorted, detail is possibly lost, and images are recorded inaccurately based on these upper limits of optical resolution.


Digital photosensors also have upper limits of ability, although it’s tempting to assume that this only has to do with megapixels and pixel density. In reality, this is another murky topic, full of complex ideas worthy of an article of its own. It is important to keep in mind that there are strange trade-offs for resolving detail with higher megapixel sensors, so we’ll go a little further in depth for a moment. Here’s another hypothetical situation—you trade in your older high-megapixel camera for a brand new one with twice as many megapixels. Unfortunately, you buy one with the same size sensor as your last camera and run into trouble when shooting in low-light environments. You lose lots of detail in those environments and have to shoot at very high ISO settings, making your images grainy and ugly. The trade-off is this—your sensor has photosites, tiny receptors that capture light. When you pack more and more photosites onto a sensor to create a higher megapixel count, you lose the beefier, bigger photosites capable of capturing more photons, which help render more detail in those low-light environments.
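To see why cramming in more photosites shrinks each one, here’s a rough back-of-the-envelope sketch (the function name and sensor dimensions are hypothetical, and real sensors lose some area to wiring and gaps between photosites):

```python
import math

def photosite_pitch_microns(sensor_w_mm: float, sensor_h_mm: float,
                            megapixels: float) -> float:
    """Approximate center-to-center photosite spacing, assuming a square grid."""
    area_per_pixel_mm2 = (sensor_w_mm * sensor_h_mm) / (megapixels * 1_000_000)
    return math.sqrt(area_per_pixel_mm2) * 1000  # mm -> microns

# The same hypothetical APS-C-sized sensor (~23.6 x 15.7 mm), two pixel counts:
print(round(photosite_pitch_microns(23.6, 15.7, 12), 2))  # ~5.56 microns
print(round(photosite_pitch_microns(23.6, 15.7, 24), 2))  # ~3.93 microns
```

Doubling the megapixel count on the same sensor halves each photosite’s area, which is why the hypothetical upgrade above struggles in low light.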

Because of this reliance on limited light-recording media and limited light-gathering optics, resolution of detail can be achieved through other means. This photo is an image by Ansel Adams, renowned for his achievements in creating high dynamic range images using dodging and burning techniques and ordinary photo papers and films. Adams was a genius at taking limited media and using it to resolve the maximum amount of detail possible, effectively sidestepping many of the limitations we talked about above. This method, as well as tone-mapping, is a way to increase the resolution of an image by bringing out details that might otherwise not be seen.

Resolving Detail and Improving Imaging and Printing

Because “resolution” is such a broad-reaching term, it also has impacts in the printing industry. You’re probably aware that advances in the past several years have made televisions and monitors higher definition (or at least made higher def monitors and televisions more commercially viable). Similar imaging technology revolutions have been improving the quality of images in print—and yes, this too is “resolution.”

When we’re not talking about your office inkjet printer, we’re usually talking about processes that create halftones, linetones, and solid shapes in some kind of intermediary material used for transferring ink or toner to some kind of paper or substrate. Or, more simply put, “shapes on a thing that puts ink on another thing.” The image above was most likely printed with some kind of offset lithography process, as were most of the color images in books and magazines in your home. Images are reduced to rows of dots, put onto a few different printing surfaces with a few different inks, and recombined to create printed images.


The printing surfaces are usually imaged with some kind of photosensitive material, which has a resolution of its own. One of the reasons that print quality has improved so drastically over the last decade or so is the increased resolution of improved imaging techniques. Modern offset presses have increased resolution of detail because they utilize precise computer-controlled laser imaging systems, similar to the ones in your office-variety laser printer. (There are other methods as well, but laser arguably gives the best image quality.) Those lasers can create smaller, more accurate, more stable dots and shapes, which create better, richer, more seamless, higher-resolution prints on printing surfaces capable of resolving more detail. Take a moment to compare prints from as recently as the early ’90s to modern ones—the leap in resolution and print quality is quite staggering.

Don’t Confuse Monitors and Images


It can be quite easy to lump the resolution of images in with the resolution of your monitor. Don’t be tempted, just because you look at images on your monitor and both are associated with the word “pixel.” It might be confusing, but pixels in images have variable density (PPI or DPI, meaning the number of pixels per inch can change), while monitors have a fixed number of physically wired, computer-controlled points of color that are used to display image data when your computer asks them to. Really, one kind of pixel is not related to the other, but because both can be called “picture elements,” they both get called “pixels.” Said simply, the pixels in images are a way of recording image data, while the pixels in monitors are a way to display that data.

What does this mean? Generally speaking, when you’re talking about the resolution of monitors, you’re talking about a far more clear-cut scenario than with image resolution. While there are other technologies (none of which we’ll discuss today) that can improve image quality, simply put, more pixels on a display improve its ability to resolve detail accurately.

In the end, you can think of the images you create as having an ultimate goal—the medium you’re going to use them on. Images with extremely high pixel density and pixel resolution (high-megapixel images captured from fancy digital cameras, for instance) are appropriate for a very pixel-dense (or “printing dot”-dense) printing medium, like an inkjet or an offset press, because there’s a lot of detail for the high-resolution printer to resolve. But images intended for the web can have much lower pixel density, because monitors have roughly 72 ppi pixel density and almost all of them top out around 100 ppi. Ergo, only so much “resolution” can be viewed on screen, yet all of the detail that is resolved can be included in the actual image file.
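The sizing math here is the same division as before, just run in reverse: pixels divided by output density gives physical size. A quick sketch (the function name is ours, for illustration):

```python
def print_size_inches(width_px: int, height_px: int, dpi: float):
    """Physical output dimensions for a given pixel grid and output density."""
    return width_px / dpi, height_px / dpi

# The same 3000 x 2400 pixel file at two output densities:
print(print_size_inches(3000, 2400, 300))  # (10.0, 8.0) -- print-quality output
print(print_size_inches(3000, 2400, 100))  # (30.0, 24.0) -- roughly on-screen size
```

The pixel data is identical in both cases; the density just determines how large the same detail ends up on the output medium.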

The simple point to take away from this is that “resolution” is not as simple as using files with lots and lots of pixels; it is usually a function of resolving image detail. Keeping that definition in mind, simply remember that there are many aspects to creating a high resolution image, with pixel resolution being only one of them. Thoughts or questions about today’s article? Let us know about them in the comments, or simply send your questions to

Image Credits: Desert Girl by bhagathkumar Bhagavathi, Creative Commons. Lego Pixel art by Emmanuel Digiaro, Creative Commons. Lego Bricks by Benjamin Esham, Creative Commons. D7000/D5000 B&W by Cary and Kacey Jordan, Creative Commons. Chromatic Aberration diagrams by Bob Mellish and DrBob, GNU License via Wikipedia. Sensor Klear Loupe by Micheal Toyama, Creative Commons. Ansel Adams image in public domain. Offset by Thomas Roth, Creative Commons. RGB LED by Tyler Nienhouse, Creative Commons.

Eric Z Goodnight is an Illustrator and Graphics Geek who hopes to make Photoshop more accessible to How-To Geek readers. When he’s not headbanging to heavy metal or geeking out over manga, he’s often off screen printing T-Shirts.

  • Published 03/1/12

Comments (23)

  1. Techy1984

    Very informative. Thank you for writing this article.
    The only question I have then: are there any tips for people who want to create images originating from a computer program, such as a rendering program (the type that generates a photo-realistic image from a 3D model), and then send them to a printer?

    Thank you.

  2. LadyFitzgerald

    Thanks for pointing out the “more pixels in a camera, the better” myth. I know of at least one high end point & shoot camera line that increased the pixel count without increasing the sensor size and got reviews saying picture quality decreased.

  3. Andacar

    Thanks very much for this article. As a computer graphics teacher I constantly hear all the confusion about this from my students. I especially get problems (and arguments) when students try to go from the print world to the digital display world, where resolution is discussed in entirely different ways. This, along with comments about color models, additive vs. subtractive, etc., rarely seem to get brought up in college graphics courses, but they are absolutely essential for anyone trying to do this kind of work professionally.

  4. Bilal

    Nicely written

  5. David Hutchinson

    So what is the resolution of my 1080p flatscreen? Would my Carl Zeiss-lensed camera at 6 MP be as good as a 14 MP camera? Or are you saying that the sum of the whole, good camera components and good lenses, makes for the better picture? Is all this megapixality a pissing match?

  6. Eric Z Goodnight

    @Techy1984: Think of your ultimate goal when considering pixel resolution and density. If you’re outputting to print or something similar that can benefit from the high resolution of tons and tons of pixels, that’s your answer. If you’re just making web graphics, you probably want to size it to 72 or 100 ppi. Hope that helps.

    @David Hutchinson: Sum of the whole is definitely important as each part helps to resolve detail. The whole “this camera has more megapixels” is marketing–consumers understand a camera with higher numbers HAS to be a better camera. There’s truth to it, but you can take amazing pictures with a lower megapixel camera, provided you have the right optics.

  7. steve

    interesting considering Nokia have just announced a phone capable of 41MP camera shots….

  8. steve

    My old Sony Walkman had a 5MP camera, and my dad’s Cybershot of the same age had only 3MP, and I always thought his picture quality was far better, go figure.

  9. steve

    walkman phone*

  10. Dana Ross

    Great article. Thank you.
    The biggest pixel problem I have with my clients comes up when they ask for an image at 300dpi when I’ve given them a full-size 8 megapixel image. Their software is showing that the image is at 72dpi and they think it’s a low-res image when in fact it is a huge file.

  11. Jesse

    @Dana Ross

    I think you’re misunderstanding the article.

    An 8 megapixel image is roughly 3200 pixels wide by 2400 pixels tall when created, with roughly 72 pixels in a given inch. What your clients are asking for 300dpi images, they are asking for that 8MP file that has had the pixel *density* essentially tripled (Photoshop does this easily).

    What this does (in a nutshell) is decrease the image’s size by about a third while cramming the pixels closer together, which effectively increases the resolution.

    Did that make sense? If I was unclear (not running on much sleep), this page might explain better:

    Hope that helps!

  12. HanoverFiste

    Great article, this is a saver. Funny thing is, I read this immediately after watching the “The Quirky Science of Leap Years Explained [Video].” I could not stop myself from reading the entire article in that guys voice or his speed.

  13. Steve Lawson

    @Jesse: An 8 megapixel image can have many dimension ratios. 3200 x 2400 is merely the 4:3 aspect ratio (or 1.33 aspect ratio). The pixels could also be arranged as 4800 x 1600 (or even 1600 x 4800) for an aspect ratio of 3:1, or 7680 x 1000 [7.68 aspect ratio], etc. The term “8 megapixel” merely means there are roughly 8 million pixels in the image. How those pixels are arranged is arbitrary. They will, however, be arranged by the image creation device (such as a digital camera or video camera), and once arranged that way, the image won’t be recognizable if arranged any other way, BUT the image still CAN be arranged some other way — in fact, it wouldn’t even have to be as a rectangle.

    @Dana Ross: What is also arbitrary is the DPI (or PPI, depending on whether you’re talking about print media or display media). An 8 megapixel image can have a DPI of 300 or a DPI of 75 (or any DPI). What makes the difference is what physical size that image will wind up being once it is ‘resolved’ by some image device (such as a printer or monitor). If a 3200 x 2400 image is printed at 300 DPI it will have the physical dimensions of 10.7″ x 8″. If that same image is printed at, say, 50 DPI, it will be 64″ x 48″. Basically, DPI sets the distance between pixels (and the size, if you consider that the pixel will be rendered large enough to fill the gap made by those distances).

    If that 8 megapixel image has a 3 to 1 aspect ratio, then at 300 DPI it will render, roughly, as a 16″ x 5.3″ picture.

    “Now,” you might ask, “why is it when I load a digital photo into PhotoShop it already has a DPI?” That’s because the image file is tagged with a DPI value (either by the camera, or by some other PhotoShop session, or by something else that either created or altered that image). But that can be changed (in the Photoshop Image Size dialog box). If you change that, and then save it as a file and then reload it, it will then have whatever DPI setting you gave it before you saved it.

    All the DPI setting does is tell the printer what size to print the image at. BTW: on a monitor, it’s the PPI of the monitor that will determine what size the image renders at–in HTML, the code that is used to make web pages, you indicate the width and height of the image (in pixels) NOT the DPI.

    So, when your client complains that they received a 72DPI image from you when they wanted a 300DPI image, just say, “Oops, forgot to adjust that before I delivered that image to you…here, let me fix that.” and then load it into PhotoShop and change its “size” to 300DPI. Or, simply print out this post and hand it to them ;)

  14. Showing my age

    @ Jesse,

    You wrote: “72 pixels in a given inch. What your clients are asking for 300dpi images, they are asking for that 8MP file that has had the pixel *density* essentially tripled (Photoshop does this easily).

    What this does (in a nutshell) is decrease the image’s size by about a third while cramming the pixels closer together, which effectively increases the resolution.

    Did that make sense? If I was unclear (not running on much sleep)”

    Could be the lack of sleep talking, but you got your figures off by quite a bit. As the pixel density is increased, image size decreases in the same proportion. So, if you change a row of dots from 72 dpi to 300 dpi, you get a row of dots that becomes 24% of the original length. That is so close to a quarter of the original, and easier to work with, that I’m going to use “one quarter” below.

    As the row of dots is shrunk to one quarter of its original length, the length of that row is actually decreased by fully three quarters. But we are not yet finished.

    Since we are not talking about one row of pixels, but an array in a grid forming a picture, although resolution could be changed in one direction only, with pictures you really are asking to increase the density in proportion, and thus the picture size, in both directions.

    That means one quarter horizontally as well as one quarter vertically. Since we have shrunk the printed image size to a quarter in both directions, that means that we have actually shrunk it to one sixteenth!

    That means the image size is not decreased by a third, but by fifteen sixteenths, or 93.75%.

    If you didn’t quite follow how that result was achieved, imagine, or even draw out, a rectangle. Divide it into quarters both horizontally and vertically, and you will have sixteen rectangles. If you shrink the width to a quarter it will leave a rectangle that is only a quarter the width of the original, but still as tall as it was before.

    But we also want to shrink it vertically, or it would be out of proportion, and so that tall rectangle will shrink to a quarter of its height, and so the image now fits into just one of the original 16 rectangles.

    A remarkable difference.
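    The arithmetic above can be verified in a few lines of Python, using the exact 72/300 ratio alongside the rounded one-quarter figure:

```python
# Retagging a 72 dpi image as 300 dpi shrinks its printed length
# by the density ratio in BOTH directions.
ratio = 72 / 300            # 0.24 -- close to the "one quarter" used above
area_fraction = ratio ** 2  # fraction of the original printed area that remains

print(round(area_fraction, 4))  # 0.0576 -- a bit under one sixteenth
print(1 - 0.25 ** 2)            # 0.9375 -- the "fifteen sixteenths" reduction
```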

  15. Anthony

    What really bothers me is when my friend claims that he has to print out a camera photo on his inkjet printer to see how sharp it is — because “a print has higher resolution than a monitor”! And your article adds to his confused and erroneous view, by saying monitors “top out” at 72 – 100 ppi, whereas prints are far higher. Sheesh! If monitors were not capable of “resolving” all, and I mean ALL, the detail in a digital image (of whatever resolution) on screen, why would any professional photographer bother using Photoshop, in conjunction with their color-calibrated monitor to view, edit and manipulate their photos?

    Resolution for the lay person should really only be discussed regarding the capture of images (camera sensors, scanners, lenses, etc.), but not for monitors, which are generally all capable of displaying ALL the colors and detail in an image. In fact, because they use light — namely, RGB — they have the greatest, widest color gamut compared to any laser or inkjet printer — using CMYK. (Yes, obviously, the more pixels in a monitor, the better, and we can say one monitor is better than another. That’s not the point. It’s still better than the best print.)

    Monitor resolution is simply a whole ‘nother thing, it’s apples and oranges, imho, and shouldn’t be discussed in this article, which is really about image capture. The paragraph that ends with “Ergo, only so much ‘resolution’ can be viewed on screen…” is either very wrong or very misleading, suggesting that high resolution images are best viewed with an inkjet or laser print, and low resolution images intended for the web are best viewed on the lowly 72ppi monitor. All images are BEST viewed on a monitor, period.

  16. Ric

    I have a question. In my opinion, high megapixel is very useful for cameras that do not have optical zoom. Which includes all camera-phones. Consider Nokia 808 – I can digitally zoom in (as a replacement for optical zoom) and still get higher detail. What do you think?

  17. igeek

    Now I better understand why the amoled display from samsung with high dpi is displaying a better image than the same resolution display with lower dpi. So dpi on smaller screens like cellphones makes it easier to read the text.

  18. Josh

    @Anthony: your friend is right. If you have a 300ppi image and you print it, your printer is going to use its technology to display 300x300 pixels in one REAL, measured-by-a-ruler inch.

    A monitor can’t do that. A monitor, depending on its ppi, is never going to display 300ppi. Take a 17″ monitor at 1280×768 resolution, its sides are going to be roughly 10″ each (for the sake of this post). So, per inch, it’s displaying 128×77 pixels in one REAL, measured-by-a-ruler inch.

    Where you’re getting confused is that resolution is NOT a relative value, it’s an ABSOLUTE value. It uses the “inch”, which is as absolute as you can get. TV’s try to trick you by saying “1080p”, when in reality, though the screen resolution is 1080xWhatever, measure the true resolution with a ruler – on a 50″ TV it’s probably about 40x40ppi.

    Now, an iPhone is actually really close to matching print resolution. That’s why it’s such a big deal – they cram “960-by-640-pixel resolution at 326 ppi” (I got that off googling “iphone resolution”).

  19. Art Kennedy

    @Jesse, thanks for really getting down to “rocks and powder”. “dpi” as used by Photoshop has always seemed to me to be unnecessarily confusing. It has nothing to do with the image file except as a tag to tell the printer what size to make the print. Jeeze, if you’re not going out to a remote print server, why not just let the user choose a paper size (on a graphic display, don’t we all have one?) and interactively size and position the image? I mean I enjoy math but why use it to confuse ourselves when easier methods are sitting there smiling at us?

  20. pete

    To put it simple DPI (or PPI) is a suggestion which says how dense a device (printer, monitor) should put the pixels from the image to render it. If a picture has 1000 pixels in a row, the DPI data tells how many pixels of those 1000 to put next to each other in an inch.

    More often than not devices aren’t listening these suggestions coming from the picture but they ask directly the user: how many pixels should I put in an inch, or how many inch wide should this 1000 pixels be on the paper?

    The pixel count of a picture is given, you tell the device either the DPI or the physical size and the device computes the other (physical size in inches = pixel count / DPI).

  21. Steve Lawson

    The thing about comparing resolution between a monitor and a printer is this: There is a huge difference in how the pixels are rendered between a monitor and a printer, thus making it difficult to make a true comparison.

    A pixel on a monitor is made up of various numbers of dot triads (or “slots” — at least that’s what they were called on the old CRT monitors — I’ll just call them “spots”). These triads are made up of the three primary colors of light. Those colors are Red, Green and Blue. And each of those colors can have (at least on most monitors today) 256 different shades, including black (or the absence of light). These color spots are so close together that your eye blends them together into one color (red and green make yellow or orange, blue and green make cyan, etc.) Because each spot has a range of 256 shades, when the three are combined together, they can make over 16 million different “colors” (256x256x256=16777216). Now, in this context, “color” includes both the brightness and the hue (thus black is a “color” as well as white and all the 254 other shades of gray. Also, pink and red are different colors in this context).

    A printer does this slightly differently. A printer also combines primary colors to form the color that is perceived by the eye, but because the primary colors have only one intensity, a different technique is used to achieve a range of intensities. For instance, a printer usually has black ink, and sprays this ink through a nozzle (or transfers powder from a drum or hammers a spot onto the paper from a ribbon — it all depends on the technology, but the end result is a spot on the paper). That spot can only be black. The only way to achieve the appearance of a shade of gray is to apply various numbers of spots to a grid of “spot positions”. A few spots in the space of the grid makes a lighter shade of gray and more spots in the grid makes a darker shade of gray, until the whole grid is filled with spots, which is perceived as black.

    It’s like if the windows in a highrise were used to make different patterns. If those windows have white blinds that can be opened or closed, then when all the blinds are closed, the windows would be all white, and the perception would be of a brighter building. If all of the blinds are opened, then all of the windows would be dark, giving the building an overall darker appearance. Different levels of brightness could be achieved by closing only some of the blinds.

    In terms of this analogy, the window represents a single spot of black ink when the blind is opened, and the absence of a spot of ink is when the blind is closed. Now, the thing to get is that the whole building is the “dot” that is referred to in DPI, and the window is the smallest spot the printer can render. BUT, because it has to use many spots to get different shades in the dot, the actual resolution of the printer is determined by the dimensions of the grid of spots, not the dimensions of the spot. So, for a printer to produce the same number of intensities as a monitor, it has to use up to 256 spots, arranged, perhaps, in an 8x8 grid (it varies with different printers and technologies). PLUS, it has to do this with each primary color (for a printer that’s usually Cyan, Magenta, Yellow and Black [CMYK] — though, photo printers generally use more primary colors, including a couple types of black and even a gray). Each one of these colors takes up space on the page, and thus, while a printer might be able to print at 2400 DPI, it’s really only something like 600 DPI if it needs to render different shades of color using CMYK. So, it can render a document with just black text at 2400 DPI, but a full color photo only at 600 DPI. [There is a different print process where the primary colors can overlap and thus they take up much less space — so comparing resolutions between monitors and printers is that much more complicated ;]

  22. Steve Lawson

    Like Josh said, 1080p has nothing to do with PPI (or DPI). What it means is, on any screen, no matter the size, there are going to be 1080 pixels across its face. So, if the screen is small, say an iPad, then the actual physical number of pixels per inch will be around 1080/7.7″ = 140PPI. For a large screen TV that is, say, 50″ wide, the PPI will be much less: 1080/50 = 21.6.

    If you think of the monitor as paper, then it’s just different sizes of [video] paper. Then it might be easier to make a comparison between “display media” (like monitors), and print media (like what comes out of your ink-jet printer).

  23. Steve Lawson

    Oops, I got that wrong. 1080p is the number of pixels (or actually “lines”) in the vertical (or shorter) dimension. So, for the iPad case, the PPI would be 1080/5.8 = 186PPI

    And for the monitor, which would be around 28″ high, it would be: 1080/28 = 39PPI
