How to choose a smartphone with a good camera. Camera interpolation: what is it and why is it used? How to disable interpolation on a smartphone


The mobile phone market is flooded with models whose cameras have huge resolutions. There are even relatively inexpensive smartphones with 16-20 megapixel sensors. An uninformed buyer chases a "cool" camera and prefers the phone with the higher camera resolution, not even realizing that he is falling for the bait of marketers and sellers.

What is resolution?

Camera resolution is a parameter that indicates the final size of the image. It only determines how large the resulting picture will be, that is, its width and height in pixels. Importantly, picture quality does not change with it: a photo may turn out low quality yet large, simply because of its resolution.
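
Resolution is just pixel arithmetic. A minimal sketch (the frame dimensions below are typical values for 8 MP and 13 MP sensors, not taken from any particular phone):

```python
# Resolution describes image dimensions only, not quality.
def megapixels(width: int, height: int) -> float:
    """Return the resolution in megapixels (millions of pixels)."""
    return width * height / 1_000_000

# 3264 x 2448 is a typical 8 MP frame; 4160 x 3120 is a typical 13 MP frame.
print(megapixels(3264, 2448))  # ~7.99 MP
print(megapixels(4160, 3120))  # ~12.98 MP
```

Two frames with these dimensions differ only in size; nothing in the arithmetic says anything about how detailed or noisy the picture is.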

Resolution does not affect quality, which had to be said before discussing smartphone camera interpolation. Now we can get straight to the point.

What is camera interpolation in a phone?

Camera interpolation is an artificial increase in image resolution. It is the image that is enlarged, not the sensor. That is, special software interpolates an image with a resolution of 8 megapixels up to 13 megapixels or more (or less).

If we draw an analogy, camera interpolation is similar to binoculars. These devices enlarge the image, but do not make it look better or more detailed. So if interpolation is indicated in the phone's specifications, then the actual camera resolution may be lower than stated. It's not good or bad, it just is.

What is it for?

Interpolation was invented to increase the size of the image, nothing more. Now this is a ploy by marketers and manufacturers who are trying to sell a product. They indicate in large numbers on the advertising poster the resolution of the phone's camera and position it as an advantage or something good. Not only does resolution itself not affect the quality of photographs, but it can also be interpolated.

Literally 3-4 years ago, many manufacturers were chasing megapixel counts and tried in various ways to cram sensors with as many megapixels as possible into their smartphones. This is how smartphones appeared with cameras of 5, 8, 12, 15, and 21 megapixels that could nevertheless take photographs no better than the cheapest point-and-shoot cameras. Yet when buyers saw the "18 MP camera" sticker, they immediately wanted to buy such a phone. With the advent of interpolation, it became easier to sell such smartphones thanks to the ability to artificially add megapixels to the camera. Of course, photo quality did improve over time, but certainly not because of resolution or interpolation; it improved through natural progress in sensor and software development.

Technical side

So what is camera interpolation in a phone technically, given that the text above described only the basic idea?

Using special software, new pixels are "drawn" onto the image. For example, to double the size of an image, a new line is added after each line of pixels, and each pixel in the new line is filled with a color calculated by a special algorithm. The very first method was to fill the new line with the colors of the nearest pixels. The result of such processing is terrible, but the method requires a minimum of computation.
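
The row-duplication idea above can be sketched in a few lines of Python (a toy illustration on a tiny grayscale grid, not any phone's actual firmware):

```python
def upscale_rows_nearest(image):
    """Double an image's height by repeating each row of pixels.

    `image` is a list of rows; each row is a list of pixel values.
    This is the crude 'nearest pixel' fill described above: every
    inserted row is just a copy of the row before it.
    """
    result = []
    for row in image:
        result.append(list(row))   # original row
        result.append(list(row))   # inserted row, nearest-neighbour fill
    return result

original = [[10, 20],
            [30, 40]]
print(upscale_rows_nearest(original))
# [[10, 20], [10, 20], [30, 40], [30, 40]]
```

No new information appears: every inserted row is a literal copy, which is why this method produces blocky results.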

Most often, another method is used: new rows of pixels are added to the original image, and each new pixel is filled with a color calculated as the average of its neighboring pixels. This method gives better results but requires more computation.

Fortunately, modern mobile processors are fast, and in practice the user does not notice how the program edits the image, trying to artificially increase its size.
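The averaging method can be sketched the same way (again a toy illustration; real camera software interpolates in both dimensions and per color channel):

```python
def upscale_rows_average(image):
    """Double an image's height; each inserted row averages its neighbours."""
    result = []
    for i, row in enumerate(image):
        result.append(list(row))
        if i + 1 < len(image):
            below = image[i + 1]
            # Inserted pixel = mean of the pixel above and the pixel below.
            result.append([(a + b) // 2 for a, b in zip(row, below)])
        else:
            result.append(list(row))  # last row has no neighbour below
    return result

print(upscale_rows_average([[10, 20], [30, 40]]))
# [[10, 20], [20, 30], [30, 40], [30, 40]]
```

The inserted values are plausible blends of existing pixels, so gradients look smoother, but the averaging still invents nothing that was not already in the frame.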

There are many advanced interpolation methods and algorithms, and they are constantly being improved: transitions between color boundaries get better, lines become more accurate and clear. But no matter how these algorithms are built, the basic idea of camera interpolation stays the same and is unlikely to change in the near future: interpolation cannot make an image more detailed, add new details, or improve it in any other way. Only in films does a small blurry picture become clear after a couple of filters; in practice this cannot happen.

Do you need interpolation?

Many users, out of ignorance, ask on various forums how to enable camera interpolation, believing that it will improve the quality of their images. In fact, interpolation will not improve the picture and may even make it worse: new pixels are added to the photo, and because the fill colors are not always calculated accurately, the photo may end up with undetailed areas and graininess. As a result, quality drops.

So interpolation in a phone is a marketing ploy that is completely unnecessary. It can inflate not only the resolution of the photo but also the cost of the smartphone itself. Don't fall for the tricks of sellers and manufacturers.

The smartphone has an 8 MPix camera. What does interpolation up to 13 MPix mean?

    Good day.

    This means that your smartphone stretches a photo/image taken with an 8 MPix camera to 13 MPix. And this is done by moving the real pixels apart and inserting additional ones.

    But if you compare the quality of a photo taken at a real 13 MP with one taken at 8 MP and interpolated to 13, the quality of the second will be noticeably worse.

    To put it simply, when creating a photo, the smartphone's processor adds its own pixels to the real pixels from the sensor: it calculates the picture and draws it up to a size of 13 megapixels. The output is an 8 MP sensor and a photo with a resolution of 13 megapixels. The quality doesn't really improve from this.

    This means that the camera can take a photo of up to 8 MPix, but the software can enlarge the photo up to 13 MPix. The enlargement is purely programmatic: the image does not get any better, and in reality it is still an 8 MPix image. This is purely a manufacturer's trick, and such smartphones cost more.

    This concept means that the camera of your device will still take photos at 8 MPix, but the software can now enlarge them to 13 MPix. The quality does not get any better; the space between the pixels simply gets filled in, that's all.

    This means that your camera has 8 MPix and that is what it stays, no more and no less; everything else is a marketing ploy, a pseudo-scientific fooling of the public in order to sell the product at a higher price, nothing more. This function is useless: photo quality is lost during interpolation.

    On Chinese smartphones this is used all the time now: a 13 MP camera sensor costs much more than an 8 MP one, so they install an 8 MP sensor and let the camera application stretch the resulting image. As a result, the quality of these "13 MP" photos is noticeably worse if you view them at the original resolution.

    In my opinion, this function is of no use at all, since 8 MP is quite enough for a smartphone (in principle, 3 MP is enough for me); the main thing is that the camera itself is of high quality.

    Camera interpolation is a trick of the manufacturer; it artificially inflates the price of a smartphone.

    If you have an 8 MPix camera, it can take a picture of the corresponding size; interpolation does not improve the quality of the photo, it simply enlarges the photo to 13 megapixels.

    The fact is that the real camera in such phones is 8 megapixels. But with the help of internal programs, images are stretched to 13 megapixels. In fact, it doesn't reach the actual 13 megapixels.

    Megapixel interpolation is a software stretching of the image. Real pixels are moved apart, and additional ones are inserted between them, colored with the average of the colors of the pixels that were moved apart. Nonsense, self-deception that no one needs. The quality doesn't improve.

  • Interpolation is a method of finding intermediate values.

    If all this is translated into more human language, as applied to your question, you get the following:

    • The software can process (enlarge, stretch) files up to 13 MPix.
    • "Up to 13 MPix" could mean 8 real MPix, as in your case, or 5 real MPix. The camera software interpolates the camera's output to 13 MPix, not enhancing the image but electronically enlarging it. Simply put, it is like a magnifying glass or binoculars. The quality doesn't change.

What is camera interpolation?

All modern smartphones have built-in cameras whose software can enlarge the resulting images using special algorithms. From a mathematical point of view, interpolation is a method of finding intermediate values of a quantity from an existing set of discrete known values.

The interpolation effect is somewhat reminiscent of a magnifying glass. The smartphone software does not increase the clarity and sharpness of the image. It simply expands the image to the required size. Some smartphone manufacturers write on the packaging of their products that the built-in camera has a resolution of “up to 21 megapixels.” Most often we are talking about an interpolated image, which is of low quality.

Types of interpolation

Nearest neighbor method

The method is considered basic and is one of the simplest algorithms. Pixel parameters are determined from a single nearest point, so as a result each pixel simply doubles in size. The nearest-neighbor method does not require much computing power.

Bilinear interpolation

The pixel value is determined from the data of the four closest points recorded by the camera. The result of the calculation is a weighted average of the parameters of the 4 pixels that surround the point of origin. Bilinear interpolation smooths out the transitions between the color boundaries of objects. Images obtained with this method are significantly better in quality than those interpolated by the nearest-neighbor method.
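
The weighted averaging of four neighbors can be illustrated with a minimal sketch (grayscale values only; real software applies the same calculation per color channel):

```python
def bilinear(p00, p10, p01, p11, fx, fy):
    """Bilinear interpolation between four known pixels.

    p00..p11 are the pixel values at the corners of a unit square,
    (fx, fy) is the fractional position of the unknown point inside it
    (0 <= fx, fy <= 1). The result is a weighted average of all four.
    """
    top = p00 * (1 - fx) + p10 * fx        # interpolate along the top edge
    bottom = p01 * (1 - fx) + p11 * fx     # interpolate along the bottom edge
    return top * (1 - fy) + bottom * fy    # then blend the two edges

# A point exactly in the middle gets the plain average of the corners:
print(bilinear(0, 100, 50, 150, 0.5, 0.5))  # 75.0
```

Each corner's influence falls off linearly with distance, which is exactly why boundaries come out smoother than with nearest-neighbor fill.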

Bicubic interpolation

The color value of the desired point is calculated from the parameters of the 16 nearest pixels, with the closest points receiving the maximum weight in the calculation. Bicubic interpolation is actively used in the software of modern smartphones and produces a fairly high-quality image. Applying the method requires significant processing power and a high-resolution built-in camera.


Advantages and disadvantages

Science fiction films often show how a camera captures the face of a passerby and transmits the digital information to a computer; the machine enlarges the image, recognizes the face, and finds the person in a database. In real life, interpolation does not add new details to the image. It simply enlarges the original image using a mathematical algorithm while keeping its quality at an acceptable level.

Interpolation defects

The most common defects that occur when scaling images are:

  • Stair-stepping (aliasing);
  • Blurring;
  • Halo effects.

All interpolation algorithms maintain a certain balance among the listed defects. Reducing stair-stepping inevitably increases blur and halos, while increasing sharpness makes halos and stair-stepping more pronounced, and so on. In addition to the listed defects, interpolation can introduce various graphic "noise" that can be seen at maximum magnification: "random" pixels and textures that are unusual for the object.

Image interpolation occurs in every digital photograph at some stage, be it demosaicing or scaling. It happens whenever you resize an image or remap it from one pixel grid to another. Resizing is needed when you increase or decrease the number of pixels, while remapping can occur in a variety of cases: correcting lens distortion, changing perspective, or rotating the image.


Even if the same image is resized or scanned, the results can vary significantly depending on the interpolation algorithm. Since any interpolation is just an approximation, the image will lose some quality whenever it is interpolated. This chapter is intended to provide a better understanding of what affects the results - and thereby help you minimize any loss of image quality caused by interpolation.

Concept

The essence of interpolation is to use available data to estimate values at unknown points. For example, if you wanted to know the temperature at noon but measured it at 11 o'clock and at one o'clock, you could estimate its value by applying linear interpolation:

If you had an extra measurement at half past twelve, you could notice that the temperature rose faster before noon and use that extra measurement to perform a quadratic interpolation:

The more temperature measurements you have around midday, the more complex (and expectedly more accurate) your interpolation algorithm can be.
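
Both estimates from the temperature example can be written out directly (a small sketch; the times and temperature readings are made-up illustration values):

```python
def linear_interp(t0, v0, t1, v1, t):
    """Linearly interpolate a value at time t from two measurements."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Measured 20 degrees at 11:00 and 26 degrees at 13:00 -> estimate for noon:
print(linear_interp(11, 20.0, 13, 26.0, 12))  # 23.0

def quadratic_interp(xs, ys, x):
    """Lagrange interpolation through three points (a quadratic curve)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# An extra reading of 24 degrees at 11:30 shows the temperature rising
# faster before noon, so the quadratic estimate comes out higher:
print(quadratic_interp((11, 11.5, 13), (20.0, 24.0, 26.0), 12))
```

With only two points the best you can do is a straight line; the third measurement lets the curve bend, which is the whole point of higher-order interpolation.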

Example of resizing an image

Image interpolation works in two dimensions and tries to achieve the best approximation of pixel color and brightness based on the values of the surrounding pixels. The following example illustrates how scaling works:

[Figure: 2D interpolation example: the original image, the enlargement with interpolation, and the enlargement without interpolation]

Unlike fluctuations in air temperature and the ideal gradient above, pixel values ​​can change much more dramatically from point to point. As with the temperature example, the more you know about the surrounding pixels, the better the interpolation will work. This is why the results quickly deteriorate as the image is stretched, and also why interpolation can never add detail to an image that is not there.

Image rotation example

Interpolation also occurs every time you rotate or change the perspective of an image. The previous example was misleading because it is a special case in which interpolators usually work quite well. The following example shows how quickly detail can be lost in an image:

[Figure: image degradation: original; 45° rotation; 90° rotation (no loss); two 45° rotations; six 15° rotations]

A 90° rotation is lossless, since no pixel has to be placed on the border between two pixels (and thereby divided). Notice how much detail is lost on the first rotation, and how quality continues to drop with each subsequent one. You should therefore avoid rotation as much as possible; if a crooked frame requires it, do not rotate more than once.

The results above use the so-called "bicubic" algorithm and show significant quality degradation. Notice how the overall contrast decreases due to reduced color intensity, and how dark halos appear around the light blue areas. The results can be considerably better depending on the interpolation algorithm and the subject of the image.

Types of interpolation algorithms

Common interpolation algorithms can be divided into two categories: adaptive and non-adaptive. Adaptive methods vary depending on the subject of interpolation (hard edges, smooth texture), while non-adaptive methods treat all pixels equally.

Non-adaptive algorithms include the nearest-neighbor method, bilinear, bicubic, splines, cardinal sine (sinc), the Lanczos method, and others. Depending on their complexity, they use from 0 to 256 (or more) adjacent pixels for interpolation. The more adjacent pixels they include, the more accurate they can be, but this comes at the cost of a significant increase in processing time. These algorithms can be used both for resizing images and for remapping them.

Adaptive algorithms include many commercial ones in licensed programs such as Qimage, PhotoZoom Pro, Genuine Fractals, and others. Many of them apply a different version of their algorithm (based on pixel-by-pixel analysis) when an edge is detected, in order to minimize unsightly interpolation defects in the places where they are most visible. These algorithms are primarily designed to maximize defect-free detail in enlarged images, so some of them are not suitable for rotating or changing the perspective of an image.

Nearest neighbor method

This is the most basic of all interpolation algorithms and requires the least processing time because it only takes into account one pixel - the one closest to the interpolation point. As a result, each pixel simply becomes larger.

Bilinear interpolation

Bilinear interpolation considers a 2x2 square of known pixels surrounding an unknown one. The weighted average of these four pixels is used as the interpolated value. The result is images that look significantly smoother than the result of the nearest neighbor method.

The diagram on the left is for the case where all known pixels are equal, so the interpolated value is simply their sum divided by 4.

Bicubic interpolation

Bicubic interpolation goes one step further than bilinear, looking at a 4x4 array of surrounding pixels, 16 in total. Since they lie at different distances from the unknown pixel, the nearest pixels receive more weight in the calculation. Bicubic interpolation produces noticeably sharper images than the previous two methods and is arguably the best compromise between processing time and output quality. For this reason it has become standard in many image editing programs (including Adobe Photoshop), printer drivers, and in-camera interpolation.
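
The distance-based weighting can be illustrated with the 1D cubic convolution kernel commonly used for bicubic resampling (a sketch using the Keys kernel with a = -0.5; a full bicubic implementation applies this along both axes over the 4x4 neighborhood):

```python
def cubic_weight(x, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 gives Catmull-Rom).

    Samples close to the interpolation point get the largest weight;
    beyond a distance of 2 pixels the weight is zero.
    """
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp_1d(samples, frac):
    """Interpolate between samples[1] and samples[2] at fraction `frac`.

    `samples` holds four consecutive pixel values; `frac` in [0, 1] is
    the position of the unknown point between the middle two.
    """
    offsets = (-1 - frac, -frac, 1 - frac, 2 - frac)
    return sum(s * cubic_weight(d) for s, d in zip(samples, offsets))

# Halfway between the two middle samples of a linear ramp stays on the ramp:
print(cubic_interp_1d([10, 20, 30, 40], 0.5))  # 25.0
```

Note how the two outer samples receive small negative weights; that slight overshoot is what makes bicubic output look sharper than bilinear, and also what produces halos at hard edges.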

Higher order interpolation: splines and sinc

There are many other interpolators that take more surrounding pixels into account and thus are more computationally intensive. These algorithms include splines and cardinal sine (sinc), and they retain most of the image information after interpolation. As a result, they are extremely useful when an image requires multiple rotations or perspective changes in separate steps. However, for single zooms or rotations, such higher order algorithms provide little visual improvement with a significant increase in processing time. Moreover, in some cases, the cardinal sine algorithm performs worse on a smooth section than bicubic interpolation.

Observable interpolation defects

All non-adaptive interpolators try to find the optimal balance between three undesirable defects: boundary halos, blur, and aliasing.

Even the most advanced non-adaptive interpolators are always forced to increase or decrease one of the above defects at the expense of the other two; as a result, at least one of them will be noticeable. Notice how similar the edge halo is to the defect produced by sharpening with an unsharp mask, and how it increases apparent sharpness.

Adaptive interpolators may or may not create the defects described above, but at large scales they can also produce textures or single pixels that are unusual for the original image.

On the other hand, some "defects" of adaptive interpolators can also be considered advantages. Because the eye expects to see fine detail in finely textured areas such as foliage, such patterns can fool the eye at a distance (for certain types of subject matter).

Smoothing

Smoothing, or anti-aliasing, is a process that attempts to minimize the appearance of jagged diagonal edges, which give text or images a rough digital look.


[Figure: edge detail at 300% magnification]

Anti-aliasing removes these jaggies and gives the appearance of softer edges and higher resolution. It takes into account how much of the ideal edge overlaps each adjacent pixel: a jagged edge is simply rounded up or down with no intermediate values, while a smooth edge receives a value proportional to how much of each pixel the edge covers.
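
The proportional-coverage idea can be sketched as follows (a toy model with a single grayscale pixel; the `coverage` fraction is assumed to be computed elsewhere from the edge geometry):

```python
def edge_pixel(coverage, aliased):
    """Shade a pixel crossed by an ideal edge.

    `coverage` is the fraction of the pixel (0..1) lying on the dark
    side of the edge. Without anti-aliasing the pixel is rounded to
    fully dark or fully light; with it, the shade is proportional.
    """
    if aliased:
        return 255 if coverage < 0.5 else 0  # hard threshold -> jaggies
    return round(255 * (1 - coverage))       # proportional shade -> smooth edge

# A pixel 30% covered by the dark side of the edge:
print(edge_pixel(0.3, aliased=True))   # 255 (rounded to the background)
print(edge_pixel(0.3, aliased=False))  # 178 (intermediate grey)
```

The intermediate grey encodes where inside the pixel the edge actually lies, which is the sub-pixel position information the next paragraph refers to.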

An important consideration when enlarging images is to avoid excessive aliasing resulting from interpolation. Many adaptive interpolators detect the presence of edges and adjust to minimize aliasing while maintaining edge sharpness. Since a smoothed edge contains information about its position at a higher resolution, it is quite possible for a powerful adaptive (edge-detecting) interpolator to at least partially reconstruct the edge when enlarging.

Optical and digital zoom

Many compact digital cameras can perform both optical and digital magnification (zoom). Optical zoom is achieved by moving the zoom lens elements so that the image is magnified before it reaches the digital sensor. In contrast, digital zoom reduces quality, since it simply interpolates the image after the sensor has captured it.


[Figure: optical zoom (10x) vs digital zoom (10x)]

Even though a photo taken with digital zoom contains the same number of pixels, its detail is clearly inferior to one taken with optical zoom. Digital zoom should be avoided almost entirely, except when it helps you see a distant object on your camera's LCD screen. On the other hand, if you normally shoot in JPEG and expect to crop and enlarge the image later, digital zoom has the advantage of performing its interpolation before compression artifacts are introduced. If you find you need digital zoom too often, buy a teleconverter or, better yet, a lens with a longer focal length.