In today's world, photographs and videos have become an important source of information. These illusions play a big role in decision making, whether the decisions concern politics, business, social life, diplomacy or even warfare. In the past, these illusions have been the closest representation of reality (Macdonald, 2007). The alteration of images and videos has existed since the early global wars: armies used mannequins to inflate their apparent size and mislead the enemy. As technology evolved, the use of image and video modification evolved with it.
Propagandists and others looking to deceive people used crude techniques to modify images of real people, events and objects (Macdonald, 2007).
Computers enable the creation of almost any image, still or moving, complete with plausible accompanying audio. This has substantially increased the number of manipulated images and videos circulating in newspapers, on social media and across the internet, where they serve as powerful tools for swaying human decision making. YouTube is one of the leading social media platforms for sharing videos.
As a result, it is plagued with misleading content: staged videos presented as actual footage of an event, videos with false captions, and videos in which recordings are distorted (Palod et al., 2019). A number of methods and software tools are therefore being developed to detect such images and videos.
Types of Altered Images and Videos
An image or video may be real, i.e. it shows something that actually happened, but be mislabelled to make a point or attract shares. Some images deceive through alteration and staging, while others are genuine images that deceive through false captioning rather than through the image itself. They are usually created to promote a biased view of politics, business or some other agenda. Such images have the power to spread rapidly on social media, on the internet and in news feeds, where the audience often sees only a headline and a brief summary of the full article. The image below is an example of a real image with a misleading caption.
The top tech giants, Google and Facebook, have introduced reporting and flagging tools as one measure against misleading photographs and videos. A few basic steps help in recognising such images or videos: take a closer look at the story; check the source of the image and the article; scrutinise whether the source is reliable; and always look beyond the headlines or captions, since these are manipulated to deceive the viewer. Look for the facts, and then consider whether your own views or beliefs are affecting your judgement (Webwise, 2019).
Other illusions are not real at all, either because they have been staged or digitally doctored. The growing popularity of social media and the economic opportunities around content have led to the creation of manipulated images and spam videos (Springer, 2019). The motives behind such creations include business manipulation, political propaganda and the like, in order to gain more views and to deceive or change one's judgement. Such illusions are used to create memes as well. Below is a picture of US President Donald Trump rescuing survivors during a hurricane, followed by a picture that news channels broadcast during Hurricane Irene in 2011. Both images were altered and shown on television, misleading viewers.
Detection of Misleading Illusions
A number of algorithms and techniques are used to detect altered images and videos. With developments in Artificial Intelligence and Machine Learning, detecting these misleading illusions has become less complex. Two such techniques are described below.
Error Level Analysis (ELA)
Image error level analysis is a technique used to detect altered images and videos. It helps identify manipulations of compressed JPEG images by examining the error distribution introduced when the image is resaved at a specific compression rate (Krawetz, 2012). The algorithm detects foreign objects spliced into the original image by analysing the quantisation of blocks of pixels across the image. The quantisation level of an object added or altered by an editor may differ from that of other objects in the image, particularly if either the original image or the altered image was compressed in JPEG format (Belkasoft Research, 2013).
In JPEG images, a composite whose parts have undergone different numbers of compressions will show dissimilar compression artifacts. To make these subtle differences visible, the data being analysed is subjected to an additional round of compression, this time at a uniform level, and the result is subtracted from the original data under examination. Variations in the resulting difference image are then inspected manually for compression artifacts (Krawetz, 2007).
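The resave-and-subtract procedure described above can be sketched in a few lines of Python. This is a minimal illustration, assuming the Pillow imaging library is available; the function name and parameter values are illustrative, not taken from any of the cited tools.

```python
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(image, quality=90, scale=15):
    """Resave the image as JPEG at a known quality, subtract the
    result from the original, and amplify the difference so regions
    with a different compression history stand out."""
    original = image.convert("RGB")
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Amplify the small error levels so they are visible to the eye.
    return diff.point(lambda value: min(255, value * scale))
```

In the resulting difference image, regions that respond differently to the uniform recompression (bright or textured patches against an otherwise dark background) are the candidates for manual inspection.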
ELA has a few limitations. The biggest is that it is not applicable to PNG images, i.e. data without lossy compression. Furthermore, because an edit can be made and the entire composite then recompressed at a consistent level, the presence of a uniform compression level does not rule out manipulation. ELA also produces ineffective results for images whose colour space contains 256 or fewer colours (FotoForensics, 2015).
Linear Time Invariant Filtering
Another technique used for tampering detection in digital images and videos is linear time invariant (LTI) filtering. This technique applies an LTI filter to time-series data representing images, videos or live captures. When the camera captures them, a causal relationship in time exists between the pictures within the two recordings and the still picture. The computing framework uses the LTI filter to decide whether the relationship is causal. If the system output determines that the filtered image is non-causal, a warning is issued indicating that the image has been altered after capture (Tampering detection for digital images, 2016).
As shown above, the filter first receives the time-series data representing the images, videos or still images, and the Linear Time Invariant Filter is then applied to that data. If the filtered output representing the still image or video turns out to depend on a later image, the content is considered to have been modified.
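The causality check at the heart of this approach can be illustrated on a one-dimensional signal. The sketch below is an interpretation of the idea, not the patented method: a causal FIR filter produces output that is uncorrelated with future input samples, so significant correlation with the future is treated as a tampering signal. All names and the threshold are assumptions for illustration.

```python
import random

def causal_fir(signal, taps):
    """Apply a causal FIR filter: each output sample depends only on
    the current and past input samples."""
    return [sum(taps[k] * signal[n - k]
                for k in range(len(taps)) if n - k >= 0)
            for n in range(len(signal))]

def future_dependence(signal, output, max_lag=5):
    """Normalised correlation between the output and *future* input
    samples; near zero when the output was produced causally."""
    norm = sum(v * v for v in signal)
    scores = []
    for lag in range(1, max_lag + 1):
        score = sum(signal[n + lag] * output[n]
                    for n in range(len(signal) - lag))
        scores.append(abs(score) / norm)
    return max(scores)

def looks_tampered(signal, output, threshold=0.1):
    """Flag the recording when it correlates with future input."""
    return future_dependence(signal, output) > threshold
```

Running a causal filter over white noise yields near-zero future dependence, whereas a recording shifted to depend on later samples is flagged immediately.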
As technology advances, the ability to alter images flawlessly, with little or no chance of detection, increases, especially when the time frame for analysis is short. With the large number of images and videos surfacing on the internet, in business and in the news, it is important to detect such fraudulent cases. The quality of an altered photograph or video depends on both the sophistication of the technology used and the skill of the manipulator (Macdonald, 2007). Software tools such as Adobe Photoshop, Meitu and Lyrebird are used to alter images and videos, and Adobe is currently developing an equivalent of Photoshop for audio. Detecting such material is therefore necessary.
Detection using dedicated software is one available method, although small businesses sometimes find such high-end tools hard to afford. As a basic first step, it is good practice to check the metadata of the image or video, which provides in-depth detail such as the pixel dimensions, format and camera maker. Reverse image search is another detection method suited to a small-scale operation: it is a content-based image retrieval query technique in which the system is given a sample image on which to base its search (Wikipedia, 2013). Google Images and TinEye can be used for reverse image search, while Verifeyed, UCNet and the Fake Video Corpus are tools that have been used by banks and law enforcement agencies to detect misleading illusions.
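The metadata check suggested above needs no high-end software. The sketch below, assuming the Pillow library, pulls out the basic properties a reviewer would inspect first; the function name is illustrative.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image(source):
    """Collect basic metadata worth checking on a suspect image:
    format, dimensions, colour mode, and any EXIF tags (which may
    record the camera maker, software used, and capture time)."""
    with Image.open(source) as img:
        exif = {TAGS.get(tag_id, tag_id): value
                for tag_id, value in img.getexif().items()}
        return {"format": img.format,
                "size": img.size,
                "mode": img.mode,
                "exif": exif}
```

Missing EXIF data, or a `Software` tag naming an editor, does not prove tampering on its own, but either is a reason to look closer.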
Alterations intended to deceive have been made since World War I. "A picture is worth a thousand lies: photographic deceptions" (Orwell, 1984). As the tools to edit and alter images have become cheap and easily available, anyone can use them to manipulate images. Leading manufacturers in the camera industry attempted to resolve these issues by introducing secure digital certificates, but technology found a way around that too. It therefore became necessary to develop techniques that automatically authenticate images through analysis. Solutions based on Artificial Intelligence and Machine Learning are being developed to detect such images and videos, and their adoption across the industry is necessary to deal with the huge amount of forged and fraudulent data.
- Tampering detection for digital images. (2019). US20180165835A1.
- Palod, P., Patwari, A., Bahety, S., Bagchi, S. and Goyal, P. (2019). Advances in Information Retrieval. [S.l.]: Springer Nature.
- Macdonald, S. (2007). Propaganda and information warfare in the twenty-first century. London: Routledge, Taylor & Francis Group.
- Kuznetsov, A., Severyukhin, Y., Afonin, O. and Gubanov, Y. (2019). Detecting Forged (Altered) Images. [online] Forensic Focus – Articles. Available at: [Accessed 11 Sep. 2019].
- Mantzarlis, A. (2019). Not fake news, just plain wrong: Top media corrections of 2017 – Poynter. [online] Poynter. Available at: [Accessed 11 Sep. 2019].
- Webwise.ie. (2019). Explained: What is Fake news? | Social Media and Filter Bubbles. [online] Available at: [Accessed 11 Sep. 2019].
- Sputniknews.com. (2019). Fake Photo of Trump Rescuing Hurricane Victims Surfaces Once More (PHOTOS). [online] Available at: [Accessed 11 Sep. 2019].
- Wolchover, N. (2019). Amazing photo of Hurricane Isaac is nothing but a fake. [online] msnbc.com. Available at: [Accessed 11 Sep. 2019].