The proposed system recognizes the denomination of Indian banknotes, with a focus on visually impaired users. The system is divided into two parts: the first recognizes a banknote by its colour or by patterns specific to that note; the second produces audio messages announcing the value of a banknote placed in front of a camera, processing each frame of the continuous video stream. The work takes its theoretical basis from image processing techniques, and the processing is simulated in MATLAB.
The basic techniques used in the proposed system include edge detection, RGB plane extraction, the SURF method, and finally speech output based on different mean intensities. From this method, a well-developed system can be built that helps visually impaired people overcome their disability and lead a more independent life in society.
Keywords- currency recognition, visually impaired, edge detection, RGB plane extraction, text-to-speech
Currently, India has around one-third of the world’s blind population.
There are about 12 million individuals with visual impairment in India, against a global total of 39 million, according to a report published by the National Programme for Control of Blindness (NPCB). The new banknotes introduced by the RBI after demonetisation are proving extremely difficult for blind people to handle, because their sizes are almost identical.
The old notes differed in size by 10 mm between denominations, in height or width; for the new notes this difference has been reduced to 4 mm. In fact, the old Rs 20 note and the new Rs 200 note are the same size.
Since the two are also similar in colour, people with low vision find it difficult to differentiate between them and make appropriate transactions.
This problem affects over 50 lakh blind people in India and lakhs of senior citizens with low eyesight. Blind people recognise notes by their size, while people with low vision and those who cannot read identify them by their colour.
The new notes, however, have increased the challenge. Although blind users can operate computers and apps through assistive technology, digital money still remains largely inaccessible. Completely blind people need differently sized notes and tactile marks that can be easily felt by touch, while people with low vision need contrasting colours and large fonts.
These people face great difficulty in their financial transactions. Measuring the length of a currency note with the palms and fingers can lead to inaccurate judgement, and the tactile symbols printed on the notes wear away too quickly to be relied on for identification. One available solution is a mechanical device, such as a scale with markings for the lengths of the various denominations. We aim to provide an inclusive solution in the form of an application installed on the user's mobile phone. This application uses the camera module to recognise the currency and informs the user through an audio output.
The objective of the proposed system is to develop a solution to this problem so that blind people can feel safety and confidence in their financial dealings.
In , the authors use simple image processing techniques such as thresholding, noise removal, histogram equalization, and segmentation to extract the ROI and facilitate the template matching procedure. Correlation-based template matching is then used to find the ROI in the dataset images. The average accuracy was 89%.
In , the authors proposed a system based on a Raspberry Pi board and a Raspberry Pi camera with infrared light embedded in a pair of sunglasses. Haar features and the AdaBoost algorithm are used for classification and detection, while the SURF method is used for banknote recognition.
In , the authors proposed a system based on the ORB (Oriented FAST and Rotated BRIEF) method, extracting features through FAST corner detection and exploiting the rotation invariance of ORB. The recognition rate shows 90% accuracy.
In , banknotes are processed through different image processing techniques such as edge detection, segmentation, and feature extraction, and classification algorithms such as K-Nearest Neighbour (k-NN) are used for classification.
In , a robust method is used for identifying notes, based on recognising the geometrical patterns that characterize the different denominations of banknotes.
The goal is to develop an Android application that captures banknotes online and uses template matching against the test image to generate the required audio output.
In image processing, an edge is the boundary between an object and its background; edges delimit individual objects. Therefore, if the edges of the objects in an image can be identified with precision, all the objects can be located and their properties, such as area and shape, can be calculated.
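To illustrate the idea, the following is a minimal edge-detection sketch using the classical Sobel operator on a toy greyscale image (the paper's MATLAB implementation would typically use a built-in `edge` function; the kernels and threshold here are standard but the image and threshold value are illustrative assumptions):

```python
# Minimal Sobel edge-detection sketch (pure Python, hypothetical toy image).
# The 3x3 Sobel kernels approximate horizontal and vertical intensity
# gradients; pixels with a large gradient magnitude are marked as edges.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_edges(img, threshold=100):
    """Return a binary edge map for a 2-D list of grey levels."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# Toy 5x5 image: dark left half, bright right half -> vertical edge in middle.
img = [[0, 0, 255, 255, 255]] * 5
print(sobel_edges(img))
```

The edge map is 1 exactly where the intensity jumps from the dark to the bright region, which is the boundary a banknote ROI extractor would trace.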
Feature extraction is a special form of dimensionality reduction. When the input to an algorithm is too large to process and much of it is redundant, the input data are converted into a reduced representative set of features. Transforming the input data into this set of features is called feature extraction. If the features are chosen carefully, the feature set captures the relevant information in the input data, so the required task can be performed using this reduced representation instead of the full-size input.
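A concrete, if simplistic, instance of this reduction is collapsing a full image into a short intensity histogram. The 4-bin feature below is a hypothetical example for illustration, not the feature set used in the proposed system:

```python
# Feature-extraction sketch: reduce a full-resolution grey image to a compact
# 4-bin intensity histogram (an illustrative feature set, not the paper's own).

def histogram_feature(img, bins=4):
    """Map every pixel into one of `bins` equal-width intensity ranges."""
    counts = [0] * bins
    width = 256 / bins
    for row in img:
        for px in row:
            counts[min(int(px / width), bins - 1)] += 1
    total = sum(counts)
    # Normalise so images of different sizes yield comparable features.
    return [c / total for c in counts]

img = [[0, 10, 200, 250], [30, 60, 130, 255]]
print(histogram_feature(img))  # → [0.5, 0.0, 0.125, 0.375]
```

Eight pixels collapse to four numbers, yet the feature still conveys that the image is dominated by dark and very bright regions.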
In order to confirm image similarity, we check whether the keypoints in the test image are spatially consistent with those in the retrieved images.
The recognized text codes are recorded in script files. We then employ a text-to-speech converter to load these files and play the audio output of the text information. Blind users can adjust the speech rate, volume, and language according to their preferences.
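The announcement step above can be sketched as follows. This fragment only builds the message text and appends it to a script file; in deployment, the string would be handed to a text-to-speech engine (for example, Android's TextToSpeech API). The message wording and file name are assumptions for illustration:

```python
# Sketch of the announcement step, assuming the recogniser has already
# produced a denomination value. Only the message construction and script-file
# recording are shown; the actual speech synthesis is left to the TTS engine.

def announce(denomination, script_path="announcements.txt"):
    """Format the audio message and append it to a script file."""
    message = "The note is {} rupees".format(denomination)
    with open(script_path, "a") as f:
        f.write(message + "\n")
    return message

print(announce(200))  # → The note is 200 rupees
```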
Four sets of image samples (front and back), captured offline, were used: Rs.50, Rs.100, Rs.200, and Rs.500. The samples were first separated into their RGB planes. Edge detection was applied to these samples to extract the ROI. The mean intensity of each RGB plane was then computed to differentiate the notes from one another; these per-plane mean intensities distinguish banknotes on the basis of their colour.
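The colour-based differentiation described above can be sketched as a nearest-mean classifier over per-channel mean intensities. The reference means below are illustrative placeholders, not values measured from real banknote images:

```python
# Nearest-mean classification sketch using per-channel mean intensities.
# REFERENCE_MEANS holds ILLUSTRATIVE (R, G, B) values per denomination,
# not measurements from actual Indian banknotes.

REFERENCE_MEANS = {
    "Rs.50":  (150, 160, 180),
    "Rs.100": (140, 150, 170),
    "Rs.200": (210, 160, 100),
    "Rs.500": (120, 130, 120),
}

def channel_means(img):
    """Mean intensity of each RGB plane; img is rows of (r, g, b) pixels."""
    n = sum(len(row) for row in img)
    totals = [0, 0, 0]
    for row in img:
        for r, g, b in row:
            totals[0] += r
            totals[1] += g
            totals[2] += b
    return tuple(t / n for t in totals)

def classify(img):
    """Pick the denomination whose reference means are closest (Euclidean)."""
    means = channel_means(img)
    return min(REFERENCE_MEANS,
               key=lambda d: sum((m - r) ** 2
                                 for m, r in zip(means, REFERENCE_MEANS[d])))

img = [[(212, 158, 98), (208, 162, 102)]]  # toy patch near the Rs.200 reference
print(classify(img))  # → Rs.200
```

In the full system the classifier's output string would then feed the audio announcement stage.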
The proposed system for Indian currency recognition has been implemented in MATLAB. The expected outcome is that the system generates audio output when a person captures an image of a banknote in the Android application.
The proposed system starts by capturing a still image. Simple image processing techniques are then used to extract the ROI.
Audio output is generated based on the different mean intensities of extracted RGB planes of banknotes.