
Sunday, August 31, 2014

The Best Smart Watch ever made


MOTO360 [Image from Mashable]


Watch becomes more than an accessory
Timekeeping devices are nothing short of a modern marvel. Clocks were among the first devices to be acknowledged by a wide audience and receive almost universal adoption. These devices were the genesis of all the revolutionary computing devices we have today. By closing the physical distance and radiating more personality, watches found even greater success. This is where the Watch becomes more than an accessory. By the end of this year we may well witness a revival of the lost art of watchmaking. More interestingly, this might also trigger rapid advancements in the wearables market. The recent venturing of tech giants into this arena should herald advances in wearable technology as well as personal healthcare.
THE PC HAS BEEN THE PRIMARY SOURCE FOR THE MAJORITY OF DIGITAL CONTENT
The PC has been the primary source for the majority of digital content. The smartphone has slowly but surely stepped up as an important entity of data creation. Google's continued disconnection from the PC world is due to the fact that Chrome OS hasn't taken off anywhere, nor does it play nice with other platforms. Android never promised an enhanced experience for sharing content across mobile devices and PCs. Instead, Google's persistence with a faux unified experience through a horde of "seamless" cloud apps has seen success, at least to an extent. Google's cloud apps are a welcome addition to Android, but they never bridged the gap that Ubuntu for Android tried to close and that OS X Yosemite's 'Continuity' with iOS possibly will.

SMART WATCH NEEDS AN INDEPENDENT IDENTITY


Look at where the phone, meant to be used as a remote PC, ended up. The Smartwatch has to stop being a remote view of a remote view of a remot… You get the point. There is no need to tie these watches down to the smartphone. The Smart Watch needs an Independent Identity. It must serve the purpose it was built for: keeping time must be the fundamental idea around which the other features converge.

YET ANOTHER VERSION OF ANDROID WAS NOT THE ANSWER WE NEEDED
It doesn't seem bizarre when all the tech companies working on the same ideas (maybe with different implementations) end up producing similar products. It is not difficult to imagine someone rushing into Google's office to announce "Apple is making an iWatch!!!", or the inverse at Apple's office. There is a unanimous effort from the companies to push into the Smart Watch business at full steam. Google's former protégé Motorola and Android accomplice LG have dived in first to bring out Android Wear watches. Amongst them, Motorola's MOTO360 is the more beautiful and also seems to be the one worth buying. But I fear yet another version of Android was not the answer we needed. There are alternative experiments happening in the market as well: Samsung's Tizen watches, Sony's SmartWatches, the crowd-funded success story Pebble, and then Apple's wearable (iWatch?). The Pebble has been heavily promoted on social media by authors who are backing the project, or who own one, or who simply assume many are rooting to own one. The Pebble is a good product but not 'the' best product. It is doggedly concentrated on being a remote notification device while leaving a huge gap in terms of appeal as a watch. This is where Apple will come into its own. It appears as though it will provide what the current vendors have failed to provide - a Watch. It will be down on hardware specifications compared to the existing market baseline, but there is no doubt it will sell. Make no mistake, I am a pro-Android user, but none of the devices from any manufacturer satiate my demands, and that is what I'm raving about in this article.


Introducing the quantum Smart Watch concept


Before everybody goes bonkers over Apple's new training kit and others start off a bandwagon of watches, I want to show off what is possible. I want to set the tone for the devices ahead, so I give you - "quantum".

Key elements for the design
  • WATCH = TIME
  • ‘Clean and Elegant’
  • ‘Simple and Easy’
  • ‘Precise and Timeless Presence’


Guidelines for building the Best Smart Watch


  • Visibility The perception of depth is key for a practical device that visually interfaces with the human body. Think 3D displays and/or virtual/augmented displays, Google Glass for instance.
  • Interoperability The Smart Watch needs an Independent Identity. Think a sync-free ecosystem, data connectivity only once in a while or completely absent, and content created within the device.
  • Mobility Anything with a wire is not a wearable, merely wire-enabled. Think solar charging, wireless power, and body-heat conversion.
  • Capability There is no need for a >400 MHz processor-equipped device to be on your arm. Think an OS-less device, a purpose-built device and dedicated functions.
  • Durability The materials must be adaptable to activities. Think hard metal bodies, scratch-resistant glass, durable straps, and water/dust resistance.
  • Usability Create time-driven events rather than content/data-driven events. Sync only when required.
  • Credibility Everybody, please stop ‘Dick Tracy’ing around!!! Think of commendable applications such as an NFC-based medical information card, emergency contact or emergency information, buzz alerts for taking pills or keeping drowsiness away while driving, a magnetic compass for navigation, gesture recognition, and non-invasive health monitoring.
Smart Watch Features Wish list

- AMOLED 3D display with Sapphire glass
- High-density, small-volume battery
- Transparent infrared solar cells
- Power over WiFi (a new standard) wireless charging option
- WiFi Direct support
- Water resistance and dust resistance
- NFC
- Accelerometer, Gyroscope, Digital Compass and Barometer
- Gesture control for higher devices

Healthcare Features Wish list

 - Geiger counter for Radiation sensing
 - Pedometer
 - In-contact Body temperature sensor and monitoring
 - Infrared Heart rate sensor and monitoring
 - Ultrasonic Blood pressure monitoring
 - Non-invasive Blood sugar monitoring
 - NFC assisted Global Medicare card with Emergency Distress Signals

Comparison Chart of Devices

                 | HTC Wildfire           | Sony Liveview                     | Motorola MOTOACTV | LG G Watch
-----------------|------------------------|-----------------------------------|-------------------|------------------------
Launch           | May 2010               | Dec 2010                          | Dec 2011          | July 2014
Processor Family | Qualcomm Snapdragon S1 | STMicroelectronics 32-bit ARM MCU | TI OMAP 3         | Qualcomm Snapdragon 400
Model            | MSM7225                | ST 32F103C6                       | OMAP3630-600      | MSM8226
CPU              | ARM11                  | ARM Cortex-M3                     | ARM Cortex-A8     | ARM Cortex-A7
Frequency        | 528 MHz                | 72 MHz                            | 600 MHz           | 787 MHz
GPU              | -                      | -                                 | PowerVR SGX530    | Adreno 305
Price            | Rs 24000               | Rs 9600                           | Rs 14950          | Rs 15300

Unanswered Questions from the Article

Will Android Wear take off?
When will Chrome OS be phased out as Android TV/Wear/Auto/@Home take flight?
Will existing companies making specialty commodities play ball or perish?

Download:
Download from DsynFLO box folder - https://app.box.com/s/wjver8chwlxdz32s4lei


Fine Print
The information mentioned here is purely the perception of the author and is not drawn from any data analysis study or data source; it is only opinion expressed by the author. Android, Apple, Braun, HTC, Google, LG, Microsoft, Motorola, Qualcomm, Sony and Obaku are registered trademarks of their respective entities. Names are used only for illustration. Neither DsynFLO nor the author has any affiliation with these entities, or vice versa.

Friday, August 15, 2014

simplAR 2: 99 Lines of Code for Augmented Reality with OpenCV using Chessboard

This is a simple program that implements augmented reality in OpenCV. It is a follow-up to the previous post, which was implemented in the old 1.0 API.

Files:
Download from DsynFLO box folder -  https://app.box.com/s/nh82nmjt2w3fxj85g399

Usage:
cmake .
make
./app

Overlay Image:
Photo by Bharath P.  Zero License.


Pattern:


Source:
//______________________________________________________________________________________
// Program : SimplAR 2 - OpenCV Simple Augmented Reality Program with Chessboard
// Author  : Bharath Prabhuswamy
//______________________________________________________________________________________

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

#define CHESSBOARD_WIDTH 6
#define CHESSBOARD_HEIGHT 5
//The pattern actually has 6 x 5 squares, but has 5 x 4 = 20 'ENCLOSED' corners

int main ( int argc, char **argv )
{

 Mat img;
 Mat display = imread("shingani.jpg");
 VideoCapture capture(0);

 Size board_size(CHESSBOARD_WIDTH-1, CHESSBOARD_HEIGHT-1);
    
 vector<Point2f> corners;

 if(display.empty())
 {
  cerr << "ERR: Unable to find overlay image.\n" << endl;
  return -1;
 }
 
 if ( !capture.isOpened() )
 {
  cerr << "ERR: Unable to capture frames from device 0" << endl;
  return -1;
 }
    
    int key = 0;
 
 while(key!='q')
 {
  // Query for a frame from Capture device
  capture >> img;

  Mat cpy_img(img.rows, img.cols, img.type());
  Mat neg_img(img.rows, img.cols, img.type());
  Mat gray;
  Mat blank(display.rows, display.cols, display.type());

        cvtColor(img, gray, CV_BGR2GRAY);
        
  bool flag = findChessboardCorners(img, board_size, corners);

  if(flag == 1)
  {            
   // Refine the detected corner locations to sub-pixel accuracy on the gray image
   cornerSubPix(gray, corners, Size(11,11), Size(-1,-1), TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1));
    
   vector<Point2f> src;   // Source Points basically the 4 end co-ordinates of the overlay image
   vector<Point2f> dst;   // Destination Points to transform overlay image 
   
   src.push_back(Point2f(0,0));
   src.push_back(Point2f(display.cols,0));
   src.push_back(Point2f(display.cols, display.rows));
   src.push_back(Point2f(0, display.rows));
 
   dst.push_back(corners[0]);
   dst.push_back(corners[CHESSBOARD_WIDTH-2]);
   dst.push_back(corners[(CHESSBOARD_WIDTH-1)*(CHESSBOARD_HEIGHT-1)-1]);
   dst.push_back(corners[(CHESSBOARD_WIDTH-1)*(CHESSBOARD_HEIGHT-2)]);
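   // Index arithmetic for the dst points above: with the 6 x 5 board the inner
   // corners form a 5 x 4 grid, so corners[0] is the top-left corner,
   // corners[4] the top-right, corners[19] the bottom-right and
   // corners[15] the bottom-left of the detected pattern.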
 
   // Compute the transformation matrix, 
   // i.e., transformation required to overlay the display image from 'src' points to 'dst' points on the image
   Mat warp_matrix = getPerspectiveTransform(src, dst);

   blank = Scalar(0);
   neg_img = Scalar(0);        // Image is black when pixel values are zero
   cpy_img = Scalar(0);        // Image is black when pixel values are zero

   bitwise_not(blank,blank);

   // Note the jugglery below: warpPerspective cannot composite two images of DIFFERENT sizes in one call, so we blend them via a mask

   warpPerspective(display, neg_img, warp_matrix, Size(neg_img.cols, neg_img.rows)); // Transform overlay Image to the position - [ITEM1]
   warpPerspective(blank, cpy_img, warp_matrix, Size(cpy_img.cols, cpy_img.rows));  // Transform a blank overlay image to position 
   bitwise_not(cpy_img, cpy_img);       // Invert the copy paper image from white to black
   bitwise_and(cpy_img, img, cpy_img);      // Create a "hole" in the Image to create a "clipping" mask - [ITEM2]      
   bitwise_or(cpy_img, neg_img, img);      // Finally merge both items [ITEM1 & ITEM2]
 
  }

  imshow("Camera", img);
  key = waitKey(1); // Wait 1 ms before grabbing the next frame
 }
    
 destroyAllWindows();
 return 0;
}

Sunday, August 10, 2014

OpenAR: OpenCV Augmented Reality Program



OpenAR is a very simple C++ implementation of marker-based augmented reality. OpenAR is based on OpenCV and is solely dependent on that library. OpenAR decodes markers in a frame of an image. OpenAR does not implement marker tracking across frames, nor does it implement template matching for marker decoding.

Demo:


Building up to openAR:
Some of the basic operations are independently discussed in the previous posts -
Link  Installing Ubuntu 14.04
Link  Installing OpenCV 2.4.9 in Ubuntu
Link  Building a simple OpenCV Program
Link  OTSU thresholding
Link  Corner Detection
Link  Connected Component extraction

Source:
Git: https://github.com/bharathp666/openAR

Download:
Download from DsynFLO box folder - https://app.box.com/s/p2cpo7i6vp9ilazk3dhv

Instructions:
cmake .
make
./openar

License:
ZERO License. Students, Geeks, Tramps alike, free for all. :)

Implementation Notes:
1. The program picks up one blob at a time and does not release it until it has verified whether it is the marker. This method was chosen to avoid creating yet another array containing the details of all blobs in the image.
2. Tracking the marker across subsequent frames is not implemented, to keep the program simple and understandable. (It was too complicated for me as well!)
3.     Augmentation jugglery



Tuning the code:
If you are facing issues reliably detecting markers, the following can be done (see the sketch after this list) -
1. Decrease the severity used to determine corners. Warning: possible segmentation fault (read the next section).
2. Decrease the severity of the blob-size constraints.
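
The sketch below shows a blob-acceptance test in the form used by the connected-component post later on this blog; relaxing its constants is what "decreasing the severity" in point 2 means. openAR's actual identifiers may differ, so treat this as an illustration only:

// Sketch of a blob-acceptance test; thresholds follow the
// connected-component post on this blog, openAR's own constants may differ.
bool accept_blob(int n, int rectw, int recth)   // n = pixel count of the blob
{
    const int min_blob_sze = 400;       // decrease to accept smaller blobs
    const int max_blob_sze = 150000;    // increase to accept larger blobs

    if(recth == 0) return false;        // guard against divide-by-zero
    double aspect_ratio = (double)rectw / (double)recth;

    return (n > min_blob_sze) && (n < max_blob_sze)        // plausible size
        && (aspect_ratio > 0.33) && (aspect_ratio < 3.0);  // roughly square
}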

Limitations:
1. Possible segmentation fault when too many corners are detected and the array of corners overflows.
2. Rotation of the image according to the pattern orientation is not taken care of, but it can be done easily.
3. Detection fails on blurry images caused by rapid movement of the markers.

Further enhancements:
Interested contributors can fork me at GitHub or mail me -
[  ] Code movement from legacy OpenCV 1.0 to 2.4.9
[  ] OpenGL integration - if required or may be OpenCV 3D visualization (3.0+)
[  ] Create an OpenAR library

Support:
For the most part, it should just work. In case you hit a problem you can comment below or mail me; I would be glad to help. I'm not a seasoned C++ programmer, so for any advanced assistance, posting a question at stackoverflow.com under the 'opencv' tag is recommended.

References and Further reading:
1. [Book] Learning OpenCV: Computer Vision with the OpenCV Library, by Gary Bradski and Adrian Kaehler, First Edition
2. ARlib - C++ Augmented Reality library, by Danny Diggins
3. Features from Accelerated Segment Test, by Edward Rosten
4. Connected Components Analysis
5. Perpendicular Distance of a Point from a Line
6. Solutions to the Equation of a Line



If you like my work - Please share !!!

Sunday, August 3, 2014

OpenCV: Connected Component Analysis


Image  - License_plate_Tirana.JPG
Public Domain |  Link

A lot of OpenCV-based programs depend on blob detection to extract regions of interest for post-processing. There are numerous blob identification libraries such as cvblob, cvbloblib and the newer opencvblob.

  The code below is a slightly different algorithm that detects connected components in the image. The key to understanding this algorithm is to know the inheritance, relation and state of the pixels surrounding each other.
   To analyze the image, we first run through one scan line at a time. As soon as a black pixel is encountered, it is treated as the start of a blob. If the next pixel in the scan line is also black, the scanning continues; otherwise the program jumps to the next scan line. Also, when the first black pixel of a scan line is found, a check is made whether the next scan line has a black pixel directly below it. If it does, the blob continues in the next scan line, so a vertical inheritance is flagged and the program carries on along the current scan line. When the program reaches the next scan line, it checks whether each black pixel has vertical inheritance; if yes, it is part of the current blob being scanned.
   A matrix (in the form of an IplImage) called Process_Flag (prcs_flg in the code) is maintained to make sure a black pixel, once encountered, is flagged as analyzed. This way a black pixel that is part of one blob doesn't get added again as part of another blob.

Note that the majority of the "if-else" conditions in the code handle traversing through this inheritance. Also, the pixel count of the blob and the 2 points giving the span of the blob are recorded and updated on the fly while scanning.
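
For contrast with the scan-line approach described above, here is a minimal flood-fill labeling sketch. This is an illustration only, not the code of this post; it assumes a binary 8-bit single-channel image in which blob pixels are 0, as produced by the thresholding below:

// Minimal flood-fill connected-component sketch (illustration only).
// Assumes 'bin' is a binary CV_8UC1 image where blob pixels are 0.
#include <opencv2/opencv.hpp>
#include <stack>

int label_blobs(const cv::Mat& bin, cv::Mat& labels)
{
    labels = cv::Mat::zeros(bin.size(), CV_32S);   // 0 = unlabeled
    int blob_count = 0;

    for(int y = 0; y < bin.rows; ++y)
        for(int x = 0; x < bin.cols; ++x)
        {
            if(bin.at<uchar>(y, x) != 0 || labels.at<int>(y, x) != 0)
                continue;                          // background or already labeled

            ++blob_count;                          // start of a new blob
            std::stack<cv::Point> st;
            st.push(cv::Point(x, y));
            while(!st.empty())                     // flood-fill the whole blob
            {
                cv::Point p = st.top(); st.pop();
                if(p.x < 0 || p.y < 0 || p.x >= bin.cols || p.y >= bin.rows)
                    continue;                      // outside the image
                if(bin.at<uchar>(p.y, p.x) != 0 || labels.at<int>(p.y, p.x) != 0)
                    continue;                      // not part of this blob
                labels.at<int>(p.y, p.x) = blob_count;
                st.push(cv::Point(p.x + 1, p.y));  // 4-connected neighbours
                st.push(cv::Point(p.x - 1, p.y));
                st.push(cv::Point(p.x, p.y + 1));
                st.push(cv::Point(p.x, p.y - 1));
            }
        }
    return blob_count;                             // number of blobs found
}

The scan-line method below achieves the same grouping in a single pass without the explicit stack, which is why it needs the inheritance bookkeeping.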


Try out the program on a still image to study the various detection capabilities of the program. Additional training images are available in the shared link.

Tweaking the code:
The program can be tweaked to reduce or eliminate unwanted blobs that contain only 3 or 4 pixels, or that are 1 pixel wide and 100 pixels long. To remove such blobs, you can filter on the count of pixels in each blob. We also store the start and end points of each blob, which give us its span. By calculating the aspect ratio of the blob's bounding box and the number of pixels in each blob, we can eliminate irrelevant blobs.

The image below depicts the decision making -


Example code snippet that allows all blobs to be seen
rectw = abs(cornerA.x - cornerB.x);
recth = abs(cornerA.y - cornerB.y);
aspect_ratio = (double)rectw / (double)recth;

if(n > 20)
{
 if(aspect_ratio > 0) 
 {

Now change this to -
int min_blob_sze = 400;               // Minimum Blob size limit 
int max_blob_sze = 150000;            // Maximum Blob size limit

rectw = abs(cornerA.x - cornerB.x);
recth = abs(cornerA.y - cornerB.y);
aspect_ratio = (double)rectw / (double)recth;
if((n > min_blob_sze) && (n < max_blob_sze))  // Reduces chances of decoding erroneous 'Blobs' as markers
{
 if((aspect_ratio > 0.33) && (aspect_ratio < 3.0)) // Increases chances of identified 'Blobs' to be close to Square 
 {

Notice the difference between the two images -


Usage:
cmake .
make

# For detecting blobs from camera frames
./video
# For detecting blobs in a still image
./still <image.jpg>

Files:
Download from DsynFLO box folder -
Source  - https://app.box.com/s/r42ua57wco3z3h00j4wt
Training Images  - https://app.box.com/s/d1zj7l5d9qja8kvod55x

Compatibility  > OpenCV 1.0

Source Code :
//______________________________________________________________________________________
// Program : OpenCV connected component analysis
// Author  : Bharath Prabhuswamy
//______________________________________________________________________________________
#include <cv.h>
#include <highgui.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

void cv_adjustBox(int x, int y, CvPoint& A, CvPoint& B);  // Routine to update Bounding Box corners

// Start of Main Loop
//------------------------------------------------------------------------------------------------------------------------
int main ( int argc, char **argv )
{
 CvCapture* capture = 0;
 IplImage* img = 0;

 capture = cvCaptureFromCAM( 0 );
  if ( !capture )                // Check for Camera capture
  return -1;

 cvNamedWindow("Camera",CV_WINDOW_AUTOSIZE);

 //cvNamedWindow("Threshold",CV_WINDOW_AUTOSIZE);

 // cvNamedWindow("Test",CV_WINDOW_AUTOSIZE); // Test window to push any visuals during debugging

 IplImage* gray = 0;
 IplImage* thres = 0;
 IplImage* prcs_flg = 0;     // Process flag to flag whether the current pixel is already processed as part of blob detection


 int q,i;        // Intermediate variables
 int h,w;        // Variables to store Image Height and Width

 int ihist[256];                      // Array to store Histogram values
 float hist_val[256];     // Array to store Normalised Histogram values

 int blob_count;
 int n;                                 // Number of pixels in a blob
 int pos ;        // Position or pixel value of the image

 int rectw,recth;                     // Width and Height of the Bounding Box
 double aspect_ratio;      // Aspect Ratio of the Bounding Box

 int min_blob_sze = 400;               // Minimum Blob size limit 
 int max_blob_sze = 150000;            // Maximum Blob size limit


 bool init = false;      // Flag to identify initialization of Image objects


 //Step : Capture a frame from Camera for creating and initializing manipulation variables
 //Info : Inbuilt functions from OpenCV
 //Note : 

     if(init == false)
 {
         img = cvQueryFrame( capture ); // Query for the frame
          if( !img )  // Exit if camera frame is not obtained
   return -1;

  // Creation of Intermediate 'Image' Objects required later
  gray = cvCreateImage( cvGetSize(img), 8, 1 );  // To hold Grayscale Image
  thres = cvCreateImage( cvGetSize(img), 8, 1 );  // To hold OTSU thresholded Image
  prcs_flg = cvCreateImage( cvGetSize(img), 8, 1 ); // To hold Map of 'per Pixel' Flag to keep track while identifying Blobs
  
  init = true;
 }

 int clr_flg[img->width];  // Array representing elements of entire current row to assign Blob number
 int clrprev_flg[img->width]; // Array representing elements of entire previous row to assign Blob number

 h = img->height;  // Height and width of the Image
 w = img->width;

 int key = 0;
 while(key != 'q')  // While loop to query for Camera frame
 {
    
  //Step : Capture Image from Camera
  //Info : Inbuilt function from OpenCV
  //Note : 

  img = cvQueryFrame( capture );  // Query for the frame

  //Step : Convert Image captured from Camera to GrayScale
  //Info : Inbuit function from OpenCV
  //Note : Image from Camera and Grayscale are held using separate "IplImage" objects

  cvCvtColor(img,gray,CV_BGR2GRAY); // Convert BGR camera frame to Gray (OpenCV captures frames in BGR order)


  //Step : Threshold the image using optimum Threshold value obtained from OTSU method
  //Info : 
  //Note : 

  memset(ihist, 0, sizeof(ihist)); // Zero all 256 bins (256 * sizeof(int) bytes, not just 256 bytes)

  for(int j = 0; j < gray->height; ++j) // Use Histogram values from Gray image
  {
   uchar* hist = (uchar*) (gray->imageData + j * gray->widthStep);
   for(int i = 0; i < gray->width; i++ )
   {
    pos = hist[i];  // Check the pixel value
    ihist[pos] += 1; // Use the pixel value as the position/"Weight"
   }
  }

  //Parameters required to calculate threshold using OTSU Method
  float prbn = 0.0;                   // First order cumulative
  float meanitr = 0.0;                // Second order cumulative
  float meanglb = 0.0;                // Global mean level
  int OPT_THRESH_VAL = 0;             // Optimum threshold value
  float param1,param2;                // Parameters required to work out OTSU threshold algorithm
  double param3 = 0.0;

  //Normalise histogram values and calculate global mean level
  for(int i = 0; i < 256; ++i)
  {
   hist_val[i] = ihist[i] / (float)(w * h);
   meanglb += ((float)i * hist_val[i]);
  }

      // Implementation of OTSU algorithm
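  // For each candidate threshold i, param2 below is OTSU's between-class variance:
  //   sigma_B^2(i) = (meanglb * prbn - meanitr)^2 / ( prbn * (1 - prbn) )
  // where prbn is the cumulative probability and meanitr the cumulative mean up to i.
  // The threshold that maximizes this variance best separates the two classes.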
  for (int i = 0; i < 255; i++)
  {
   prbn += (float)hist_val[i];
   meanitr += ((float)i * hist_val[i]);

   param1 = (float)((meanglb * prbn) - meanitr);
   param2 = (float)(param1 * param1) /(float) ( prbn * (1.0f - prbn) );

   if (param2 > param3)
   {
       param3 = param2;
       OPT_THRESH_VAL = i;     // Update the "Weight/Value" as Optimum Threshold value
   }
  }

  cvThreshold(gray,thres,OPT_THRESH_VAL,255,CV_THRESH_BINARY); //Threshold the Image using the value obtained from OTSU method


  //Step : Identify Blobs in the OTSU Thresholded Image
  //Info : Custom Algorithm to Identify blobs
  //Note : This is a complicated method. Better to refer to the presentation, documentation or the demo

  blob_count = 0;    // Current Blob number used to represent the Blob
  CvPoint cornerA,cornerB;  // Two Corners to represent Bounding Box

  memset(clr_flg, 0, w * sizeof(int)); // Reset all the array elements (w ints, not w bytes) ; Flag for tracking progress
  memset(clrprev_flg, 0, w * sizeof(int));

  cvZero(prcs_flg);   // Reset all Process flags


        for( int y = 0; y < thres->height; ++y) //Start full scan of the image by incrementing y
        {
            uchar* prsnt = (uchar*) (thres->imageData + y * thres->widthStep);
            uchar* pntr_flg = (uchar*) (prcs_flg->imageData + y * prcs_flg->widthStep);  // pointer to access the present value of pixel in Process flag
   uchar* scn_prsnt;      // pointer to access the present value of pixel related to a particular blob
   uchar* scn_next;       // pointer to access the next value of pixel related to a particular blob

            for(int x = 0; x < thres->width; ++x ) //Start full scan of the image by incrementing x
            {
                int c = 0;     // Number of edgels in a particular blob
               
                if((prsnt[x] == 0) && (pntr_flg [x] == 0)) // If current pixel is black and has not been scanned before - continue
                {
   blob_count +=1;                          // Increment at the start of processing new blob
   clr_flg [x] = blob_count;                // Update blob number
   pntr_flg [x] = 255;                      // Mark the process flag

   n = 1;                                   // Update pixel count of this particular blob / this iteration

   cornerA.x = x;                           // Update Bounding Box Location for this particular blob / this iteration
   cornerA.y = y;
   cornerB.x = x;
   cornerB.y = y;

   int lx,ly;    // Temp location to store the initial position of the blob
   int belowx = 0;

   bool checkbelow = true;   // Scan the below row to check the continuity of the blob

                    ly=y;

                    bool below_init = 1;     // Flags to facilitate the scanning of the entire blob once
                    bool start = 1;

                        while(ly < h)      // Start the scanning of the blob
                        {
                            if(checkbelow == true)   // If there is continuity of the blob in the next row & checkbelow is set; continue to scan next row
                            {
                                if(below_init == 1)   // Make a copy of Scanner pixel position once / initially
                                {
                                    belowx=x;
                                    below_init = 0;
                                }

                                checkbelow = false;  // Clear the flag before scanning the next row

                                scn_prsnt = (uchar*) (thres->imageData + ly * thres->widthStep);
                                scn_next = (uchar*) (thres->imageData + (ly+1) * thres->widthStep);

                                pntr_flg = (uchar*) (prcs_flg->imageData + ly * prcs_flg->widthStep);

                                bool onceb = 1;   // Flag to set and check blob continuity for the next row

                                //Loop to move Scanner pixel to the extreme left pixel of the blob
                                while((scn_prsnt[belowx-1] == 0) && ((belowx-1) > 0) && (pntr_flg[belowx-1]== 0))
                                {
                                    cv_adjustBox(belowx,ly,cornerA,cornerB);    // Update Bounding Box corners
                                    pntr_flg [belowx] = 255;

                                    clr_flg [belowx] = blob_count;

                                    n = n+1;
                                    belowx--;
                                }
                                //Scanning of a particular row of the blob
                                for(lx = belowx; lx < thres->width; ++lx )
                                {
                                    if(start == 1)                  // Initial/first row scan
                                    {
                                        cv_adjustBox(lx,ly,cornerA,cornerB);
                                        pntr_flg [lx] = 255;

                                        clr_flg [lx] = blob_count;


                                        start = 0;
                                        if((onceb == 1) && (scn_next[lx] == 0))                 //Check for the continuity
                                        {
                                            belowx = lx;
                                            checkbelow = true;
                                            onceb = 0;
                                        }
                                    }
                                    else if((scn_prsnt[lx] == 0) && (pntr_flg[lx] == 0))               //Present pixel is black and has not been processed
                                    {
                                        if((clr_flg[lx-1] == blob_count) || (clr_flg[lx+1] == blob_count)) //Check for the continuity with previous scanned data
                                        {
                                            cv_adjustBox(lx,ly,cornerA,cornerB);

                                            pntr_flg [lx] = 255;

                                            clr_flg [lx] = blob_count;

                                            n = n+1;

                                            if((onceb == 1) && (scn_next[lx] == 0))
                                            {
                                                belowx = lx;
                                                checkbelow = true;
                                                onceb = 0;
                                            }
                                        }
                                        else if((scn_prsnt[lx] == 0) && (clr_flg[lx-2] == blob_count))  // Check for the continuity with previous scanned data
                                        {
                                            cv_adjustBox(lx,ly,cornerA,cornerB);

                                            pntr_flg [lx] = 255;

                                            clr_flg [lx] = blob_count;

                                            n = n+1;

                                            if((onceb == 1) && (scn_next[lx] == 0))
                                            {
                                                belowx = lx;
                                                checkbelow = true;
                                                onceb = 0;
                                            }
                                        }
                                        // Check for the continuity with previous scanned data
                                        else if((scn_prsnt[lx] == 0) && ((clrprev_flg[lx-1] == blob_count) || (clrprev_flg[lx] == blob_count) || (clrprev_flg[lx+1] == blob_count)))
                                        {
                                            cv_adjustBox(lx,ly,cornerA,cornerB);

                                            pntr_flg [lx] = 255;

                                            clr_flg [lx] = blob_count;

                                            n = n+1;

                                            if((onceb == 1) && (scn_next[lx] == 0))
                                            {
                                                belowx = lx;
                                                checkbelow = true;
                                                onceb = 0;
                                            }

                                        }
                                        else
                                        {
                                            continue;
                                        }

                                    }
                                    else
                                    {
                                        clr_flg[lx] = 0; // Current pixel is not a part of any blob
                                    }
                                } // End of scanning of a particular row of the blob
                            }
                            else // If there is no continuity of the blob in the next row break from blob scan loop
                            {
                                break;
                            }

                            for(int q = 0; q < thres->width; ++q) // Blob numbers of current row becomes Blob number of previous row for the next iteration of "row scan" for this particular blob
                            {
                                clrprev_flg[q]= clr_flg[q];
                            }
                            ly++;
                        }
                        // End of the Blob scanning routine 


   // At this point after scanning image data, A blob (or 'connected component') is obtained. We use this Blob for further analysis to confirm it is a Marker.

   
   // Get the rectangular extent of the blob. This is used to estimate the span of the blob
   // If it is too small, say only a few pixels, it is unlikely to be a genuine Marker. Thus reducing erroneous decoding
   rectw = abs(cornerA.x - cornerB.x);
   recth = abs(cornerA.y - cornerB.y);
   aspect_ratio = (double)rectw / (double)recth;

                        if((n > min_blob_sze) && (n < max_blob_sze))  // Reduces chances of decoding erroneous 'Blobs' as markers
                        {
                            if((aspect_ratio > 0.33) && (aspect_ratio < 3.0)) // Increases chances of identified 'Blobs' to be close to Square 
                            {
                                // Good Blob; Mark it
        cvRectangle(img,cornerA,cornerB,CV_RGB(255,0,0),1);
                            } 
                            else // Discard the blob data
                            {                      
                                blob_count = blob_count -1; 
                            }
                        }
                        else    // Discard the blob data               
                        {
                            blob_count = blob_count -1;  
                        }

                }
                else     // If current pixel is not black do nothing
                {
                    continue;
                }
  } // End full scan of the image by incrementing x
        } // End full scan of the image by incrementing y
 

  cvShowImage("Camera",img);
  key = cvWaitKey(1); // OPENCV: wait for 1ms before accessing next frame

 } // End of 'while' loop

 cvDestroyWindow( "Camera" ); // Release various parameters

 cvReleaseImage(&img);
 cvReleaseImage(&gray);
 cvReleaseImage(&thres);
 cvReleaseImage(&prcs_flg);

     return 0;
}
// End of Main Loop
//------------------------------------------------------------------------------------------------------------------------


// Routines used in Main loops

// Routine to update Bounding Box corners with farthest corners in that Box
void cv_adjustBox(int x, int y, CvPoint& A, CvPoint& B)
{
    if(x < A.x)
        A.x = x;

    if(y < A.y)
        A.y = y;

    if(x > B.x)
        B.x = x;

    if(y > B.y)
        B.y = y;
}

// EOF