Wednesday, 2 June 2021

I see you - Lincoln exhibition


Ok.... How should I start this?

I volunteered at the Frequency Festival taking place in Lincoln, England.


They put me in charge of an exhibit called "I SEE YOU". It sounds exciting, or at least that's what I hope it will be.
 
So I finally went to the place, and it looks like this:

They just hung four cubes... there is nothing else. You just put your head into one of the boxes, where you can see mirror images of yourself on all five sides.

That's it.... Nothing else.



But when I think about it, I guess this gives us a way to look at ourselves in multiple different ways? One face for one scenario?

Piezo electric power generation in tires

Long ago I wrote a small paper on "Piezoelectric power generation in tires".

It seems I can't find any working link for the paper, so I'm adding it here in case anyone is looking for it.

 

Piezo Electric Power Generation in Tires, Kunchala Anil, N Sreekanth


Please cite it with the following:


@article{anil2014piezoelectric,
  title={Piezoelectric power generation in tires},
  author={Anil, Kunchala and Sreekanth, N},
  journal={International Journal of Electrical, Electronics and Computer Systems (IJEECS)},
  volume={2},
  number={2},
  pages={11--16},
  year={2014}
}

Saturday, 18 January 2020

UK Tier 2 Visa Process

Hi,

Recently I applied for a UK Tier 2 (General) visa from Chennai, India. In the following post I will try to mention the things I came to know and the procedure I followed to get the required documentation.

First of all - go through the gov.uk website for instructions and references. It is pretty well documented.

From the official website, the documents you must provide (and the steps that follow) are:

  • Certificate of Sponsorship
  • English requirements
  • Maintenance funds for the first month
  • Passport
  • Tuberculosis test results
  • Criminal record certificate
  • Applying
  • VFS Global biometrics
 Certificate of Sponsorship

To apply for a Tier 2 visa, you need a valid job offer from one of the licensed sponsors, and you need to be paid an appropriate salary (min 30,000 GBP at the date of this post) to be eligible for a Certificate of Sponsorship.

Once you get the offer, your sponsor will apply for a COS (Certificate of Sponsorship) for you. It is usually applied for before the 5th of every month, and the allocation date is the 11th. If your sponsor applies after the 5th, you have to wait another month.

A COS consists of your annual salary, the start and end dates of your employment, and information on the Resident Labour Market Test (RLMT).
I heard allocation of a COS mainly depends on the RLMT and the appropriate salary (but I'm not sure). A COS is valid for 3 months only, and it can be used for a single visa application only.

 English requirements
For the English requirement, you either need IELTS for UKVI or a certificate from NARIC stating your degree is equivalent to a UK Bachelor's/Master's/PhD degree.

If you are like me and took IELTS Academic/General, please note that it is not valid proof for your visa application; you have to take IELTS for UKVI, for which they charge extra for the same test with a few more cameras and invigilators. Also check the approved English language tests.

Once I learned the painful fact that IELTS Academic is not valid, I started exploring the other option - NARIC certification.

Go to NARIC visa and immigration, create an account, and start an application. You need to upload the following:

  • OD / Original Degree Certificate
  • Marks list / Academic Transcripts
  • MOI / Medium of Instruction Certificate
Medium of Instruction certificate: if you studied at an affiliated college instead of a university, you need to get the MOI from the university, not from the college.

NARIC has shitty deadlines. On the website they mention the turnaround time is 10 working days, but they usually take another 3-4 days to verify the documents, so apply at least 20 days in advance to get your certificate on time.

Maintenance funds for the first month
Usually the COS certifies maintenance for you. Please check with your sponsor.

Tuberculosis Test
You need to get a TB test from one of the approved test clinics.
I went to Apollo Chennai. Usually every test clinic asks you to book an appointment before visiting, and be prepared to spend a day there. At Apollo Chennai they give out certificates at 4.30 PM only, so you have to wait until then even if your appointment is at 10.00 AM.

Criminal record certificate
Not all jobs require a criminal record certificate, but check with your sponsor whether you need one and refer to this.

In India, you can get this certificate by visiting a police station or from a Passport Seva Kendra.

I applied via a Passport Seva Kendra.

You just need to fill in the application, book an appointment, and visit the Passport Seva Kendra. For me it took just 2 hours to get the process done, and they gave me the certificate at the end.

Applying

Many websites and tutorials refer to the visa4uk website to fill in the application, but the gov.uk site refers to the visa-immigration link. I applied via the link from the gov.uk site and paid the hefty fee (almost 1 lakh for a visa of 8 months: health surcharge 400 GBP + visa fee 560 GBP). For some reason I wasn't able to pay with a debit card.

VFS Global biometrics
Once the application is completed, you can book an appointment at VFS Global to submit your biometrics. I was able to book an appointment for the very next day.

Thursday, 7 February 2019

Training a custom object detector using TensorFlow object detection API

After I failed to train an object detector on custom data using the NVIDIA DIGITS platform with DetectNet, I tried my luck with the TensorFlow object detection API. I think I successfully trained a MobileNet model with it.

In this post I will try to explain what I did and the errors I faced while doing so.

Data Collection: instead of taking photos one by one, I wrote a script to grab frames from a video.

It takes a video file path and the number of frames to be generated. The frames grabbed from the video look like this:
And yes, I took brinjal (eggplant) images for testing.
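The script itself is simple; below is a minimal sketch of the idea, assuming OpenCV (cv2) is installed. The output folder and the image{}.jpg naming are illustrative choices, not necessarily what my actual script does.

# grab_frames.py - minimal sketch: grab N evenly spaced frames from a video
import cv2
import os
import sys

def grab_frames(video_path, n_frames, out_dir="frames"):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // n_frames, 1)  # spread the grabs evenly over the video
    saved = 0
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # jump to frame i
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "image{}.jpg".format(saved + 1)), frame)
        saved += 1
        if saved >= n_frames:
            break
    cap.release()

if __name__ == "__main__":
    # usage: python3 grab_frames.py myVideo.mp4 100
    grab_frames(sys.argv[1], int(sys.argv[2]))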

labelImg is used to label the images. I installed it from the zip instead of pip (labelImg via pip never worked for me).

Installation: you can follow the official docs to install both tensorflow and the models repository.
I found it much easier compared to the DIGITS installation.

Once you have downloaded the models, open jupyter and run the objectDetection example. It will take a bit of time to run, since it has to download the MobileNet model trained on the COCO dataset.

Note:
for some weird reason I have to enter

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
in the terminal, even though I added the research and slim paths to my .bashrc file.


Custom model training:
  1. Generate CSV files from the XML annotations
  2. Generate .record files from the CSV files
  3. Make the .pbtxt label map file, and don't include any commas ',' (see the label map sketch below)
  4. Download the model and .config file
  5. Edit the .config file to modify the following:
    • number of classes
    • pretrained model path
    • test labels path, with images
    • train labels path, with images
  6. Then copy the train.py file to train your model.
              When I tried to train on my custom data using the model specified in the docs (ssd_inception_v2), I kept getting the following error: WARNING:root:Variable [FeatureExtractor/InceptionV2/Conv2d_1a_7x7/BatchNorm/beta/ExponentialMovingAverage] is not available in checkpoint.

I tried a few things to fix the problem but didn't get it working. Then I tried the model and config file from this tutorial: https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/ with ssd_mobileNet and its config file, and for some unknown reason, IT WORKED.
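Coming back to step 3: the label map is just a plain-text .pbtxt file like the sketch below - note there are no commas anywhere in it. This example assumes a single class named 'brinjal'; use your own class names, with ids starting from 1.

item {
  id: 1
  name: 'brinjal'
}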

7. Evaluating the model
              Copy the eval.py file from the legacy folder and run the following command:

python3 eval.py --logtostderr --pipeline_config_path=/home/ic/Documents/objectExtraction/workspace/training_demo/training/inception_v2.config --checkpoint_dir=/home/ic/Documents/objectExtraction/workspace/training_demo/training --eval_dir=/home/ic/Documents/objectExtraction/workspace/training_demo/eval

You need to specify the number of test images in the .config file:

eval_config: {
  num_examples: 22
}


You can check the eval output with:


tensorboard --logdir=eval/

Check the Images tab; you can find the output there.


If you get the following error:
NameError: name 'unicode' is not defined in object_detection/utils/object_detection_evaluation.py

Try replacing unicode with str in the file object_detection_evaluation.py, as specified in https://github.com/tensorflow/models/issues/5203


8. Exporting the model
           I used the following command to export the inference graph:
python3 export_inference_graph.py --input_type image_tensor --pipeline_config_path training/inception_v2.config --trained_checkpoint_prefix training/model.ckpt-688 --output_directory trained-inference-graphs/output_inference_graph_v1

When I ran it, I got the following warning:

114 ops no flops stats due to incomplete shapes. Parsing Inputs... Incomplete shape.


But as stated here, we can ignore it and use the model.


9. Using the trained model
   To use the trained model, modify the following lines:
Specify the label map file used:
PATH_TO_LABELS = os.path.join('data', 'label_map.pbtxt')
and the exported model:
MODEL_NAME = 'output_inference_graph_v1'

Then add a few more images to the test_images folder and change the for loop range accordingly:
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 6) ]
then run the file.
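Putting those edits together, the relevant part of the example notebook looks roughly like this (the PATH_TO_CKPT name is from the version of the notebook I used - treat this as a sketch, your copy may differ):

MODEL_NAME = 'output_inference_graph_v1'
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'   # the exported graph
PATH_TO_LABELS = os.path.join('data', 'label_map.pbtxt')   # the label map used for training
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 6) ]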

And the results I get are:


And, for some weird reason, this.
And like always, I don't even know why the third image came out like that; maybe I have to train it with more versatile images.

Are you trying the same and stuck? Or do you have any suggestions/solutions for the errors I faced (using InceptionNet as specified in the official docs)? Let me know in the comments.



Sunday, 3 February 2019

NVIDIA DIGITS (failed): Object detection with DetectNet and custom data

Let me start with this:
I know nothing about ML, DL, AI, or those big buzzwords you keep hearing a couple of times a day. I have been barely trying to scratch those huge mountains for the past 2 months and haven't even been able to do that successfully yet. So I just know the names, nothing else, but with a lot of optimism I'm trying to use a few software and programming tools to train a pre-trained network using transfer learning with custom data.

In this blog post I will try to explain how I miserably tried and failed at training an object detection model on custom data with NVIDIA DIGITS using DetectNet.

The total process is divided into three steps:
  1. Installing and setting up DIGITS on the system
  2. Collecting and preparing the data
  3. Training the model
1. Installing and setting up DIGITS on the system
        There is an exhaustive guide on how to set up NVIDIA DIGITS on the host system or using the cloud: https://github.com/dusty-nv/jetson-inference

Follow it line by line; if you are lucky, you can set it up and test it in two days as stated in the documents - but it took me almost an entire week.

Possible Pitfalls:

2. Collecting and preparing the data
     I'm trying to detect the (lemon) leaves in the following image:

So I went to a nearby field and collected around 100 images like it using my phone. I labelled them using labelImg - that's quite laborious work for 100 images, but think about when you need a couple of thousand training images.

labelImg gives us annotations as XML files in PASCAL VOC format, like this:


<object>
        <name>leaf</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>2451</xmin>
            <ymin>142</ymin>
            <xmax>2798</xmax>
            <ymax>986</ymax>
        </bndbox>
 </object>
 <object>
        <name>leaf</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>637</xmin>
            <ymin>1025</ymin>
            ...

But DIGITS needs the data in KITTI format, which is a TXT file with specific fields, so I wrote some Python code to convert labelImg XML files to KITTI format. You can find it in my git repo.
To use my code, copy all your images into a directory and specify it in SRC_DIR; it will do the rest. The code will create a directory named 'labels' and save the generated files there.
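The full script is in the repo, but here is a minimal sketch of the conversion idea, assuming labelImg-style PASCAL VOC XMLs. Only the class name and the 2D box are taken from the XML; the remaining KITTI fields (truncation, occlusion, alpha and the 3D values) are simply zeroed here.

# xml_to_kitti.py - minimal sketch, not the exact script from the repo
import os
import xml.etree.ElementTree as ET

SRC_DIR = "images"  # directory containing the .xml annotation files

def xml_to_kitti(xml_path):
    lines = []
    root = ET.parse(xml_path).getroot()
    for obj in root.findall("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (box.find(t).text for t in ("xmin", "ymin", "xmax", "ymax"))
        # KITTI line: type truncated occluded alpha xmin ymin xmax ymax h w l x y z ry
        lines.append("{} 0.0 0 0.0 {} {} {} {} 0.0 0.0 0.0 0.0 0.0 0.0 0.0".format(name, xmin, ymin, xmax, ymax))
    return lines

os.makedirs(os.path.join(SRC_DIR, "labels"), exist_ok=True)
for f in os.listdir(SRC_DIR):
    if f.endswith(".xml"):
        out_path = os.path.join(SRC_DIR, "labels", f.replace(".xml", ".txt"))
        with open(out_path, "w") as fh:
            fh.write("\n".join(xml_to_kitti(os.path.join(SRC_DIR, f))))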

After that you need to divide the data into train and validate sets as specified in the doc. You can use another of my scripts to do that.
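My actual split script is in the repo as well; a minimal sketch of the idea is below. The images/labels folder layout and the 80/20 ratio are assumptions - follow whatever layout the DIGITS object detection doc asks for.

# split_train_val.py - minimal sketch of a random train/validate split
import os
import random
import shutil

SRC_IMAGES = "images"          # all images
SRC_LABELS = "images/labels"   # the KITTI .txt files generated above
VAL_FRACTION = 0.2             # assumed 80/20 split

files = [f for f in os.listdir(SRC_IMAGES) if f.endswith(".jpg")]
random.shuffle(files)
n_val = int(len(files) * VAL_FRACTION)

for split, names in (("val", files[:n_val]), ("train", files[n_val:])):
    os.makedirs(os.path.join(split, "images"), exist_ok=True)
    os.makedirs(os.path.join(split, "labels"), exist_ok=True)
    for name in names:
        label = name.rsplit(".", 1)[0] + ".txt"
        shutil.copy(os.path.join(SRC_IMAGES, name), os.path.join(split, "images", name))
        shutil.copy(os.path.join(SRC_LABELS, label), os.path.join(split, "labels", label))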


Saturday, 29 September 2018

Custom Flight Controller using Arduino Uno

Hi,

I tried to build my own custom flight controller using an Arduino Uno.
You can find all the source code at my github repo: https://github.com/anilkunchalaece/arduQuad

Following are a few of the test flights I tried:





I also developed my own GUI using Python (PyQt4 and PyQtGraph) for debugging and tuning.


This is one of the many problems I had to tackle to get the above flight performance.


The problem in the above video is that it uses only an Angle PID loop. To get even a basic takeoff we have to use cascaded PID loops, i.e. an Angular Rate PID loop nested inside an Angle PID loop.

The code I used to achieve takeoff is https://github.com/anilkunchalaece/arduQuad/blob/master/ArduinoCode/ksrmQuadLevelModePlusConfig/ksrmQuadLevelModePlusConfig.ino
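The real controller is the Arduino sketch linked above; what follows is just the cascaded-loop idea written as a Python sketch for readability. The outer Angle PID turns the angle error into a desired angular rate, and the inner Rate PID turns the rate error into a motor correction. All gains here are placeholders, not my tuned values.

# cascaded_pid.py - illustration of the Angle -> Angular Rate cascade
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # dt is the loop period in seconds (assumed > 0)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

angle_pid = PID(kp=4.0, ki=0.0, kd=0.0)     # outer loop, placeholder gains
rate_pid = PID(kp=0.7, ki=0.01, kd=0.003)   # inner loop, placeholder gains

def stabilize(target_angle, measured_angle, measured_rate, dt):
    # outer loop: angle error (deg) -> desired angular rate (deg/s)
    desired_rate = angle_pid.update(target_angle - measured_angle, dt)
    # inner loop: rate error (deg/s) -> correction mixed into the motor outputs
    return rate_pid.update(desired_rate - measured_rate, dt)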

Source code for the Python GUI:
https://github.com/anilkunchalaece/arduQuad/blob/master/PythonCode/GUI/graphGUI_v2.py

Following are a few of the references I used:

https://blog.owenson.me/build-your-own-quadcopter-flight-controller/

This is me flying an APM 2.7 with the same build.
I suck at flying; still a lot to learn.
This is how it ended up, crashing into a tree.
And luckily nothing broke :)


Btw, I used a complementary filter to get the angles from the MPU6050. Drop a comment if you are doing the same - I would be very happy to join you in continuing this.
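For anyone curious, the filter itself is essentially one line. Here it is as a Python sketch; ALPHA = 0.98 is a typical textbook value, not necessarily the one I used on my build.

# complementary filter: trust the integrated gyro at short timescales,
# the accelerometer angle at long ones
ALPHA = 0.98  # assumed value; tune for your sensors and loop rate

def complementary_filter(prev_angle, gyro_rate, acc_angle, dt):
    # gyro_rate in deg/s, acc_angle in degrees (e.g. from the atan formulas
    # in the accelerometer post below), dt in seconds
    return ALPHA * (prev_angle + gyro_rate * dt) + (1.0 - ALPHA) * acc_angle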

Wednesday, 30 May 2018

Reading Accelerometer Angles


/*
 * Author : Kunchala Anil
 * Email : anilkunchalaece@gmail.com
 * Date : 22 May 2018
 *
 * This sketch calculates angles from the MPU6050 accelerometer values.
 * The crux of the sketch is:
 *        float pitch = atan(accX/sqrt(pow(accY,2) + pow(accZ,2)));
 *        float roll = atan(accY/sqrt(pow(accX,2) + pow(accZ,2)));
 *
 * Check the following app note: https://cache.freescale.com/files/sensors/doc/app_note/AN3461.pdf
 * (I didn't fully understand the math behind it.)
 */
 
#include <Wire.h>
#include <math.h>

const float ACCR_SENSITIVITY_SCALE_FACTOR = 16384.0; // LSB per g for the +/-2g range
const int MPU6050_ADDR = 0b1101000;

const byte PWR_REG_ADDR = 0x6B;

const byte ACCR_CONFIG_REG_ADDR = 0x1C;
const byte ACCR_READ_START_ADDR = 0x3B;

int16_t accX,accY,accZ;
double angleX,angleY; 
double aX,aY,aZ;

void setup() {
  Wire.begin();
  Serial.begin(115200);
  configureMPU();
}//end of setup

void loop() {
  readAccrX();
  // apply trigonometry to get the pitch and roll
  float pitch = atan(accX/sqrt(pow(accY,2) + pow(accZ,2)));
  float roll = atan(accY/sqrt(pow(accX,2) + pow(accZ,2)));
  // convert radians into degrees
  pitch = pitch * (180.0/PI);
  roll = roll * (180.0/PI);

  Serial.print(pitch);
  Serial.print("   ");
  Serial.println(roll);
  delay(100);
}//end of loop

void readAccrX(){
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(ACCR_READ_START_ADDR);  // starting with register 0x3B (ACCEL_XOUT_H)
  Wire.endTransmission(false);
  Wire.requestFrom(MPU6050_ADDR,6,true);  // read 6 bytes: X, Y and Z, high byte first
  accX = Wire.read()<<8 | Wire.read();  // combine high and low bytes
  accY = Wire.read()<<8 | Wire.read();
  accZ = Wire.read()<<8 | Wire.read();
//  Serial.println(accX);
//  aX = accX / ACCR_SENSITIVITY_SCALE_FACTOR;
//  aY = accY / ACCR_SENSITIVITY_SCALE_FACTOR;
//  aZ = accZ / ACCR_SENSITIVITY_SCALE_FACTOR;
}//end of readAccrX Fcn

void configureMPU(){
  // Power management register: write 0 to clear the sleep bit and wake the MPU6050
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(PWR_REG_ADDR); // access the power management register (0x6B)
  Wire.write(0b00000000);
  Wire.endTransmission();

  // Accelerometer config: 0 selects the +/-2g full-scale range
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(ACCR_CONFIG_REG_ADDR);
  Wire.write(0b00000000);
  Wire.endTransmission();
}//end of configureMPU Fcn