Test Custom Object Detector

Previous Article – https://wp.me/p6xoZs-3O

Once the loss drops to around 1 or below, you can stop the training by pressing Ctrl+C.

Then, change to the object_detection folder and run the command below to export the trained model,

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training/model.ckpt-10587 --output_directory mac_n_cheese_inference_graph

Please note that for --trained_checkpoint_prefix you have to check the training folder for the latest checkpoint step number; each checkpoint is stored as three files (.data, .index and .meta).
The output directory given by --output_directory will be created automatically if it does not exist.
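
If you are not sure which checkpoint step number to use, you can ask TensorFlow for the latest checkpoint prefix; a minimal sketch, assuming TensorFlow 1.x and the training/ folder used in this tutorial,

import tensorflow as tf

# Returns the prefix of the most recent checkpoint in the folder,
# e.g. 'training/model.ckpt-10587', or None if none has been written yet.
latest = tf.train.latest_checkpoint('training')
print(latest)

You can pass the printed value directly to --trained_checkpoint_prefix.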

If you get an import error, set the PYTHONPATH as below,

set PYTHONPATH=C:\D\SentDex\models-master\models-master\research;C:\D\SentDex\models-master\models-master\research\slim

Then, try our new model using the below Jupyter script as our first exercise,

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')

# This is needed to display the images.
%matplotlib inline

from utils import label_map_util

from utils import visualization_utils as vis_util

# What model to download.
MODEL_NAME = '../../../../mac_n_cheese_inference_graph'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = '../../../../model/object_detection_map.pbtxt'

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

# We will use five test images: image1.jpg to image5.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 6) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[1], image.shape[2])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: image})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.int64)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict

for image_path in TEST_IMAGE_PATHS:
  image = Image.open(image_path)
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = load_image_into_numpy_array(image)
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)
  # Actual detection.
  output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
  # plt.figure(figsize=IMAGE_SIZE)
  plt.imshow(image_np)
  plt.show()
  # Save the annotated image under the output_images folder.
  im = Image.fromarray(image_np)
  im.save("output_images/"+os.path.basename(image_path))

Training Custom Object Detector

Previous Article – https://wp.me/p6xoZs-3K

We are going to train an existing model, so we have to download the model and its configuration first.

Go to the TensorFlow model zoo and download the model you want. Then, in the downloaded TensorFlow source folder, go to research/object_detection/samples/configs; in this folder you can find the configuration files that correspond to each model.

I use the ssd_mobilenet_v1_coco_11_06_2017 model and the ssd_mobilenet_v1_pets.config config file. You also need a .pbtxt label map file, as below,

item {
  id: 1
  name: 'macncheese'
}

You can add more items according to the number of labels you used for the training, as in the sketch below.
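
For example, a hypothetical two-label map could look like this (the second label name is just a placeholder; the ids must start at 1 and match class_text_to_int in generate_tfrecord.py),

item {
  id: 1
  name: 'macncheese'
}
item {
  id: 2
  name: 'second_label'
}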

Then, you have to change the paths in the following section in the model configuration file:

num_classes: 1 # this is in the model -> ssd section; set it to the number of labels you trained with

train_config: {
  batch_size: 10
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "model/ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "Data_Set/train.record"
  }
  label_map_path: "model/object_detection_map.pbtxt"
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1100
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "Data_Set/test.record"
  }
  label_map_path: "model/object_detection_map.pbtxt"
  shuffle: false
  num_readers: 1
}

Be mindful of the paths of the specific files relative to the location from which you are going to run train.py, which is what we are going to do next.

Now go to research/object_detection/legacy and grab the train.py script to start the training. Run the script as below,

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

If you get any error, run the below command first,

set PYTHONPATH=<location>\research;<location>\research\slim
Eg:
set PYTHONPATH=C:\python\models\research;C:\python\models\research\slim

Once training has started, you can check its status from TensorBoard like below,

tensorboard --logdir='training'
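
Then open http://localhost:6006 in your browser (TensorBoard's default port) to watch the loss and the other metrics.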

Next Article – https://wp.me/p6xoZs-3T

Creating TFRecords

Previous Article – https://wp.me/p6xoZs-3G

TFRecord is a special data format used by the TensorFlow framework to read image data. To create TFRecords there are two steps, as below,

Step 01 – Convert XML to CSV

In the source code you downloaded, go to research/object_detection; there you can find a Python script named xml_to_csv.py.

Change the script as below,
Replace

def main():
    image_path = os.path.join(os.getcwd(), 'annotations')
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('raccoon_labels.csv', index=None)
    print('Successfully converted xml to csv.')

With

def main():
    for directory in ['train','test']:
        image_path = os.path.join(os.getcwd(), 'images/{}'.format(directory))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv('data/{}_labels.csv'.format(directory), index=None)
        print('Successfully converted xml to csv.')

Be mindful of the folder paths and your running directory; also make sure the data folder exists, since the modified script writes the CSV files into it.

Then run it as below,

python xml_to_csv.py

Step 02 – Convert CSV to TFRecord

Now from the same location grab the generate_tfrecord.py and modify as below,

# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'macncheese':
        return 1
    else:
        return None

You can expand the if condition according to the number of labels you used for the training, as in the sketch below.
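
For instance, a sketch of the same function expanded for two labels (the second label is a placeholder and its id must match your .pbtxt label map),

def class_text_to_int(row_label):
    if row_label == 'macncheese':
        return 1
    elif row_label == 'second_label':  # hypothetical second label
        return 2
    else:
        return None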

Before running the above script, go to the research folder and run the below command,

python setup.py install

then run as below,

python generate_tfrecord.py --csv_input=../Data_Set/train_labels.csv  --output_path=../Data_Set/train.record --image_dir=../Data_Set/train
python generate_tfrecord.py --csv_input=../Data_Set/test_labels.csv  --output_path=../Data_Set/test.record --image_dir=../Data_Set/test

Next Article – https://wp.me/p6xoZs-3O

Collect Data And Annotate Them

Previous Article – https://wp.me/p6xoZs-3C

Collect Data

You have to collect an image data set to train your model. Be careful to collect a variety of images rather than many images of the same type, to increase the quality of the training. From the total number of images you have to set aside 10% to 20% for validation purposes, to check the accuracy of the trained model; a split sketch follows.
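
A minimal sketch of such a split, assuming all labelled images and their XML annotation files sit together in an images folder (the folder names and the 15% ratio are my own choices, not fixed requirements),

import os
import random
import shutil

SRC = 'images'  # assumption: images and their matching .xml files live here
random.seed(1)  # fixed seed so the split is reproducible

images = [f for f in os.listdir(SRC) if f.lower().endswith('.jpg')]
random.shuffle(images)
n_test = int(len(images) * 0.15)  # keep roughly 15% for validation

for subset, names in (('test', images[:n_test]), ('train', images[n_test:])):
    dst = os.path.join(SRC, subset)
    os.makedirs(dst, exist_ok=True)
    for name in names:
        shutil.move(os.path.join(SRC, name), os.path.join(dst, name))
        # move the matching annotation alongside the image, if it exists
        xml = os.path.splitext(name)[0] + '.xml'
        if os.path.exists(os.path.join(SRC, xml)):
            shutil.move(os.path.join(SRC, xml), os.path.join(dst, xml))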

Label / Annotate Images

You can use the LabelImg tool to annotate the images; you can get a better idea from the below link,

https://github.com/tzutalin/labelImg

Next Article – https://wp.me/p6xoZs-3K

Train a Custom Object with the Tensorflow Object Detection Model

Previous Article – https://wp.me/p6xoZs-3y

To do this there are a few steps to follow; they are,

  1. Collect a few hundred images that contain your object – The bare minimum would be about 100, ideally more like 500+, but, the more images you have, the more tedious step 2 is…
  2. Annotate/label the images, ideally with a program. I personally used LabelImg. This process is basically drawing boxes around your object(s) in an image. The label program automatically will create an XML file that describes the object(s) in the pictures.
  3. Split this data into train/test samples
  4. Generate TF Records from these splits
  5. Setup a .config file for the model of choice (you could train your own from scratch, but we’ll be using transfer learning)
  6. Train
  7. Export graph from new trained model
  8. Detect custom objects in real time!

Next Article – https://wp.me/p6xoZs-3G

Tensorflow Object Detection From Web Cam

Previous Article – https://wp.me/p6xoZs-3t

We use the same code for this exercise; let's make the necessary changes as below,

Add

import cv2
cap = cv2.VideoCapture(0)

Replace

for image_path in TEST_IMAGE_PATHS:
      image = Image.open(image_path)
      # the array based representation of the image will be used later in order to prepare the
      # result image with boxes and labels on it.
      image_np = load_image_into_numpy_array(image)

With

while True:
      ret, image_np = cap.read()

Replace

plt.figure(figsize=IMAGE_SIZE)
      plt.imshow(image_np)
      plt.show()

With

cv2.imshow('object detection', cv2.resize(image_np, (800,600)))
      if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

Full Code,

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

import cv2
cap = cv2.VideoCapture(0)

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')

# This is needed to display the images.
%matplotlib inline

from utils import label_map_util

from utils import visualization_utils as vis_util

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
  file_name = os.path.basename(file.name)
  if 'frozen_inference_graph.pb' in file_name:
    tar_file.extract(file, os.getcwd())

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 4) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[1], image.shape[2])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: image})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.int64)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict

while True:
  ret, image_np = cap.read()
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)
  # Actual detection.
  output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
  cv2.imshow('object detection', cv2.resize(image_np, (800,600)))
  if cv2.waitKey(25) & 0xFF == ord('q'):
    cv2.destroyAllWindows()
    break
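
One small addition that is not in the original script: once the loop breaks, release the webcam handle so other applications can use the camera,

cap.release()            # free the webcam device
cv2.destroyAllWindows()  # close any remaining OpenCV windows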

Next Article – https://wp.me/p6xoZs-3C

First Tensorflow Object Detection

Previous Article – https://wp.me/p6xoZs-3p

Open object_detection_tutorial.ipynb in the research/object_detection directory of the TensorFlow source using the below command.

Go to the research/object_detection directory and type

jupyter notebook

Select object_detection_tutorial.ipynb and run all cells; you can see that the two images in test_images have been processed to identify the objects.

However, I have made a few changes, as below, to save the images under a specific folder at the end.

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')

# This is needed to display the images.
%matplotlib inline

from utils import label_map_util

from utils import visualization_utils as vis_util

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
  file_name = os.path.basename(file.name)
  if 'frozen_inference_graph.pb' in file_name:
    tar_file.extract(file, os.getcwd())

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[1], image.shape[2])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: image})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.int64)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict

for image_path in TEST_IMAGE_PATHS:
  image = Image.open(image_path)
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = load_image_into_numpy_array(image)
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)
  # Actual detection.
  output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
  # plt.figure(figsize=IMAGE_SIZE)
  plt.imshow(image_np)
  plt.show()
  # Save the annotated image under the output_images folder.
  im = Image.fromarray(image_np)
  im.save("output_images/"+os.path.basename(image_path))


Next Article – https://wp.me/p6xoZs-3y

Installing Libraries & Downloading Relevant Resources for Tensorflow Object Detection

Previous Article – About Anaconda – https://wp.me/p6xoZs-3n

In the Anaconda environment you can easily install the relevant resources without any hassle.

Upgrade Anaconda Environment

Just select the created environment under the Environments tab in Anaconda Navigator, click the play button to open the terminal, and enter the below commands,

Here we install TensorFlow CPU, but if you have a GPU in your PC, have a look at installing TensorFlow GPU using this link – https://www.youtube.com/watch?v=r7-WPbx8VuY

pip install tensorflow
pip install pillow
pip install lxml
pip install jupyter
pip install matplotlib
<code class="bash plain">pip </code><code class="bash functions">install</code> <code class="bash plain">opencv-contrib-python</code> 

Download Tensorflow Source

Download Tensorflow Source From Github – https://github.com/tensorflow/models

Once downloaded, extract it.

Download Protoc

Download Protoc Latest Version – https://github.com/protocolbuffers/protobuf/releases

Once you have extracted the downloaded protoc, use the location of its bin folder in the below command.

Go to the research folder of the TensorFlow source and type the below command,

<protoc setup bin location> object_detection/protos/*.proto --python_out=.
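
For example, assuming protoc was extracted to C:\protoc (a hypothetical location), the command would be,

C:\protoc\bin\protoc object_detection/protos/*.proto --python_out=.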

Next Article – https://wp.me/p6xoZs-3t

Working with Anaconda in Windows 10


The open-source Anaconda Distribution is the easiest way to perform Python/R data science and machine learning on Linux, Windows, and Mac OS X. With over 11 million users worldwide, it is the industry standard for developing, testing, and training on a single machine, enabling individual data scientists to:

  • Quickly download 1,500+ Python/R data science packages
  • Manage libraries, dependencies, and environments with Conda
  • Develop and train machine learning and deep learning models with scikit-learn, TensorFlow, and Theano
  • Analyze data with scalability and performance with Dask, NumPy, pandas, and Numba
  • Visualize results with Matplotlib, Bokeh, Datashader, and Holoviews

For more information visit official website – https://www.anaconda.com/distribution/

Install Anaconda with a Python version above 3 and you are good to go.

After installing Anaconda, open Anaconda Navigator and create a fresh environment under the Environments tab for your new TensorFlow machine learning and object detection exercises.
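
If you prefer the command line over the Navigator GUI, an equivalent sketch (the environment name and Python version are placeholders),

conda create -n objdetect python=3.6
conda activate objdetect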

Next Article – https://wp.me/p6xoZs-3p

Spring Single Session Login

When you configure your application for single session login, that means a user can log in to the application from only one session at a time. Normally even Google allows logging in to the system from different browsers or devices; but if you want to limit this to a single browser/device/session, this is the way.

At first I was looking for this solution in different sources, but couldn't find it. Finally I got this solution from a friend of mine and it works perfectly. This is how I did it.

Please follow below steps,

Step 01 – Update Security Configuration

You have to set below configuration to the spring security configuration.


.sessionManagement()
.invalidSessionUrl("/login")
.maximumSessions(1)
.maxSessionsPreventsLogin(true)
.expiredUrl("/login")

Note: In the above configuration we set the number of sessions to one and prevent further logins once the maximum number of sessions is reached.

Then you have to add below bean as well.


@Bean
public HttpSessionEventPublisher httpSessionEventPublisher() {
    return new HttpSessionEventPublisher();
}

Note: You can write a custom HttpSessionEventPublisher if you want; here I used the existing one that Spring provides by default.

Step 02 – Update UserDetails Implemented Domain

Now you have to find the domain class that implements UserDetails. (When an application is configured with Spring Security, we have to create an object that implements the UserDetails interface.)

Add the below methods at the end of the domain class.


@Override
public boolean equals(Object otherUser) {
    if (otherUser == null)
        return false;
    else if (!(otherUser instanceof UserDetails))
        return false;
    else
        return (otherUser.hashCode() == hashCode());
}

@Override
public int hashCode() {
    StringBuffer sb = new StringBuffer();
    sb.append(this.emailAddress != null ? this.emailAddress : "");
    sb.append(this.userName != null ? this.userName : "");

    String hashCode = sb.toString();
    return hashCode.hashCode();
}