omdenalore.computer_vision package

Submodules

omdenalore.computer_vision.activation_functions module

class omdenalore.computer_vision.activation_functions.AconC(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

ACON activation (activate or not). AconC: (p1*x - p2*x) * sigmoid(beta * (p1*x - p2*x)) + p2*x, where beta is a learnable parameter, per "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.

forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function
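
Example

For reference, a minimal sketch of the AconC formula above in plain PyTorch; the names p1, p2, and beta mirror the formula and are not necessarily the class's actual attributes:

import torch

def acon_c(x, p1, p2, beta):
    # ACON-C: (p1*x - p2*x) * sigmoid(beta * (p1*x - p2*x)) + p2*x
    dpx = (p1 - p2) * x
    return dpx * torch.sigmoid(beta * dpx) + p2 * x

y = acon_c(torch.randn(4), p1=1.0, p2=0.1, beta=1.0)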

class omdenalore.computer_vision.activation_functions.FReLU(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

The FReLU activation function

forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function

class omdenalore.computer_vision.activation_functions.Hardswish(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

The Hardswish activation function

static forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function

class omdenalore.computer_vision.activation_functions.MemoryEfficientMish(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

The MemoryEfficientMish activation function

class F(*args: Any, **kwargs: Any)

Bases: torch.autograd.Function

static backward(ctx, grad_output)

Backward pass of the MemoryEfficientMish activation function

Parameters
  • ctx – autograd context object holding tensors saved during the forward pass

  • grad_output – gradient of the loss with respect to the output

static forward(ctx, x)

Forward pass of the activation function

Parameters
  • ctx – autograd context object used to save tensors for the backward pass

  • x – input tensor or value for the activation function

forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function

class omdenalore.computer_vision.activation_functions.MetaAconC(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

ACON activation (activate or not). MetaAconC: (p1*x - p2*x) * sigmoid(beta * (p1*x - p2*x)) + p2*x, where beta is generated by a small network, per "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.

forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function

class omdenalore.computer_vision.activation_functions.Mish(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

The Mish activation function

static forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function

class omdenalore.computer_vision.activation_functions.SiLU(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

The SiLU activation function

static forward(x)

Forward pass of the activation function

Parameters

x – input tensor or value for the activation function
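
Example

A hedged usage sketch: since forward is static, these activations can be applied without instantiating the class. The comments state the standard definitions of SiLU and Mish, which these implementations are assumed to follow:

>>> import torch
>>> from omdenalore.computer_vision.activation_functions import SiLU, Mish
>>> x = torch.randn(4)
>>> y_silu = SiLU.forward(x)  # standard SiLU: x * sigmoid(x)
>>> y_mish = Mish.forward(x)  # standard Mish: x * tanh(softplus(x))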

omdenalore.computer_vision.augmentations module

class omdenalore.computer_vision.augmentations.Augmenter

Bases: object

Basic augmentations for images

static get_basic_train_transforms(height: int, width: int, means: List[float], stds: List[float])

Apply only basic training transformations such as Resize and Normalize.

Parameters
  • height – int specifying new height

  • width – int specifying new width

  • means – List of means for normalization

  • stds – List of stds for normalization

Returns

Albumentation compose transform object for training dataset

Example

>>> from omdenalore.computer_vision.augmentations import Augmenter
>>> import cv2

>>> transform = Augmenter.get_basic_train_transforms(
        height=256,
        width=256,
        means=[0.485, 0.456, 0.406],
        stds=[0.229, 0.224, 0.225],
    )

# Read an image with OpenCV and convert it to the RGB colorspace
>>> image = cv2.imread("image.jpg")
>>> image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
>>> transformed = transform(image=image)
>>> transformed_image = transformed["image"]

static get_mild_train_transforms(height: int, width: int, means: List[float], stds: List[float])

Apply a few mild training transformations such as Resize, horizontal and vertical flips, Gaussian noise, perspective shift, and Normalize.

Parameters
  • height – int specifying new height

  • width – int specifying new width

  • means – List of means for normalization

  • stds – List of stds for normalization

Returns

Albumentation compose transform object for training dataset

Example

>>> from omdenalore.computer_vision.augmentations import Augmenter
>>> import cv2

>>> transform = Augmenter.get_mild_train_transforms(
        height=256,
        width=256,
        means=[0.485, 0.456, 0.406],
        stds=[0.229, 0.224, 0.225],
    )

# Read an image with OpenCV and convert it to the RGB colorspace
>>> image = cv2.imread("image.jpg")
>>> image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
>>> transformed = transform(image=image)
>>> transformed_image = transformed["image"]

static get_val_transforms(height: int, width: int, means: List[float], stds: List[float])

Apply only basic transformations such as Resize and Normalize.

Parameters
  • height – int specifying new height

  • width – int specifying new width

  • means – List of means for normalization

  • stds – List of stds for normalization

Returns

Albumentation compose transform object for validation dataset

Example

>>> from omdenalore.computer_vision.augmentations import Augmenter
>>> import cv2

>>> transform = Augmenter.get_val_transforms(
        height=256,
        width=256,
        means=[0.485, 0.456, 0.406],
        stds=[0.229, 0.224, 0.225],
    )

# Read an image with OpenCV and convert it to the RGB colorspace
>>> image = cv2.imread("image.jpg")
>>> image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
>>> transformed = transform(image=image)
>>> transformed_image = transformed["image"]

omdenalore.computer_vision.benchmark module

class omdenalore.computer_vision.benchmark.BenchmarkRunner(model_dir: str, num_warm_iter: int = 10, num_bench_iter: int = 50, precision: str = 'float16')

Bases: object

Base class which can be extended for inference or training benchmarks.

Parameters
  • model_dir (str) – path to folder containing params.json file

  • num_warm_iter (int) – number of iterations as warmup

  • num_bench_iter (int) – number of iterations to benchmark

  • precision (str) – precision to use for benchmarking

class omdenalore.computer_vision.benchmark.InferenceBenchmarkRunner(model_dir: str, precision: str = 'float32')

Bases: omdenalore.computer_vision.benchmark.BenchmarkRunner

Inference class extended from BenchmarkRunner

Parameters
  • model_dir (str) – path to folder containing params.json file

  • precision (str) – precision to use for benchmarking

run()

Run the inference benchmark on the model
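
Example

A hedged sketch based only on the signatures above; the model directory is a hypothetical placeholder and must contain a params.json file:

>>> from omdenalore.computer_vision.benchmark import InferenceBenchmarkRunner
>>> runner = InferenceBenchmarkRunner(model_dir="models/my_model", precision="float16")
>>> runner.run()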

omdenalore.computer_vision.benchmark.count_params(model: torch.nn.Module)

Returns the number of parameters of the model

Parameters

model (nn.Module) – neural network model
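
Example

A minimal sketch based on the signature above, assuming the count covers all parameters of the module:

>>> import torch.nn as nn
>>> from omdenalore.computer_vision.benchmark import count_params
>>> n_params = count_params(nn.Linear(10, 2))  # 10*2 weights + 2 biases = 22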

omdenalore.computer_vision.benchmark.cuda_timestamp(sync: bool = False, device=None)

Synchronizes the CUDA device if sync is true

Parameters
  • sync (boolean) – whether to synchronize the CUDA device

  • device (torch.cuda.device) – cuda device

omdenalore.computer_vision.benchmark.resolve_precision(precision: str)

Resolves the precision string into the corresponding data type

Parameters

precision (str) – precision data type passed in

omdenalore.computer_vision.benchmark.timestamp()

Returns the current time.perf_counter() value

omdenalore.computer_vision.green_pixel_detection module

class omdenalore.computer_vision.green_pixel_detection.GreenPixelDetector

Bases: object

Detect the green pixels in an image

static detect(pretrained_weights, optimizer, loss, input_size)
Parameters
  • pretrained_weights (numpy ndarray) – pretrained weight matrix

  • optimizer (optimizer object) – optimization strategy

  • loss (loss object) – loss function

  • input_size (tuple) – image dimensions as (img_height, img_width, input_channel)

Returns

compiled Keras model

Return type

Keras model object
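
Example

A hedged sketch based on the parameter list above; the weights, optimizer, loss, and input size shown are hypothetical placeholder choices:

>>> from omdenalore.computer_vision.green_pixel_detection import GreenPixelDetector
>>> from tensorflow.keras.optimizers import Adam
>>> model = GreenPixelDetector.detect(
        pretrained_weights=None,  # or a pretrained weight matrix
        optimizer=Adam(learning_rate=0.01),
        loss="binary_crossentropy",
        input_size=(256, 256, 3),
    )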

omdenalore.computer_vision.image_features module

class omdenalore.computer_vision.image_features.ImageFeatures

Bases: object

Class containing image feature methods

static brief_features(image_path: str) List[int]

Detect BRIEF features from an image path

Parameters

image_path (str) – Path of the input image

Returns

BRIEF keypoints detected from an image

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.brief_features(image_path="sample.jpeg")

static describe_zernike_moments(image_path: str) List[Tuple[int, float]]

Calculates the Zernike moments of the images at image_path and returns a list of features with corresponding image names. Zernike moments are great for describing the shapes of objects.

Parameters

image_path (string) – Path of images

Returns

a tuple of the contours and shapes

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.describe_zernike_moments(image_path="sample.jpeg")

static find_contours(image_path: str, show: bool = False) List[List[int]]

Detect contours from an image path

Parameters
  • image_path (str) – Path of the input image

  • show (boolean) – Whether to show the contours on a plot using matplotlib

Returns

contours detected from an image

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.find_contours(image_path="sample.jpeg", show=True)

static get_hough_lines(image_path: str, show: bool = False) List[List[float]]

Detect lines from an image path

Parameters
  • image_path (str) – Path of the input image

  • show (boolean) – Whether to show the lines on a plot using matplotlib

Returns

lines detected from an image

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.get_hough_lines(image_path="sample.jpeg", show=True)

static haralicks_features(image_path: str) List[Tuple[str, float]]

Detects Haralick texture features (averaged over four directions) from images inside a folder with certain extensions and returns an array of the retrieved features

Parameters

image_path (string) – Path of the folder which contains the png images

Returns

array of extracted features as (image_name, features) tuples

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.haralicks_features(image_path="sample.jpeg")

static sift_features(image_path: str, grayscale: bool = True) List[float]

Detect SIFT features from an image path

Parameters
  • image_path (str) – Path of the input image

  • grayscale (boolean) – whether to convert the image to grayscale

Returns

SIFT keypoints detected from an image

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.sift_features(image_path="sample.jpeg")

static surf_features(image_path: str) List[float]

Detect SURF features from an image path

Parameters

image_path (str) – Path of the input image

Returns

SURF keypoints detected from an image

Example

>>> from omdenalore.computer_vision.image_features import ImageFeatures
>>> ImageFeatures.surf_features(image_path="sample.jpeg")

omdenalore.computer_vision.loss_functions_semantic module

class omdenalore.computer_vision.loss_functions_semantic.LossFunctions

Bases: object

Various loss functions using Keras

static class_tversky(y_true, y_pred)

Returns the class-wise Tversky index for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> loss = LossFunctions.class_tversky(y_true, y_pred)

static confusion_matrix(y_true, y_pred)

Returns confusion matrix for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> confusion_matrix = LossFunctions.confusion_matrix(y_true, y_pred)

static dice_coef(y_true, y_pred)

Returns dice coefficient for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> dice_coef = LossFunctions.dice_coef(y_true, y_pred)

static dice_coef_loss(y_true, y_pred)

Returns dice_coef_loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> loss = LossFunctions.dice_coef_loss(y_true, y_pred)

static dice_loss(y_true, y_pred)

Returns dice_loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> loss = LossFunctions.dice_loss(y_true, y_pred)

static focal_tversky(y_true, y_pred)

Returns focal Tversky loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> focal_tversky = LossFunctions.focal_tversky(y_true, y_pred)

static focal_tversky_loss(y_true, y_pred)

Returns focal Tversky loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> loss = LossFunctions.focal_tversky_loss(y_true, y_pred)

static generalized_dice_coefficient(y_true, y_pred)

Returns generalized_dice_coefficient for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions

>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> gdc = LossFunctions.generalized_dice_coefficient(
        y_true, y_pred
    )

static log_cosh_dice_loss(y_true, y_pred)

Returns log_cosh_dice loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> lcdl = LossFunctions.log_cosh_dice_loss(y_true, y_pred)

static true_negative(y_true, y_pred)

Returns true negatives for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> tn = LossFunctions.true_negative(y_true, y_pred)

static true_positive(y_true, y_pred)

Returns True positives for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> tp = LossFunctions.true_positive(y_true, y_pred)

static tversky_index(y_true, y_pred)

Returns Tversky index for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> tversky_index = LossFunctions.tversky_index(y_true, y_pred)

static tversky_loss(y_true, y_pred)

Returns Tversky loss for truth vs prediction values

Parameters
  • y_true (tensor) – A tensor of the same shape as y_pred

  • y_pred (tensor) – A tensor resulting from a softmax

Returns

Output tensor.

Example

>>> from omdenalore.computer_vision.loss_functions_semantic import LossFunctions
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [0.0, 1.0, 3.0]
>>> tversky_loss = LossFunctions.tversky_loss(y_true, y_pred)

omdenalore.computer_vision.losses module

class omdenalore.computer_vision.losses.LabelSmoothingCrossEntropy(*args: Any, **kwargs: Any)

Bases: torch.nn.Module

NLL loss with label smoothing. Credits: timm library

forward(x, target)

Computes the label-smoothed negative log-likelihood loss between predictions x and targets.
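
Example

A hedged usage sketch; constructor arguments are not documented here, so defaults are used, and the tensor shapes are hypothetical:

>>> import torch
>>> from omdenalore.computer_vision.losses import LabelSmoothingCrossEntropy
>>> criterion = LabelSmoothingCrossEntropy()
>>> logits = torch.randn(8, 10)  # batch of 8 samples, 10 classes
>>> target = torch.randint(0, 10, (8,))
>>> loss = criterion(logits, target)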

omdenalore.computer_vision.metrics module

omdenalore.computer_vision.object_detection module

class omdenalore.computer_vision.object_detection.ObjectDetection(image_path: str)

Bases: object

Various object detection methods

hog_detection() List[Tuple[int, int, int, int]]

Histogram of gradients-based people detection function

Returns

List of (x, y, w, h) tuples, one for each bounding box of a person detected in the image

Return type

list

Example

>>> from omdenalore.computer_vision.object_detection import ObjectDetection
>>> detector = ObjectDetection(image_path="sample.jpeg")
>>> hog_regions = detector.hog_detection()

mobile_net() List[Tuple[str, numpy.ndarray]]

Use MobileNet_SSD Object detector to detect classes as mentioned below

["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

Parameters

image_path (str) – Path of the image

Returns

list of tuples of the form (label, [bbox])

Return type

list of tuples

Example

>>> from omdenalore.computer_vision.object_detection import ObjectDetection
>>> detector = ObjectDetection(image_path="sample.jpeg")
>>> results = detector.mobile_net()

omdenalore.computer_vision.semantic_segmentation module

class omdenalore.computer_vision.semantic_segmentation.SemanticSegemtationModel(num_classes: int, optimizer: tensorflow.python.keras.optimizer_v2.optimizer_v2, input_size: Tuple[int])

Bases: object

Returns a semantic segmentation model

Parameters
  • pretrained_weights (numpy ndarray) – pretrained weight matrix

  • input_size (tuple) – img_height, img_width, input_channel

  • optimizer (optimizer object) – optimization strategy

  • num_classes (int) – number of target classes

Returns

compiled Keras model

Return type

Keras model object

Example

>>> from omdenalore.computer_vision.semantic_segmentation import SemanticSegemtationModel
>>> from tensorflow.keras.optimizers import Adam
>>> num_classes = 10
>>> optimizer = Adam(learning_rate=0.01)
>>> input_size = (224, 224, 3)
>>> semantic_segmentation_model = SemanticSegemtationModel(num_classes, optimizer, input_size)
>>> model = semantic_segmentation_model()

omdenalore.computer_vision.utils module

class omdenalore.computer_vision.utils.Params(json_path: str)

Bases: object

Class that loads hyperparameters from a json file.

Example

>>> from omdenalore.computer_vision.utils import Params
>>> params = Params(json_path)
>>> print(params.learning_rate)
>>> params.learning_rate = 0.5  # change the value of learning_rate in params

property dict

Gives dict-like access to the Params instance by params.dict['learning_rate']

save(json_path: str)

Saves parameters to json file

Parameters

json_path – path to save the parameters to

update(json_path: str)

Loads parameters from json file

Parameters

json_path – path to load the parameters from
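
Example

A short sketch of the save/update round trip, continuing the Params example above; the file names are hypothetical:

>>> params.save("params_backup.json")
>>> params.update("params.json")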

omdenalore.computer_vision.utils.check_imshow()

Check if environment supports image displays

Returns

True if images can be displayed using OpenCV, False otherwise

Return type

Boolean

Example

>>> from omdenalore.computer_vision.utils import check_imshow
>>> imshow = check_imshow()

omdenalore.computer_vision.utils.compute_avg_precision(recall: List[float], precision: List[float]) Tuple[float, float, float]

Compute the average precision, given the recall and precision curves

Parameters
  • recall – The recall curve (list)

  • precision – The precision curve (list)

Returns

  • Average precision, precision curve, recall curve

Example

>>> from omdenalore.computer_vision.utils import compute_avg_precision
>>> avg_precision, precision_curve, recall_curve = compute_avg_precision(
        recall, precision,
    )

omdenalore.computer_vision.utils.load_image(path: str) Optional[Tuple[PIL.Image.Image, int, int]]

Load an image at path using PIL and return the Image object along with its width and height

Parameters

path (str) – path where the image to be loaded is

Returns

  • (PIL.Image.Image): Image object of the image

  • (int): width of the image

  • (int): height of the image

Example

>>> from omdenalore.computer_vision.utils import load_image
>>> img, width, height = load_image("sample.jpeg")

omdenalore.computer_vision.utils.show_cood(frame, x, y, font=cv2.FONT_HERSHEY_SIMPLEX, fontScale=1, thickness=2, color=(255, 0, 0), flag=True, radius=4, fill=-1, offset=(1, 1))

Shows the coordinates of the cursor in the OpenCV window.

Parameters
  • frame – OpenCV frame/window.

  • x – The x-coordinate of the point that is to be shown.

  • y – The y-coordinate of the point that is to be shown.

  • font – Coordinate text font.

  • fontScale – Font scale factor that is multiplied by the font-specific base size.

  • thickness – Thickness of the lines used to draw the text.

  • color – Text color.

  • flag – Default True; coordinate values are not shown if False.

  • radius – Radius of the circular coordinate point.

  • fill – Thickness of the circle outline, if positive. Negative values, like -1, mean that a filled circle is drawn.

  • offset – Text offset relative to the point coordinates.

Returns

Frame with a point at coordinates (x, y).

Return type

same as the input frame.

Example

>>> frame = cv2.imread("EXAMPLE_IMAGE.png")
>>> frame = show_cood(frame, x=100, y=100)
>>> cv2.imshow('frame',frame)
>>> cv2.waitKey(0)
>>> cv2.destroyAllWindows()

omdenalore.computer_vision.utils.translate_boxes(boxes: Sequence[Sequence[Any]], left: Tuple[int], top: Tuple[int]) Sequence[Sequence[Any]]

Translates a bounding box by moving its coordinates left by left pixels and up by top pixels.

Parameters
  • boxes (sequence) – list of box coordinates (label,left,top,right,bottom)

  • left (int) – Number of pixels to subtract from the horizontal coordinates of the bounding box. The box moves left when left > 0 and right when left < 0.

  • top (int) – Number of pixels to subtract from the vertical coordinates of the bounding box. The box moves up when top > 0 and down when top < 0.

Returns

list of new box coordinates

Return type

list, same as input

Example

>>> from omdenalore.computer_vision.utils import translate_boxes
>>> translated_boxes = translate_boxes(boxes, left, top)

omdenalore.computer_vision.utils.zoom_to_fill(image: numpy.ndarray, mask: numpy.ndarray, padding: int) numpy.ndarray

Use the mask to place the object at the center of the image, with padding

Parameters
  • image (numpy.array) – image from which the object is taken out of

  • mask (numpy.array) – 2d mask array

  • padding (int) – add black pixel padding around the image

Returns

Image array

Return type

numpy.array

Example

>>> from omdenalore.computer_vision.utils import zoom_to_fill
>>> padding = 1
>>> image_ = zoom_to_fill(image, mask, padding)

omdenalore.computer_vision.visualisations module

class omdenalore.computer_vision.visualisations.Plot

Bases: object

Plotting functionality for images

static plot_cm(true: List[float], preds: List[float], classes: List[int], figsize: Tuple[int, int] = (8, 6))

Plot unnormalized confusion matrix

Parameters
  • true – List of targets

  • preds – List of predictions

  • classes – List of classes

  • figsize – Tuple specifying (height, width)

Returns

matplotlib figure containing confusion matrix

Example

>>> from omdenalore.computer_vision.visualisations import Plot
>>> true = [1.0, 2.0, 3.0]
>>> preds = [2.0, 2.0, 3.0]
>>> classes = [0, 1, 2]
>>> fig = Plot.plot_cm(true, preds, classes)

static plot_hist(history)

Plots training and validation accuracy and loss stored in a history dict. The history dict contains the keys {train_acc, val_acc, train_loss, val_loss}, each mapping to a list of scores, one per epoch.

Parameters

history – Dict

Returns

loss and accuracy plots for train and val

Example

>>> from omdenalore.computer_vision.visualisations import Plot
>>> history = model.fit()  # Keras model
>>> Plot.plot_hist(history)

static unnormalize_image(img: List[List[List[int]]], means: List[float], stds: List[float])

Convert a normalized image back to an unnormalized image

Parameters
  • img – Tensor of shape (C, H, W)

  • means – List of means used for normalization

  • stds – List of stds used for normalization

Returns

unnormalized input tensor which can be used to display image

Example

>>> from omdenalore.computer_vision.visualisations import Plot
>>> img = ...
>>> means = [0.4948, 0.4910, 0.4921]
>>> stds = [0.2891, 0.2896, 0.2880]
>>> Plot.unnormalize_image(img, means, stds)

Module contents