Development of a Chatbot to Improve Virtual Infrastructure Operational Efficiency


Introduction

VMware vSphere vCenter is the management platform for the software-defined datacentre that we offer to our customers, but we often hear customers complain about the complexity of navigating multiple tabs to find infrastructure-related information. We also hear complaints about the user experience and the difficulty of performing common tasks in the current UI, the vSphere Client. Customers who are IT admins face a further management hurdle: they must stay connected to their internal network via VPN to execute any command, for example 'Create a Virtual Machine in my Bangalore datacentre'.

Executives who wish to review the infrastructure and workloads in a meeting must be familiar with the technical terminology and the navigation paths in the UI.

This is where our idea of a virtual assistant was born. Imagine if you could execute any task on your private cloud environment using just your phone, from anywhere in the world. You could issue commands to create, manage and delete virtual machines and identify the costs of your private cloud, all through a single interface.

The main features of our solution are:

  • Eliminate the need for a VPN.
  • 24-hour access to your private network.
  • Ease of use.
  • Information at your fingertips using natural language.
  • Perform basic provisioning and management tasks on the workloads.

The solution is a chatbot built with Python that uses deep learning to interpret user queries in natural language, interacts with the vCenter Server Appliance to extract the required information, and presents it back to the user on the chatbot UI.

Chapter 2: Architecture Overview

The overall architecture is built on the Django web framework, which provides the web services for the chatbot user interface.

A Django view implements the logic for processing the user's request.

Once the request is received, we use the NLTK libraries for Python for text processing and classification. NLTK is an ideal suite for our project, since it is one of the leading platforms for working with human language data in Python programs.
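As an illustration of this step, a minimal sketch of tokenizing and stemming a user query with NLTK is shown below; the query string is only an example.

    import nltk
    from nltk.stem.lancaster import LancasterStemmer

    nltk.download('punkt')  # tokenizer data, needed once

    stemmer = LancasterStemmer()

    # Example user query
    sentence = "How many hosts in cluster vm-cluster-1?"

    # Split the sentence into words, then reduce each word to its root form
    tokens = nltk.word_tokenize(sentence)
    stems = [stemmer.stem(w.lower()) for w in tokens]
    print(stems)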

The challenge of classifying and contextualizing the user input is handled by importing a Keras/TensorFlow model. TensorFlow is an end-to-end open-source platform for machine learning, assisted by Keras for deep learning, and together they make the chatbot as interactive as possible.

The vectors are then tagged with labels, which are used to interact with vSphere vCenter through the vSphere SDK for Python, calling the specific API to fetch the data. The response is post-processed and a standardized statement is returned to the user.
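Purely as an illustration of this last step, the sketch below uses pyVmomi (one of the vSphere SDKs for Python; the choice of SDK, the vCenter address, credentials and cluster name are all placeholder assumptions) to answer the "hosts in a cluster" question.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; certificate verification disabled for the example
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=context)
    content = si.RetrieveContent()

    # Walk the inventory for cluster objects and count the hosts in the requested cluster
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name == "vm-cluster-1":
            print(f"there are {len(cluster.host)} hosts in this cluster")
    view.Destroy()

    Disconnect(si)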

Why a custom model rather than already available chatbot libraries?

The advantage of this model is that it processes the user-provided statements while also giving us the power of extensibility for future development of the project. The custom models and patterns for natural language processing therefore become part of the core module, which can be customized and optimized for our specific use cases.

Now we also get benefit of using custom models which can help bot ask user which context they are referring to

- EX. user: How many hosts in a cluster.

bot: Please specify name of cluster.

user: vm-cluster-1

bot: there are 4 hosts in this cluster.
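A minimal, hypothetical sketch of how such a follow-up could be handled is given below; the intent tag matches the one in Figure 3, but the dictionary and helper function are illustrative stand-ins, not part of the current implementation.

    # Hypothetical context handling: remember which intent is waiting for a follow-up answer.
    # count_hosts() stands in for the vSphere SDK call sketched earlier.
    pending_intent = {}

    def count_hosts(cluster_name):
        return 4  # placeholder value for illustration

    def handle_message(user_id, intent, text):
        # If the previous turn asked for a cluster name, treat this message as the answer
        if pending_intent.get(user_id) == "Number_of_hosts":
            del pending_intent[user_id]
            return f"there are {count_hosts(text)} hosts in this cluster"

        if intent == "Number_of_hosts":
            # The cluster name is missing, so ask for it and remember the context
            pending_intent[user_id] = "Number_of_hosts"
            return "Please specify name of cluster."

        return "Sorry, I did not understand that."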

The list of libraries used is as below:

  •  NLTK
  •  LancasterStemmer
  •  NumPy
  •  Keras models, layers and optimizers
  •  Django models and views
  •  vSphere SDK for Python
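Under a typical setup, the corresponding imports might look roughly like the following; the exact module paths are assumptions based on common usage rather than the project code.

    import nltk
    from nltk.stem.lancaster import LancasterStemmer

    import numpy as np

    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras.optimizers import SGD

    from django.db import models          # Django models
    from django.http import JsonResponse  # used in Django views

    # pyVmomi as one of the vSphere SDKs for Python
    from pyVim.connect import SmartConnect
    from pyVmomi import vim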

Figure 1: Architecture. The Django predictor view takes the user input, uses NLTK for natural language lemmatization, processes the data to convert strings into tensor vectors, trains and predicts with a Keras/TensorFlow recurrent neural network model, and builds a standardized statement for the user from the response of the vSphere SDK API for Python.

Chatbot API with Keras/TensorFlow Model

The Keras deep learning library is used in this project to build a classification model. Keras performs the training running on top of the TensorFlow machine learning platform. The main data structures of Keras are layers and models, and we begin with the Sequential model. Figure 2 shows the Keras model import.
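Since only the import is shown in Figure 2, a hedged sketch of what the three-layer Sequential model described later in this chapter might look like is given below; the layer sizes and hyperparameters are assumptions, and train_x / train_y are the training arrays built further down.

    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras.optimizers import SGD

    # Three dense layers: bag-of-words input -> hidden layer -> softmax over intent tags
    model = Sequential()
    model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(len(train_y[0]), activation='softmax'))

    # Multiclass output, so categorical cross-entropy with a stochastic gradient descent optimizer
    sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])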

The chatbot intents and patterns for learning are defined in a JSON file. The advantage of this model is that no large vocabulary is required; the classification model is created from this small vocabulary. A small snippet from the code is attached below in Figure 3.

The intents array contains 14 entries; one entry is shown expanded below, with the remaining items elided.

    {
      "intents": [
        ...
        {
          "tag": "Number_of_hosts",
          "patterns": [
            "How many hosts in Cluster",
            "List the hosts in the cluster",
            "List the esxi host in the cluster",
            "How many hosts managed by vCenter",
            ...
          ],
          "responses": [ ... ],
          "context": [ ... ]
        },
        ...
      ]
    }

Figure 3: Snippet from the patterns and intents
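Before the stemming shown in Figure 4, the patterns from the JSON file are typically tokenized into a word list, a class list, and a list of (pattern, tag) documents. A minimal sketch of that step is given below; the file name and variable names are assumptions kept consistent with the later figures.

    import json
    import nltk

    # Load the intents/patterns file (file name is an assumption)
    with open('intents.json') as f:
        intents = json.load(f)

    words, classes, documents = [], [], []
    ignore_words = ['?']

    for intent in intents['intents']:
        for pattern in intent['patterns']:
            # Tokenize each training pattern and remember which tag it belongs to
            tokens = nltk.word_tokenize(pattern)
            words.extend(tokens)
            documents.append((tokens, intent['tag']))
            if intent['tag'] not in classes:
                classes.append(intent['tag'])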

The patterns are then processed to build the vocabulary. The stemmer imported earlier is used to process each word and produce a root form, which helps cover more combinations of the strings from user input. Figure 4 shows the stemming and class sorting.

    words = [stemmer.stem(w.lower()) for w in words if w not in ignore_words]
    words = sorted(list(set(words)))

    # Sort classes
    classes = sorted(list(set(classes)))

Figure 4: Words processing and sorting classes

The raw words are meaningless to the machine learning model, so they are translated into a bag of words represented as arrays. Figure 5 shows the training code that bucketizes the words from each sentence.

    for doc in documents:
        bag = []
        pattern_words = doc[0]
        pattern_words = [stemmer.stem(word.lower()) for word in pattern_words]
        for w in words:
            bag.append(1) if w in pattern_words else bag.append(0)

Figure 5: Bucketizing words from user input sentences

The training data (patterns and intents) is converted into arrays of the form [0, 1, 0, 1, …, 0]. The model is built with Keras and based on three layers. The classification output is a multiclass array, which helps identify the encoded intent. The array is then labeled with tags used to invoke the specific vSphere API to fetch the data from the vCenter Server inventory.
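Continuing the earlier sketches, converting each bag of words into training arrays and fitting the three-layer model might look roughly like this; the epoch count, batch size and save path are assumptions.

    import numpy as np

    training = []
    output_empty = [0] * len(classes)

    for doc in documents:
        # Bag of words for this pattern, built as in Figure 5
        pattern_words = [stemmer.stem(w.lower()) for w in doc[0]]
        bag = [1 if w in pattern_words else 0 for w in words]

        # One-hot encode the intent tag so the output is a multiclass array
        output_row = list(output_empty)
        output_row[classes.index(doc[1])] = 1
        training.append([bag, output_row])

    training = np.array(training, dtype=object)
    train_x = list(training[:, 0])
    train_y = list(training[:, 1])

    # Train the Keras model defined earlier and persist it for the Django app
    model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
    model.save('predictor/models/chatbot_model.h5')  # path is an assumption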

Django Integration

After indexing the data, creating the labeled arrays, and obtaining the vectors from the TensorFlow models, the core logic is essentially ready. The next step in the workflow is to build a web application. That is where I decided to use Django, since it is much easier to use with Python programs. Django offers the models, views and controllers that are the essential pieces of logic in web application development.

The model is the part of the web application that acts as a mediator between the webpage and the database; it implements the logic for the application data.

The endpoint is a view that acts as the mediator for API routing. Once the user's request has been processed, the corresponding API is invoked on vCenter for that request, and the response from the vCenter Server is formatted and returned to the user interface.
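A hedged sketch of what such a predictor view could look like is shown below; the endpoint name, the bag_of_words() helper and the fetch_from_vcenter() helper are hypothetical stand-ins for the classification code and the vSphere SDK calls described earlier.

    # predictor/views.py (illustrative sketch)
    import json
    import numpy as np
    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def predict(request):
        # The chatbot UI posts the raw user sentence as JSON
        body = json.loads(request.body)
        sentence = body.get('message', '')

        # bag_of_words(), words, classes and model come from the classification code above
        bag = bag_of_words(sentence, words)
        results = model.predict(np.array([bag]))[0]
        intent = classes[int(np.argmax(results))]

        # fetch_from_vcenter() stands in for the vSphere SDK call mapped to this intent
        answer = fetch_from_vcenter(intent, sentence)
        return JsonResponse({'intent': intent, 'response': answer})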

I have created a Django project, which creates a directory to house all the code. The default directory structure is as described in Figure 7.

Next, generate an app inside the main project and name it Predictor. This app holds the machine learning code behind the API set.

apps.py is where the config class for the app is defined; it is one-time execution code.

views.py contains the code for handling requests and runs every time a request comes in. The vectorization and classification logic is housed in this script.

This app is registered in INSTALLED_APPS by adding predictor to the settings file of the main project, /vcBot/settings.py.
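In /vcBot/settings.py this amounts to adding the app to the installed apps list, roughly:

    # vcBot/settings.py
    INSTALLED_APPS = [
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'predictor',   # the chatbot prediction app
    ]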

The next step is to create another folder inside predictor to store the trained models, and a view is added to support the classification logic in /vcBot/predictor/views.py.

The routing and mapping are added in /vcBot/urls.py.
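A minimal sketch of that routing is given below; the URL path and the predict view name are assumptions carried over from the earlier view sketch.

    # vcBot/urls.py
    from django.contrib import admin
    from django.urls import path
    from predictor import views

    urlpatterns = [
        path('admin/', admin.site.urls),
        path('api/predict/', views.predict, name='predict'),  # chatbot endpoint (path assumed)
    ]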

Summary

The project is at the mid-stage, with the necessary logic and framework implemented. The goal is to create a chatbot that lets users pull information and perform basic operational tasks without undergoing extensive training on the vSphere products or knowing the terminology required to navigate the default UI.

Directions for future work

  •  Create sub-modules/scripts using the vSphere Automation SDK for Python to interact with vCenter.
  •  Create a front end for the chatbot using HTML and JavaScript.
  •  Add further use cases as future enhancements for diagnostic usage, such as reading the logs on the ESXi server, assisting users with root cause analysis, and recommending knowledge base articles as a self-help approach. The chatbot could also assist technical support engineers with the daily task of reviewing logs for support requests.