Cybot™ Log Collector


The download link for the OVA will be shared by the ActiveBytes Support team.
Download the OVA and import it using your preferred hypervisor.

Importing the OVA – Hypervisor

Deploy a virtual machine from the OVA file and allocate resources to the VM as per the hardware prerequisites.

Configuration.

Once the installation is complete, log in to the workstation with the username and password provided by the Cybot Support team.
      Power on the machine.
      Click on the machine.
      Enter the username and password provided by Cybot Support to log in to the virtual machine.

 

Setting the IP for the Virtual Machine

In order to SSH into the VM via any SSH client, assign an IP address to the VM.

To configure the IP address, navigate to "Configure IP in Virtual Machines". That document will guide you through configuring an IP address in the virtual machine.

   

Setting up the host name

Once the IP address is assigned for the VM, configure the preferred host name for the data lake.

 Check the current hostname
            >>> hostnamectl
Change the hostname
            >>> sudo hostnamectl set-hostname <newhostname>
   Apply the changes
            >>> sudo reboot (this will reboot the machine)
Check the hostname again using
            >>> hostnamectl
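      For example, to set the hostname to cyb-logcollector (an illustrative name; replace it with your preferred hostname):
            >>> sudo hostnamectl set-hostname cyb-logcollector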

Partitioning / Mounting the Hard Disk

The ideal design for setting up a data lake includes two hard disks: one dedicated to the OS and the other for the data stored in the data lake.

Mounting steps
Check disk partition
      >>> lsblk
To format and mount the hard disk, follow the steps below
      >>> mkfs.ext4 <hard disk name>
               Sample: mkfs.ext4 /dev/sdb
      >>> mkdir /mnt/<provide a folder name>
              Sample: mkdir /mnt/cyb_datalake
      >>> mount <hard disk name> <mount path>
             Sample: mount /dev/sdb   /mnt/cyb_datalake


Recheck the disk partition and you should be able to see the mount point for the hard disk:
      >>> lsblk
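
Optionally, to keep the mount in place after a reboot, add an entry to /etc/fstab. The following is a sketch that assumes the /dev/sdb disk and /mnt/cyb_datalake mount point used in the samples above:
      >>> echo '/dev/sdb  /mnt/cyb_datalake  ext4  defaults  0 2' | sudo tee -a /etc/fstab
      >>> sudo mount -a
             (mount -a re-reads /etc/fstab and mounts every listed filesystem, so the entry can be verified without rebooting)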

                                    

 


Log Collection Setup

Log collection Support Matrix 

The log collector machine can gather logs from a wide range of top security and application devices. It currently supports the collection of logs from nearly 500 different solutions. The process involves configuring and forwarding logs to the Log Collector machine using the SYSLOG method. 

The full Log Collection Support Matrix, listing the solutions the log collector supports, will be shared by the Cybot Support Team on request. Feel free to send an email to cybot-support@active-bytes.com.


Log collection setup 

Log Collection Methods: 

  1. Syslog: Syslog is a widely-used protocol for collecting and forwarding log messages and event data. It is commonly employed in network and security environments to centralize the storage of log data. Syslog-enabled devices and applications generate log messages, which are then sent to a centralized syslog server or collector. This server aggregates and stores the logs for analysis, monitoring, and compliance purposes.
  2. API (Application Programming Interface): APIs provide a programmatic interface for interacting with software applications and services. In the context of log collection, organizations can use APIs to access and retrieve log data directly from various sources, such as cloud services, applications, or specialized log management platforms. Many modern systems and services offer RESTful APIs that allow users to programmatically query and retrieve log information. This approach offers flexibility in log collection and allows for more customized and real-time access to log data. Developers can create scripts or applications to pull logs from APIs and integrate them into their log analysis or monitoring tools.
Organizations often choose between syslog and API-based log collection methods based on their specific needs and the types of systems and services they use. Syslog is well-suited for traditional on-premises infrastructure and network devices, while APIs are valuable for collecting logs from cloud-based services, modern applications, and systems that offer programmatic access to log data. In some cases, a combination of both methods may be employed to create a comprehensive log collection strategy that covers a diverse range of sources.

High-Level Design of a Log Collector 

The log collection and processing system is designed to efficiently gather, parse, and store logs from various sources within the organization. This streamlined process involves a series of components working in tandem, ensuring seamless and reliable handling of log data. The journey begins with log sources emitting their data, which is then transmitted through a dedicated transporter. The transporter plays a crucial role in forwarding logs to the designated worker responsible for parsing and structuring the raw log information. Once parsed, the refined logs are forwarded to the data depot, where they are systematically stored for future analysis and reference.
  1. Log Sources:

    • Diverse log sources within the organization, such as applications, servers, and network devices, generate logs containing valuable information about system activities, errors, and user interactions.
    • Log sources emit data in various formats, necessitating a flexible and adaptive log collection system.
  2. Transporter:

    • The transporter acts as the intermediary layer between log sources and the processing components.
    • It efficiently collects logs from different sources, employing robust mechanisms for data integrity and reliability during the transmission process.
    • The transporter ensures secure and optimized delivery of logs to the next stage of processing.
  3. Worker:

    • The worker component is responsible for parsing and transforming raw log data into a structured and standardized format.
    • It employs parsing algorithms and rule sets to extract relevant information from the logs, facilitating easier analysis and correlation of data.
    • The worker is designed to handle large volumes of logs efficiently, ensuring timely processing and reducing bottlenecks in the overall system.
  4. Data Depot:

    • The data depot serves as the centralized repository for storing processed logs.
    • Structured logs from the worker are securely stored in a scalable and resilient storage infrastructure.
    • The data depot enables easy retrieval and analysis of historical log data, supporting a wide range of use cases such as troubleshooting, compliance, and performance monitoring.


Preparing for Log collection 

  1. Identify which security/application solution's logs you wish to forward to the Log Collector device.
  2. Please verify whether the SYSLOG protocol can be configured for the specific solution. 
  3. Please verify whether the Log-Collector device supports the solution to which you intend to forward logs, from the list of solutions provided. 

 

  • Configure Log Sender 

Configure the devices or systems that will send log messages to the syslog server. This can include network devices, servers, applications, and more.  

In most cases, you'll need to access the logging settings of each device and specify the IP address and Port along with the Protocol to be used by the syslog server. 
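
As an illustration, a Linux log source running rsyslog could forward its logs to the Log Collector with a drop-in configuration like the one below. This is only a sketch: the IP address 192.168.1.50 and port 514 are placeholders for your Log Collector's address and listening port, and the exact steps differ for each device or application.

      # /etc/rsyslog.d/50-forward-to-logcollector.conf
      # Forward all facilities and severities to the Log Collector.
      # A single @ sends over UDP; use @@ for TCP.
      *.* @192.168.1.50:514

      Apply the change
            >>> sudo systemctl restart rsyslog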

 

  • Configure Log Collector 

Set up the Log Collector device to receive and process log messages. This usually involves the following steps:
  1. Set up collector policies to enable log collection and parsing.
  2. Set up policies to ingest the parsed data to the Datalake.

Set up collector policies to enable log collection and parsing

  1. First, confirm that the logs sent to the Log Collector are being received correctly on the designated port. Use the following command to confirm log availability from the log source.

  • If Protocol used is TCP 

    >>>> nc -lnvp <Port Number> 

  • If Protocol used is UDP 

    >>>> nc -lu <Port Number> 

      Kindly wait for a moment, and you'll soon be able to observe all incoming logs. 

  2. Now set Log Collector policies for these incoming logs. Use the following steps to navigate to the Log Collector Tool.

  3. Navigate to the Log Collector Tool policies list.

>>>> cd /home/lc-cybot/log-collector/linux-x64/Policy 

  4. Here you will find the list of policies (in .json format).

  5. Create a policy for the log source you require.

>>>> nano <log_name>.json 


  6. Enter the following to create a policy for your log source.

    1. sample.json

{
  "Enabled": true,
  "ProviderKey": "<Provider Key>",
  "Source": {
    "Type": "<Protocol>",
    "PrefixMatch": false,
    "Attributes": {
      "IP": "<Machine IP>",
      "Port": <Port>
    }
  },
  "Target": {
    "Type": "<Protocol>",
    "Attributes": {
      "IP": "127.0.0.1",
      "Port": 9515
    }
  }
}


      In the above JSON sample, provide the necessary information for the following parameters (a filled-in example follows the list):
  1. ProviderKey:- Provide the Provider Key for the respective log source (e.g. P123456).
  2. Type (Source):- TCP or UDP; enter the protocol used for log forwarding from the respective solution.
  3. IP (Source):- You have the option to provide the IP address of the Log Collector, or you can use the loopback address 127.0.0.1.
  4. Port (Source):- Indicate the port to which the logs from the respective solution are forwarded.
  5. Type (Target):- TCP or UDP; enter the protocol to be used when forwarding the parsed logs to the ingestion port.
  6. IP (Target):- You have the option to provide the IP address of the Log Collector, or you can use the loopback address 127.0.0.1.
  7. Port (Target):- Specify the port to which the logs should be forwarded after the parsing process. The Log Ingestion tool will use this port to ingest data into the Datalake.
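
For illustration, a filled-in policy might look like the following. The ProviderKey, protocols, and source port shown are hypothetical placeholders; use the values that match your log source and ingestion setup.

{
  "Enabled": true,
  "ProviderKey": "P123456",
  "Source": {
    "Type": "UDP",
    "PrefixMatch": false,
    "Attributes": {
      "IP": "127.0.0.1",
      "Port": 514
    }
  },
  "Target": {
    "Type": "TCP",
    "Attributes": {
      "IP": "127.0.0.1",
      "Port": 9515
    }
  }
}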

You can now verify the parsed output logs on the Target port by using the following commands:
      
  • If Protocol used is TCP 

    >>>> nc -lnvp <Target Port Number>

  • If Protocol used is UDP 

    >>>> nc -lu <Target Port Number> 




Set up collector policies to ingest logs to the Datalake


Create a log collector policy to ingest data into the Datalake using the data ingestion tool.
Follow these steps to create a policy in the data ingestion tool to ingest data into the Datalake:
  1. Navigate to the Ingestor tool directory
               
     >>>> cd /home/lc-cybot/log-ingestor/config/
  2. Here you will be able to create a configuration file for any logs that you need to ingest to the Datalake. Find a sample log ingestion configuration below for reference.
                  Example:-  sample.conf
  3. Open sample.conf to edit the configuration file:

                input {
                  tcp {
                    port => 5000                        # Replace with the port you want to listen on
                    codec => json_lines                 # Assuming you are receiving JSON data; adjust the codec as needed
                  }
                }

                filter {
                  # You can add filters here to parse, transform, or enrich your data as needed
                }

                output {
                  opensearch {
                    hosts => "https://localhost:9200"   # Replace with your OpenSearch cluster URL
                    index => "your_index_name"          # Replace with the desired index name to find in the Datalake
                    user => "your_username"             # Replace with your OpenSearch username
                    password => "your_password"         # Replace with your OpenSearch password
                  }
                }
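
If the log ingestor is Logstash-based, as the pipelines.yml and configuration syntax suggest (this is an assumption, and the binary location below is a placeholder), the configuration can be syntax-checked before ingestion is started:
               >>>> /home/lc-cybot/log-ingestor/bin/logstash -f /home/lc-cybot/log-ingestor/config/sample.conf --config.test_and_exit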


Add multiple log ingestion configurations as pipelines

To add multiple pipeline configurations, follow the steps below
  1. Navigate to the Log Ingestion pipeline configuration
               >>>> cd /home/lc-cybot/log-ingestor/config/
  2. Open the pipelines.yml file and add an entry for each log ingestion configuration that should run as a pipeline. For each entry, set path.config to the path of the corresponding log configuration file (see the sketch below).
               >>>> nano pipelines.yml
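
A minimal sketch of pipelines.yml entries, assuming two hypothetical configuration files named sample.conf and firewall.conf in the config directory:

               - pipeline.id: sample
                 path.config: "/home/lc-cybot/log-ingestor/config/sample.conf"
               - pipeline.id: firewall
                 path.config: "/home/lc-cybot/log-ingestor/config/firewall.conf"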
                           
                                    
You can add multiple pipelines by adding more entries to pipelines.yml, as in the sketch above.

Optimizing Log Collector Pipeline Configuration for Improved System Performance

To enhance the log collector's performance after upgrading the log collector server's resources (CPU, memory, and storage), follow the steps below

Navigate to the Log Ingestion pipeline configuration
               >>>> cd /home/lc-cybot/log-ingestor/config/

Locate the pipelines.yml file and within it, increase the values of  pipeline.workers and pipeline.batch.size to improve performance as per requirements
               >>>> nano pipelines.yml

                         
                     
1. Pipeline Workers: Refers to the number of worker threads assigned to a specific log collector pipeline. These worker threads process incoming data, allowing parallel execution for better performance. You can configure the number of workers per pipeline to optimize resource utilization based on your system’s capabilities and the specific workload of each pipeline.

2. Pipeline Batch Size: This parameter determines the number of events log collector will process as a batch within a pipeline. Larger batch sizes can enhance processing efficiency, especially when handling data with high throughput. However, choosing an appropriate batch size should be balanced with available system resources and latency requirements to avoid overloading log collector. It’s important to tune this setting based on your specific use case.
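
As an illustration, these settings might appear as follows in pipelines.yml. The worker count and batch size shown are placeholder values; tune them to your server's CPU, memory, and workload.

               - pipeline.id: sample
                 path.config: "/home/lc-cybot/log-ingestor/config/sample.conf"
                 pipeline.workers: 4
                 pipeline.batch.size: 250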


Configuring Index Templates and Index Lifecycle Policies for Ingested Data

Once the data is ready to be ingested, configure the ILM policies to manage the index lifecycle and retention. To set up Index Templates and ILM policies, follow the steps below.

Please note: complete these steps before ingesting the log source into the Datalake.


To create an Index Template, navigate to Dev Tools from the Datalake UI.


Now follow these steps to create ILM policies and Index Templates in Dev Tools:
  1. Creating ILM
    PUT _plugins/_ism/policies/policy_for_<log_name>
    {
        "policy": {
            "policy_id": "policy_for_<log_name>",
            "description": "<log_name>",
            "schema_version": 16,
            "error_notification": null,
            "default_state": "hot",
            "states": [
                {
                    "name": "hot",
                    "actions": [
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "rollover": {
                                "min_index_age": "14h"
                            }
                        },
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "allocation": {
                                "require": {
                                    "temp": "warm"
                                },
                                "include": {},
                                "exclude": {},
                                "wait_for": false
                            }
                        }
                    ],
                    "transitions": [
                        {
                            "state_name": "warm",
                            "conditions": {
                                "min_index_age": "1d"
                            }
                        }
                    ]
                },
                {
                    "name": "warm",
                    "actions": [
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "read_write": {}
                        },
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "allocation": {
                                "require": {
                                    "temp": "cold"
                                },
                                "include": {},
                                "exclude": {},
                                "wait_for": false
                            }
                        }
                    ],
                    "transitions": [
                        {
                            "state_name": "cold",
                            "conditions": {
                                "min_index_age": "3d"
                            }
                        }
                    ]
                },
                {
                    "name": "cold",
                    "actions": [
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "read_write": {}
                        }
                    ],
                    "transitions": [
                        {
                            "state_name": "delete",
                            "conditions": {
                                "min_index_age": "8d"
                            }
                        }
                    ]
                },
                {
                    "name": "delete",
                    "actions": [
                        {
                            "retry": {
                                "count": 3,
                                "backoff": "exponential",
                                "delay": "1m"
                            },
                            "delete": {}
                        }
                    ],
                    "transitions": []
                }
            ],
            "ism_template": [
                {
                    "index_patterns": [
                        "<log_name>-*"
                    ],
                    "priority": 824
                }
            ]
        }
    }


In the above ILM policy, be sure to replace <log_name> with the appropriate index name for the log source you want to ingest into the Datalake. Also set the retention period for each index as required in the "min_index_age" field of each state. Each state can be identified from its "state_name",
e.g. {   "state_name": "delete"}
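
For illustration, assuming a hypothetical log source named fortigate, the policy would be created with "policy_id" set to "policy_for_fortigate" and "index_patterns" set to ["fortigate-*"], starting with:

    PUT _plugins/_ism/policies/policy_for_fortigate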

  2. Creating Index Template

    PUT _index_template/template_for_<log_name>
    {
      "index_patterns": ["<log_name>-*"],
      "template": {
       "settings": {
        "plugins.index_state_management.rollover_alias": "<log_name>-",
         "index" : {
            "routing" : {
              "allocation" : {
                "require" : {
                  "temp" : "hot"
                }
              }
            },
            "unassigned" : {
              "node_left" : {
                "delayed_timeout" : "1m"
              }
            },
            "number_of_shards" : "1",
            "number_of_replicas" : "1"
          }
        },
        "mappings" : {
          "dynamic_templates" : [
            {
              "strings" : {
                "match_mapping_type" : "string",
                "mapping" : {
                  "type" : "keyword"
                }
              }
            }
          ],
          "properties" : {
            "@timestamp" : {
              "type" : "date"
            },
            "@version" : {
              "type" : "keyword"
            },
            "account" : {
              "type" : "keyword"
            },
            "action" : {
              "type" : "keyword"
            },
            "amount" : {
              "type" : "keyword"
            },
            "bytesin" : {
              "type" : "keyword"
            },
            "bytesout" : {
              "type" : "keyword"
            },
            "command" : {
              "type" : "keyword"
            },
            "created_at" : {
              "type" : "date"
            },
            "cybDestCountry" : {
              "type" : "keyword"
            },
            "cybEventCategory" : {
              "type" : "keyword"
            },
            "cybEventOriginal" : {
              "type" : "keyword"
            },
            "cybEventType" : {
              "type" : "keyword"
            },
            "cybLogSource" : {
              "type" : "keyword"
            },
            "cybLogTime" : {
              "type" : "date"
            },
            "cybPolicyName" : {
              "type" : "keyword"
            },
            "cybProviderKey" : {
              "type" : "keyword"
            },
            "cybRuleId" : {
              "type" : "long"
            },
            "cybSourceCountry" : {
              "type" : "keyword"
            },
            "cybSubruleId" : {
              "type" : "keyword"
            },
            "dinterface" : {
              "type" : "keyword"
            },
            "dipv4" : {
              "type" : "ip"
            },
            "dnatipv4" : {
              "type" : "keyword"
            },
            "dnatport" : {
              "type" : "keyword"
            },
            "domain" : {
              "type" : "keyword"
            },
            "domainorigin" : {
              "type" : "keyword"
            },
            "dport" : {
              "type" : "keyword"
            },
            "event" : {
              "properties" : {
                "original" : {
                  "type" : "keyword"
                }
              }
            },
            "group" : {
              "type" : "keyword"
            },
            "host" : {
              "properties" : {
                "ip" : {
                  "type" : "keyword"
                }
              }
            },
            "host_name" : {
              "type" : "keyword"
            },
            "login" : {
              "type" : "keyword"
            },
            "object" : {
              "type" : "keyword"
            },
            "objectname" : {
              "type" : "keyword"
            },
            "objecttype" : {
              "type" : "keyword"
            },
            "packetsin" : {
              "type" : "keyword"
            },
            "packetsout" : {
              "type" : "keyword"
            },
            "policy" : {
              "type" : "keyword"
            },
            "process" : {
              "type" : "keyword"
            },
            "processid" : {
              "type" : "keyword"
            },
            "protname" : {
              "type" : "keyword"
            },
            "quantity" : {
              "type" : "keyword"
            },
            "reason" : {
              "type" : "keyword"
            },
            "recipient" : {
              "type" : "keyword"
            },
            "seconds" : {
              "type" : "keyword"
            },
            "sender" : {
              "type" : "keyword"
            },
            "session" : {
              "type" : "keyword"
            },
            "severity" : {
              "type" : "keyword"
            },
            "sinterface" : {
              "type" : "keyword"
            },
            "sipv4" : {
              "type" : "ip"
            },
            "size" : {
              "type" : "keyword"
            },
            "snatipv4" : {
              "type" : "keyword"
            },
            "snatport" : {
              "type" : "keyword"
            },
            "sport" : {
              "type" : "keyword"
            },
            "subject" : {
              "type" : "keyword"
            },
            "tag1" : {
              "type" : "keyword"
            },
            "tag2" : {
              "type" : "keyword"
            },
            "tag4" : {
              "type" : "keyword"
            },
            "url" : {
              "type" : "keyword"
            },
            "vendorinfo" : {
              "type" : "keyword"
            },
            "srcip" : {
              "type" : "ip"
            },
            "dstip": {
              "type": "ip"
            },
            "vmid" : {
              "type" : "keyword"
            }
          }
        }
       }
    }


  3. Create bootstrap Index

    PUT /%3C<log_name>-%7Bnow%2Fd%7D-000001%3E
    {
    "aliases": {
    "<log_name>-":{
    "is_write_index": true
    }
    }
    }
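
To confirm that the bootstrap index and its write alias were created, you can, for example, query the alias from Dev Tools (replace <log_name> as before):

    GET _cat/aliases/<log_name>-?v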

Now you can start log ingestion for the respective logs into the Datalake.

Kindly create an Index Pattern for the respective log sources to view them in the datalake.

Create Index Pattern in Datalake

An index pattern is a way of specifying and matching a set of index names. Index patterns are commonly used in various contexts, such as when defining mappings, searching across multiple indices, or setting up index aliases.

  1. Navigate to Stack Management in the Datalake as shown below:


  2. Open Stack Management, click on "Index Patterns", and then click on "Create Index Pattern" as shown below.





User Management

User Management in Cybot

Overview 

The Users section provides administrative capabilities for creating, managing, and controlling user accounts and permissions within CYBOT™. It empowers administrators to handle user registration, user management and access control, ensuring a secure and personalized user experience. Administrators can add, modify, or remove user accounts and assign appropriate roles to each user, ensuring that the tool is used securely and efficiently. 

 

Fig. Settings: Users 

Details 

  • The landing page displays the list of all existing users. 

  • The current user is listed along with a 'Current User' badge. 

  • The SSO status of each account can be viewed in the 'Is SSO User' column. 

  • The 'Status' column displays the status of each user account. The status can be either 'Active,' which indicates that the account is accessible, or 'Disabled,' which indicates that the account is disabled. 

 

Features 

The following features can be executed from the User Management section: 

1. Create a new user account 

  • Using the Add User button provided in the top-right corner, new users can be added to CYBOT™.




    • Initially, there will be a prompt to enter the user details, and the group to which the newly created user will be added can be selected. 

    • A structured approach to user access and permissions ensures that each user group has the appropriate level of control and responsibility within the tool. 

    • The three user categories in CYBOT™ are: 

    • User: Users have basic permissions within the tool. They can access and view certain information and features, typically for monitoring purposes but do not have the ability to make significant configuration changes or modifications. 

    • Admin: Admins have more extensive permissions compared to regular users. They can configure and manage settings, set up alerts, and have control over specific functionalities and are responsible for day-to-day management and maintenance of the tool. 

    • Super Admin: Super Admins have the highest level of permissions. They have full control and access to all aspects of CYBOT™ and can configure advanced settings, manage users, and make critical changes, making them responsible for the overall administration and security of the tool. 

    • The ‘SSO Access’ toggle button can be used to control SSO (Single Sign-On) access for individual accounts. Enabling this feature allows users to securely access multiple services and applications with a single set of login credentials. 

    • In the Modules section, the permission to Read, Write, or Delete data within each CYBOT™ module can be configured for the new user using the provided checkboxes. 

    • The Advanced Permissions button can be used to configure access and permissions for sub-modules within a module. Clicking the button will open an editable JSON file that lists the permissions, allowing users to make necessary configuration changes. 

     

    • Permissions can be changed by hovering over the required field in the JSON file. A pen icon will appear, and upon clicking it, a text box will be displayed for inputting the value, which can be saved using the check icon. 
       

     

     

    • If SSO access is not enabled for the newly created user, a popup containing the password for the new user will appear. Upon closing the popup, redirection will occur to the Users’ page. 

     

     

     

    • At the bottom, locate the 'Create User' button, click on it, and you will successfully create the new user account in CYBOT™. Kindly make sure to securely record the password for future reference. Additionally, users have the option to change their password after their initial login. 

     

    2. Edit an existing user account 

    • Using the Edit icon provided under the Manage column, details and permissions of an existing user can be edited. 

     

     

    Fig. Manage Users: Edit User 

    • In the Modules section, the permission to Read, Write, or Delete data within each CYBOT™ module can be configured for the selected user using the provided checkboxes. 

    • The user's ability to edit permissions and the extent to which they can modify the details and permissions of other users depends on the user category of the current user. 

     

    3. Disable a user account 

    • Using the Enable/Disable User icon provided under the Manage column, the access to CYBOT™ for the selected account can be enabled/disabled. 

     

    Fig. Manage Users: Disable User 

    • Note: The current user account cannot be disabled.




User Management in Datalake 

 

To introduce a new user to the Datalake, please begin by creating a user in the Datalake system. 

For creating a new SSO user in Datalake, kindly follow the steps below: 

  1. Navigate to the taskbar on the left side of the Home Page as shown below and click on "Security".

 

  2. Now open "Internal Users" and click on "Create User" to create a new user in Datalake.

                   

  3. Create the user by providing the Username, Password, and the appropriate role required for the user.

 


Note: You have now successfully created an SSO user in Datalake.

 

