Prometheus Installation

Goal:
This blog will help you install Prometheus on a Linux server and run it as a systemd service.

Installation Steps:

    Create a prometheus user with no home directory and no login shell.
    sudo useradd --no-create-home --shell /bin/false prometheus
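
    You can quickly confirm the user exists (an optional check, not part of the original steps):
    id prometheus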

    Create directories for the Prometheus configuration files and data, and give ownership to the prometheus user you just created.
    sudo mkdir /etc/prometheus
    sudo mkdir /var/lib/prometheus
    sudo chown prometheus:prometheus /etc/prometheus
    sudo chown prometheus:prometheus /var/lib/prometheus
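
    Optionally, verify that both directories now exist and are owned by the prometheus user:
    ls -ld /etc/prometheus /var/lib/prometheus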

    Download and extract Prometheus from GitHub.
    curl -LO https://github.com/prometheus/prometheus/releases/download/v2.3.2/prometheus-2.3.2.linux-amd64.tar.gz 
    tar -xvf prometheus-2.3.2.linux-amd64.tar.gz
    mv prometheus-2.3.2.linux-amd64 prometheus-files
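
    Optionally, you can verify the integrity of the downloaded archive by comparing its SHA-256 checksum with the value published on the Prometheus releases page (the expected value is not reproduced here):
    sha256sum prometheus-2.3.2.linux-amd64.tar.gz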
    

    Copy the prometheus and promtool binaries to /usr/local/bin and give ownership to the prometheus user.
    sudo cp prometheus-files/prometheus /usr/local/bin/
    sudo cp prometheus-files/promtool /usr/local/bin/
    sudo chown prometheus:prometheus /usr/local/bin/prometheus
    sudo chown prometheus:prometheus /usr/local/bin/promtool
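
    As an optional sanity check (not part of the original post), confirm the binaries run from their new location:
    prometheus --version
    promtool --version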
    

    Copy the console and console library files to the configuration directory we created and give ownership to the prometheus user.
    sudo cp -r prometheus-files/consoles /etc/prometheus
    sudo cp -r prometheus-files/console_libraries /etc/prometheus
    sudo chown -R prometheus:prometheus /etc/prometheus/consoles
    sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
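
    You can confirm the console files landed in the configuration directory:
    ls -l /etc/prometheus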
    

    Open the Prometheus YAML configuration file and add the details of the servers you want Prometheus to scrape and monitor.
    sudo vi /etc/prometheus/prometheus.yml
    

    Paste the content below into the file.
    global:
      scrape_interval: 10s
         
    scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
          - targets: ['localhost:9090']
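
    If you want to scrape additional servers, add more jobs under the existing scrape_configs section. The entry below is a hypothetical example for a node_exporter target; the job name, host, and port are placeholders you should adjust to your environment:
      - job_name: 'node'
        scrape_interval: 10s
        static_configs:
          - targets: ['your-server-ip:9100']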
    

    Give ownership of the file to the prometheus user.
    sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
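
    Before starting the service, you can optionally validate the configuration syntax with promtool, which was copied to /usr/local/bin earlier:
    promtool check config /etc/prometheus/prometheus.yml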
    

    Create a systemd service file so you can start, stop, and reload Prometheus as a daemon.
    sudo vi /etc/systemd/system/prometheus.service
    

    Paste the content below, then save and close the file.
    [Unit]
    Description=Prometheus
    Wants=network-online.target
    After=network-online.target
         
    [Service]
    User=prometheus
    Group=prometheus
    Type=simple
    ExecStart=/usr/local/bin/prometheus \
            --config.file /etc/prometheus/prometheus.yml \
            --storage.tsdb.path /var/lib/prometheus/ \
            --web.console.templates=/etc/prometheus/consoles \
            --web.console.libraries=/etc/prometheus/console_libraries
         
    [Install]
    WantedBy=multi-user.target
        

    Reload systemd, then start Prometheus and check its status.
    sudo systemctl daemon-reload
    sudo systemctl start prometheus
    sudo systemctl status prometheus
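
    A few optional follow-up commands, assuming the service started cleanly: enable it so it comes up on boot, check the recent logs, and confirm the HTTP endpoint is serving metrics. You can also open http://<your-server-ip>:9090 in a browser to reach the Prometheus web UI.
    sudo systemctl enable prometheus
    sudo journalctl -u prometheus --no-pager -n 20
    curl -s http://localhost:9090/metrics | head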
    

 

Hope you are able to install Prometheus. Comment below if you have any doubts and I will try to help ASAP.
