Simple Things You Can Learn From Cassandra Nodetool (Monitor/Manage) For DC/OS


As we know, DC/OS gives us minimal packages of services, such as Cassandra, for running on Apache Mesos. The Cassandra service only gives us the functionality for storing data, not other things like the native tools `nodetool`, `cqlsh`, `cassandra-stress`, and more. Today, we are looking into a way of using `nodetool` to monitor DC/OS Cassandra service nodes or the cluster.


Nodetool is a native command line interface for managing and monitoring a Cassandra cluster. While our Cassandra services are running on DC/OS, we can manage and monitor the cluster using nodetool. We can execute nodetool on DC/OS using the Cassandra Docker image on any of the DC/OS Cassandra nodes by performing the steps below:

  1. First, ssh into any of the DC/OS Cassandra nodes: ssh @
  2. Second, execute the docker command to download the Cassandra Docker image and pass the nodetool command for execution, as below:
$ sudo docker run -t --net=host cassandra:3.0 nodetool tablestats -H .

Note: Sometimes nodetool…

View original post 852 more words


Akka HTTP Routing


In this blog, I try to describe some basic routes of Akka HTTP, redirection of a route, and how to handle extra parameters that come in a route.

So before writing the routes, you have to set up the environment for them.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("my-system")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher

These are the variables that you need to define before writing the routes.

So now let’s write the route –

val route =
  path("hello") {
    get {
      complete("Say hello to akka-http")
    }
  }

Now you have to bind this route to a port:

val bindingFuture = Http().bindAndHandle(route, "localhost", 8080)

Here we define the simplest route, “/hello”; whatever response you want to return, you give to the complete() method.
Then we bind this route to localhost:8080.

So let’s move to another type of route –

If you want to send some segment in the…
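As a hedged sketch of where this is heading (the route name and greeting below are assumptions, not from the original post), a route that extracts a path segment might look like:

```scala
import akka.http.scaladsl.server.Directives._

// Hypothetical route: the Segment matcher extracts one path element,
// so GET /user/alice binds name = "alice".
val userRoute =
  path("user" / Segment) { name =>
    get {
      complete(s"Hello, $name")
    }
  }
```

As with the “/hello” route above, this value would then be passed to Http().bindAndHandle to serve requests.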

View original post 384 more words

Difference between RDD , DF and DS in Spark


In this blog I try to cover the differences between RDD, DF, and DS. Many of you may be a little confused about RDD, DF, and DS, so don’t worry — after this blog everything will be clear.

With the Spark 2.0 release, there are three types of data abstractions which Spark now officially provides for use: RDD, DataFrame, and DataSet.

So let’s start some discussion about them.

Resilient Distributed Datasets (RDDs) – An RDD is a fault-tolerant collection of elements that can be operated on in parallel.
With an RDD, we can perform operations on data across the different nodes of the same cluster in parallel, which helps increase performance.

How can we create an RDD?

The Spark context (sc) helps to create RDDs in Spark. It can create an RDD from –

  1. external storage system like HDFS, HBase, or any data source offering a Hadoop InputFormat.
  2. parallelizing an…
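Both creation paths can be sketched from spark-shell, where `sc` is the SparkContext the shell provides (the HDFS path and sample data below are made-up placeholders):

```scala
// 1. From an external storage system (hypothetical HDFS path):
val lines = sc.textFile("hdfs://namenode:8020/data/input.txt")

// 2. By parallelizing an existing collection in the driver program:
val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

// Operations on the RDD run in parallel across the cluster's partitions:
val doubled = numbers.map(_ * 2)
```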

View original post 659 more words

Automatic Deployment Of Lagom Service On ConductR On DCOS


In our previous blog we saw how to deploy a Lagom service on ConductR –

In this blog we will create a script to deploy your Lagom service on ConductR running on top of DC/OS.

There are two types of automatic deployment that can be done –

  1. Deploying from scratch (stop the current Lagom bundle and deploy the new one)
  2. Rolling/incremental deployment (overriding the already running Lagom bundle)

Note* – Currently, for automatic deployment on ConductR running on DC/OS, one needs the Enterprise version of the DC/OS cluster setup rather than the open source one. Another way is to disable authentication.

  • Deploying From Scratch –

In this approach, one needs to stop and unload the running bundle and then run the script below.

The script and its details are as follows –

View original post 360 more words

Spark Cassandra Connector On Spark-Shell


Using Spark-Cassandra-Connector on Spark Shell

Hi all, in this blog we will see how we can execute our Spark code on the spark shell using Cassandra. This is very efficient for testing or learning, where we have to execute our code on the spark shell rather than in an IDE.

Here we will use Spark version 1.6.2.

You can download this version from Here

and of course its appropriate Spark Cassandra connector:

Cassandra Connector – spark-cassandra-connector_2.10-1.6.2.jar

You can download the connector (jar file) from Here.

So let’s begin:

Step 1) Create any test table in your Cassandra (I am using Cassandra version 3.0.10).

CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set&lt;text&gt;,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);
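Once spark-shell is started with the connector jar on the classpath and `spark.cassandra.connection.host` pointing at a Cassandra node, reading this table might look like the following sketch; the host, filter, and count are illustrative assumptions, not from the original post:

```scala
// Assumes spark-shell was launched e.g. with:
//   --jars spark-cassandra-connector_2.10-1.6.2.jar
//   --conf spark.cassandra.connection.host=127.0.0.1
import com.datastax.spark.connector._

// Full scan of the table as an RDD of CassandraRow
val movies = sc.cassandraTable("test_smack", "movies_by_actor")

// Illustrative query: count movies released since 2010
val recent = movies.filter(_.getInt("release_year") >= 2010)
println(recent.count())
```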

View original post 303 more words

Event Sourcing with Eventuate


Hi all,

Knoldus had organized an hour-long session on 3rd February 2017 at 4:00 PM. The topic was Event Sourcing with Eventuate. Many enthusiasts joined and learned from the session. I am sharing the slides of the session here. Please let me know if you have any questions related to the linked slides.

You can also watch the video on YouTube:

You can find the complete code here.

Thanks … !!!


View original post

Lambda Architecture with Spark


Hello folks,

Knoldus organized a knolx session on the topic: Lambda Architecture with Spark.

The presentation covers lambda architecture and its implementation with Spark. In the presentation we discuss the components of lambda architecture, like the batch layer, speed layer, and serving layer. We also discuss its advantages and benefits with Spark.

You can watch the video of the presentation:

Here you can check the slides:

Thanks !!


View original post