Akka HTTP Routing


In this blog, I describe some basic Akka HTTP routes, route redirection, and how to handle extra parameters that arrive in a route.

Before writing any routes, you have to set up the environment:

implicit val system = ActorSystem("my-system")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher

These are the implicit values you need to define before writing the routes.

Now let's write the route:

val route =
  path("hello") {
    get {
      complete("Say hello to akka-http")
    }
  }

Now you have to bind this route to a port:

val bindingFuture = Http().bindAndHandle(route, "localhost", 8080)

Here we define the simplest route, "/hello"; whatever response you want to return goes into the complete() method.
Then we bind this route to localhost:8080.

Now let's move on to another type of route.

If you want to send some segment in the…
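The excerpt is truncated here, but as a rough sketch of where it is heading, a path segment can be extracted with the Segment matcher. This example is my own, not from the original post, and assumes the standard akka-http routing DSL:

```scala
import akka.http.scaladsl.server.Directives._

// "/hello/<name>" — the Segment matcher captures one path segment as a String
val segmentRoute =
  path("hello" / Segment) { name =>
    get {
      complete(s"Say hello to $name")
    }
  }
```

A GET to /hello/akka would then respond with "Say hello to akka".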

View original post 384 more words


Difference between RDD , DF and DS in Spark


In this blog, I try to cover the differences between RDD, DF and DS. Many of you may be a little confused about them; don't worry, after this blog everything will be clear.

With the Spark 2.0 release, there are three types of data abstraction which Spark officially provides: RDD, DataFrame and Dataset.

So let's start the discussion.

Resilient Distributed Datasets (RDDs) – An RDD is a fault-tolerant collection of elements that can be operated on in parallel.
With an RDD, we can operate on data across the different nodes of a cluster in parallel, which helps improve performance.

How can we create an RDD?

The Spark context (sc) is used to create RDDs in Spark. An RDD can be created from:

  1. An external storage system like HDFS, HBase, or any data source offering a Hadoop InputFormat.
  2. parallelizing an…
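The two creation styles above can be sketched as follows. This is my own example, not from the original post, and it assumes a SparkContext named sc, as the Spark shell provides; the HDFS path is a placeholder:

```scala
// 1. From an external storage system, e.g. a file on HDFS or the local disk
val lines = sc.textFile("hdfs:///data/input.txt")

// 2. By parallelizing an existing Scala collection
val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

// Operations run in parallel across the cluster's partitions
val doubled = numbers.map(_ * 2)
println(doubled.reduce(_ + _))   // 2 + 4 + 6 + 8 + 10 = 30
```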

View original post 659 more words

Automatic Deployment Of Lagom Service On ConductR On DCOS


In our previous blog, we saw how to deploy a Lagom service on ConductR: https://blog.knoldus.com/2017/05/25/deploying-the-lagom-service-on-conductr/

In this blog, we will create a script to deploy your Lagom service on ConductR running on top of DC/OS.

There are two types of automatic deployment that can be done:

  1. Deploying from scratch (stop the current Lagom bundle and deploy the new one)
  2. Rolling/incremental deployment (overriding the already-running Lagom bundle)

Note: Currently, automatic deployment on ConductR running on DC/OS requires the Enterprise version of the DC/OS cluster setup rather than the open-source one. The alternative is to disable authentication.

  • Deploying From Scratch –

In this approach, you need to stop and unload the running bundle and then run the script below.

The script and its details are as follows.
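The actual script is in the original post; as a rough sketch of the from-scratch approach, assuming the standard conduct CLI and a bundle built with sbt (the bundle name and path are placeholders for your own service), it might look like this:

```shell
#!/usr/bin/env bash
set -e

BUNDLE_NAME="my-lagom-service"
BUNDLE_PATH="target/bundle/${BUNDLE_NAME}-v1.zip"

# Stop and unload whatever is currently running (ignore errors if nothing is)
conduct stop "${BUNDLE_NAME}" || true
conduct unload "${BUNDLE_NAME}" || true

# Load and run the new bundle
conduct load "${BUNDLE_PATH}"
conduct run "${BUNDLE_NAME}"

# Verify the deployment
conduct info
```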

View original post 360 more words

Spark Cassandra Connector On Spark-Shell



Hi all! In this blog, we will see how to execute Spark code against Cassandra on the Spark shell. This is very useful while testing or learning, when we want to run our code on the Spark shell rather than in an IDE.

Here we will use Spark version 1.6.2; you can download it here.

And, of course, its matching Spark Cassandra connector:

Cassandra Connector – spark-cassandra-connector_2.10-1.6.2.jar

You can download the connector (jar file) here.

So let's begin!

Step 1) Create a test table in your Cassandra (I am using Cassandra 3.0.10):

CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set&lt;text&gt;,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);
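The rest of the walkthrough is in the original post; as a rough sketch of where it goes, the table can be read from the Spark shell roughly like this. This is my own example, assuming the shell was started with the connector jar on the classpath (e.g. spark-shell --jars spark-cassandra-connector_2.10-1.6.2.jar --conf spark.cassandra.connection.host=127.0.0.1), and the actor name is a placeholder:

```scala
import com.datastax.spark.connector._

// Read the whole table as an RDD of CassandraRow
val movies = sc.cassandraTable("test_smack", "movies_by_actor")

// Filter on the partition key; the connector pushes this down to Cassandra
val byActor = movies.where("actor = ?", "Johnny Depp")
byActor.collect().foreach(println)
```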

View original post 303 more words

Event Sourcing with Eventuate


Hi all,

Knoldus organized an hour-long session on 3rd February 2017 at 4:00 PM. The topic was Event Sourcing with Eventuate. Many enthusiasts joined and learned from the session. I am sharing the slides of the session here. Please let me know if you have any questions related to the linked slides.

You can also watch the video on YouTube:

You can find complete code here.

Thanks … !!!


View original post

Lambda Architecture with Spark


Hello folks,

Knoldus  organized a knolx session on the topic : Lambda Architecture with Spark.

The presentation covers the Lambda architecture and its implementation with Spark. In the presentation, we discuss the components of the Lambda architecture: the batch layer, the speed layer and the serving layer. We also discuss its advantages and benefits with Spark.

You can watch the video of the presentation:

Here you can check the slides:

Thanks !!


View original post

Transaction Management in Cassandra


As most of us come from a SQL background, and SQL has ruled the market for ages, transactions are something we are fond of.
While Cassandra does not support full ACID transactions, it does give you the "AID" among them.
That is, writes to Cassandra are atomic, isolated, and durable in nature. The "C" of ACID, consistency, does not apply to Cassandra, as there is no concept of referential integrity or foreign keys.

Cassandra lets you tune your consistency level as per your needs. You can have either partial or full consistency; that is, you might want a particular request to complete if just one node responds, or you might want to wait until all nodes respond.
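As an illustration, in cqlsh the consistency level can be set per session; ONE, QUORUM and ALL are among the standard levels:

```sql
-- Wait for a single replica to respond
CONSISTENCY ONE;

-- Wait for a majority of replicas
CONSISTENCY QUORUM;

-- Wait for every replica
CONSISTENCY ALL;
```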

Here we will talk about the ways we can implement the so-called transaction concept in Cassandra.

Light Weight Transactions
These are also known as compare-and-set transactions.
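The post is truncated here; as an illustrative sketch (my own, with a hypothetical users table), a lightweight transaction adds an IF condition so the write only applies when the condition holds:

```sql
-- Insert only if the row does not already exist
INSERT INTO users (username, email)
VALUES ('jdoe', 'jdoe@example.com')
IF NOT EXISTS;

-- Conditional update: compare-and-set on an existing column
UPDATE users SET email = 'new@example.com'
WHERE username = 'jdoe'
IF email = 'jdoe@example.com';
```

Each such statement returns an [applied] column indicating whether the condition held and the write took effect.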

View original post 777 more words