Case Class Inheritance

Knoldus Blogs

Hello friends, as discussed in the previous blog, we know what case classes are and what boilerplate code they generate.

Now in this blog, we will discuss case class inheritance and how it can be achieved, since case-to-case inheritance is prohibited due to the equals and hashCode methods that case classes generate. We will also touch on the downsides of case classes.

Case class inheritance breaks the equivalence relation of equals (symmetry and transitivity). Any case class subclass that narrows the definition of equality must redefine equals, because pattern matching must behave exactly like equality.
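
To see concretely how symmetry can break, consider a minimal sketch (the Point and ColoredPoint names are purely illustrative, not from the original post): a plain class extends a case class and narrows equality by also comparing an extra field.

case class Point(x: Int, y: Int)

// A plain subclass that narrows equality by also comparing a colour field.
class ColoredPoint(x: Int, y: Int, val color: String) extends Point(x, y) {
  override def equals(other: Any): Boolean = other match {
    case that: ColoredPoint => super.equals(that) && color == that.color
    case _                  => false
  }
}

val p  = Point(1, 2)
val cp = new ColoredPoint(1, 2, "red")

p == cp   // true  -- Point's generated equals compares only x and y
cp == p   // false -- symmetry, and with it the equivalence relation, is broken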

Now the question arises: how can inheritance be achieved with case classes? The answer is simple: a case class can extend another class, trait or abstract class.

Using an Abstract Class –

  • Create an abstract class which encapsulates the common behaviour shared by all the classes inheriting it (see the sketch after this list).
  • Extend abstract class and create a case class…
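
To make these two steps concrete, here is a minimal sketch (the Vehicle, Car and Bike names are illustrative only):

// An abstract class that encapsulates the behaviour common to all vehicles.
abstract class Vehicle(val wheels: Int) {
  def describe: String = s"a vehicle with $wheels wheels"
}

// Case classes extending the abstract class.
case class Car(model: String)  extends Vehicle(4)
case class Bike(model: String) extends Vehicle(2)

val v: Vehicle = Car("Tesla")
v.describe   // "a vehicle with 4 wheels"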

View original post 218 more words

Case Class In Scala

Knoldus Blogs

Case classes are one of the great tools introduced by Scala into the functional programming world.

Let’s explore what’s hidden behind a case class.

A case class is similar to a regular class, but it generates a lot of boilerplate code for developers to use. By default, case classes are immutable, i.e. once declared they cannot be changed.

If we compile the case class using the scalac command and decompile it using the javap command, we can see the code that gets generated.

Case classes automatically generate the following:

  • Getter methods for the constructor arguments.

  • hashCode and equals methods.

  • Copy method.

  • Companion object with apply/unapply.

  • toString.

If a case class is a data holder, then the corresponding companion object is a service for that case class. This singleton object serves the purpose of both a factory and an extractor.

As a factory, it provides an apply method which allows us to create an object without the new keyword.
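
As a quick illustration of the generated members listed above, here is a minimal sketch with a hypothetical Employee case class (the class and its fields are made up for the example):

case class Employee(name: String, age: Int)

val e1 = Employee("John", 30)            // apply: no `new` keyword needed
val e2 = e1.copy(age = 31)               // copy: a new instance with one field changed

e1.name                                  // getter for a constructor argument
e1 == Employee("John", 30)               // generated equals compares field values: true
e1.toString                              // "Employee(John,30)"

e1 match {                               // unapply: the extractor used in pattern matching
  case Employee(name, age) => s"$name is $age years old"
}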


As an extractor, it provides…

View original post 288 more words

Simple Things You Can Learn From Cassandra Nodetool (Monitor/Manage) For DC/OS

Knoldus Blogs

As we know, DC/OS gives us minimal packages of services, such as Cassandra, for running on Apache Mesos. The Cassandra service only gives us the functionality for storing data, not other things like the native tools `nodetool`, `cqlsh`, `cassandra-stress` and more. Today, we are looking into a way of using `nodetool` for monitoring the nodes or cluster of a DC/OS Cassandra service.

Brief:

Nodetool is a native command-line interface for managing and monitoring a Cassandra cluster. While our Cassandra services are running on DC/OS, we can manage and monitor the cluster using nodetool. We need to execute nodetool on DC/OS using the Cassandra Docker image on any of the DC/OS Cassandra nodes by performing the steps below:

  1. First, ssh into any of the DC/OS Cassandra nodes: ssh @
  2. Second, execute the docker command to download the Cassandra Docker image and pass the nodetool command for execution, as below:
$ sudo docker run -t --net=host cassandra:3.0 nodetool tablestats -H .

Note: Sometimes nodetool…

View original post 852 more words

Akka HTTP Routing

Knoldus Blogs

In this blog, I try to describe some basic routes of Akka HTTP, route redirection, and how to handle extra parameters that come in a route.

Before writing the routes, you have to set up the environment for them.

 
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("my-system")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher

These are the variables that you need to have in scope before writing the routes.

Now let’s write the route –

import akka.http.scaladsl.server.Directives._

val route =
  path("hello") {
    get {
      complete("Say hello to akka-http")
    }
  }

Now you have to bind this route to a port:

import akka.http.scaladsl.Http
val bindingFuture = Http().bindAndHandle(route, "localhost", 8080)

Here we define the simplest route, “/hello”, and whatever response you want to return, you pass to the complete() method.
We then bind this route to localhost:8080.

Now let’s move on to another type of route –

If you want to send some segment in the…
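
The excerpt is cut off here, but as a minimal sketch of the idea, a route that extracts a path segment could look like the following (it reuses the Directives._ import from above; the route itself is illustrative, not taken from the original post):

val greetRoute =
  path("hello" / Segment) { name =>
    get {
      complete(s"Say hello to $name")
    }
  }

A GET request to /hello/akka would then respond with “Say hello to akka”.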

View original post 384 more words

Difference between RDD, DF and DS in Spark

Knoldus Blogs

In this blog, I try to cover the differences between RDD, DataFrame (DF) and Dataset (DS). Many of you may be a little confused about them, but after this blog everything should be clear.

With the Spark 2.0 release, there are three data abstractions that Spark officially provides: RDD, DataFrame and Dataset.

so let’s start some discussion about it.

Resilient Distributed Datasets (RDDs) – An RDD is a fault-tolerant collection of elements that can be operated on in parallel.
With an RDD, we can perform operations on data across the different nodes of a cluster in parallel, which helps to increase performance.

How we can create an RDD

The Spark context (sc) is used to create RDDs in Spark. An RDD can be created from the sources below (a short sketch follows this list):

  1. An external storage system like HDFS, HBase, or any data source offering a Hadoop InputFormat.
  2. parallelizing an…
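
As a minimal sketch of both options on the spark-shell (the file path and the collection are placeholders):

// 1. From an external storage system, e.g. a text file on HDFS or the local file system.
val linesRdd = sc.textFile("hdfs:///data/sample.txt")

// 2. By parallelizing an existing collection in the driver program.
val numbersRdd = sc.parallelize(List(1, 2, 3, 4, 5))

numbersRdd.map(_ * 2).collect()   // Array(2, 4, 6, 8, 10)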

View original post 659 more words

Automatic Deployment Of Lagom Service On ConductR On DCOS

Knoldus Blogs

In our previous blog, we saw how to deploy a Lagom service on ConductR – https://blog.knoldus.com/2017/05/25/deploying-the-lagom-service-on-conductr/

In this blog, we will create a script to deploy your Lagom service on ConductR running on top of DC/OS.

There are two types of automatic deployment that can be done –

  1. Deploying from scratch (stop the current Lagom bundle and deploy the new one)
  2. Rolling/incremental deployment (overriding the already running Lagom bundle)

Note: Currently, for automatic deployment on ConductR running on DC/OS, one needs the Enterprise version of the DC/OS cluster setup rather than the open-source one. The alternative is to disable authentication.

  • Deploying From Scratch –

In this approach, one needs to stop and unload the running bundle and then run the script below.

The script and its details are as follows –

View original post 360 more words

Spark Cassandra Connector On Spark-Shell

Knoldus Blogs

Using Spark-Cassandra-Connector on Spark Shell

Hi all, in this blog we will see how we can execute our Spark code on the Spark shell with Cassandra. This is very useful for testing or learning, when we want to run our code on the Spark shell rather than in an IDE.

Here we will use Spark version 1.6.2.

You can download this version from Here,

and, of course, the appropriate Spark Cassandra connector:

Cassandra connector – spark-cassandra-connector_2.10-1.6.2.jar

You can download the connector (jar file) from Here.

So let’s begin:

Step 1) Create any test table in your Cassandra (I am using Cassandra version 3.0.10).

CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set<text>,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);
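
Once the table exists, a minimal sketch of reading it from the Spark shell with the connector could look like the following (the connection host is an assumption; the keyspace and table are the ones created above):

// Start the shell with the connector jar on the classpath, for example:
// spark-shell --jars spark-cassandra-connector_2.10-1.6.2.jar --conf spark.cassandra.connection.host=127.0.0.1

import com.datastax.spark.connector._

// Read the table created in Step 1 as an RDD of CassandraRow.
val moviesRdd = sc.cassandraTable("test_smack", "movies_by_actor")

moviesRdd.count()   // number of rows in the table
moviesRdd.first()   // a sample row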

View original post 303 more words