Tuesday, August 8, 2017

Design interview questions

https://softwareengineering.stackexchange.com/questions/336282/design-elevator-system-algorithm-and-classes

https://www.quora.com/What-is-the-best-answer-to-Design-an-elevator-system

https://www.youtube.com/watch?v=fITuhLSwbt8

https://github.com/joeblau/sample-elevator-control-system

http://massivetechinterview.blogspot.com/2015/07/thought-works-object-oriented-design.html

http://blog.gssxgss.me/category/programming/java/

http://blog.gssxgss.me/

http://blog.gssxgss.me/binary-search-tree-learning-note-1/

http://thought-works.blogspot.com/2012/11/object-oriented-design-for-elevator-in.html

https://www.careercup.com/question?id=1808669

https://stackoverflow.com/questions/493276/modelling-an-elevator-using-object-oriented-analysis-and-design

https://tunaozkasap.wordpress.com/page/2/

http://practice.geeksforgeeks.org/problems/design-elevator



MOTHER of All Design questions 

http://www.shuati123.com/blog/categories/design/



For Spark-related stuff:


https://weishungchung.com/

Sunday, August 6, 2017

Difference between sort & orderBy in Spark.

orderBy is just an alias for the sort function and should give the same result.

The following is from the Spark source documentation:

/**
   * Returns a new Dataset sorted by the given expressions.
   * This is an alias of the `sort` function.
   *
   * @group typedrel
   * @since 2.0.0
   */
  @scala.annotation.varargs
  def orderBy(sortCol: String, sortCols: String*): Dataset[T] = sort(sortCol, sortCols : _*)
Both will order across partitions. To understand how Spark performs a sort, take a look at the explanation in the link below:

http://stackoverflow.com/questions/32887595/how-does-spark-achieve-sort-order
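
As a quick sanity check, here is a minimal sketch of my own (assuming a Spark 2.x shell with a SparkSession named spark) showing that sort and orderBy return the same ordering:

import spark.implicits._

// Tiny DataFrame just for illustration
val df = Seq((3, "c"), (1, "a"), (2, "b")).toDF("id", "name")

// Both calls produce the same globally ordered result
df.sort("id").show()
df.orderBy("id").show()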

If you would like to sort within each partition, you can use repartitionAndSortWithinPartitions.

https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/rdd/OrderedRDDFunctions.html#repartitionAndSortWithinPartitions(org.apache.spark.Partitioner)
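
A minimal sketch of how it can be used (my own example, assuming the usual spark-shell sc; keys end up sorted within each partition but not globally):

import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq((4, "d"), (2, "b"), (5, "e"), (3, "c"), (1, "a")))
val sortedWithinPartitions = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))

// glom groups each partition into an array so the per-partition order is visible
sortedWithinPartitions.glom().collect().foreach(part => println(part.mkString(", ")))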


More about Spark's orderBy
Sorting in Spark is a multi-phase process that requires shuffling:

the input RDD is sampled and this sample is used to compute boundaries for each output partition (sample followed by collect)
the input RDD is partitioned using a RangePartitioner with the boundaries computed in the first step (partitionBy)
each partition from the second step is sorted locally (mapPartitions)
When the data is collected, all that is left is to follow the order defined by the partitioner.

The above steps are clearly reflected in the debug string:

scala> val rdd = sc.parallelize(Seq(4, 2, 5, 3, 1))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at ...

scala> rdd.sortBy(identity).toDebugString
res1: String =
(6) MapPartitionsRDD[10] at sortBy at <console>:24 [] // Sort partitions
 |  ShuffledRDD[9] at sortBy at <console>:24 [] // Shuffle
 +-(8) MapPartitionsRDD[6] at sortBy at <console>:24 [] // Pre-shuffle steps
    |  ParallelCollectionRDD[0] at parallelize at <console>:21 [] // Parallelize


https://stackoverflow.com/questions/40603202/difference-between-sort-and-orderby-spark


Saturday, August 5, 2017

Apache Spark ML Sparse Vector vs Dense Vector

Sparse Vector vs Dense Vector


import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

// Create a dense vector (1.0, 0.0, 3.0).
Vector dv = Vectors.dense(1.0, 0.0, 3.0);
// Create a sparse vector (1.0, 0.0, 3.0) by specifying its indices and values corresponding to nonzero entries.
Vector sv = Vectors.sparse(3, new int[] {0, 2}, new double[] {1.0, 3.0});


A sparse vector is used when most of the values in the vector are zero, while a dense vector is used when most of the values are non-zero.

If you have to create a sparse vector, for example one equivalent to the dense vector (0.0, 3.0, 0.0, 4.0), use the following syntax:

Vector sparseVector = Vectors.sparse(4, new int[] {1, 3}, new double[] {3.0, 4.0});
The explanation below is pretty useful for getting a good understanding of the overall concept.

What is a Sparse Vector

A vector is a one-dimensional array of elements, so in a programming language a vector is typically implemented as a one-dimensional array. A vector is said to be sparse when many of its elements have zero values. When we write programs, it is not a good idea from a storage perspective to store all these zero values in the array.

So the best way to represent a sparse vector is to store only the locations and values of the non-zero entries.

Example: (3, 1.2), (2800, 6.3), (6000, 10.0), (50000, 5.7)

This denotes that at position:

3 the value is 1.2
2800 the value is 6.3
6000 the value is 10.0
50000 the value is 5.7

When you use a sparse vector in a programming language you will also need to specify a size; in the above example the sparse vector stores 4 non-zero entries.

Representation of Sparse Vector in Spark

The Vectors factory object in org.apache.spark.mllib.linalg has multiple methods to create your own dense and sparse Vectors.

The simplest way of creating one is by using:

sparse(int size, int[] indices, double[] values)

This method creates a sparse vector where the first argument is the size, the second is the indices where a value exists, and the last one is the values at these indices.

The rest of the elements of this vector have the value zero.

Example:

Let’s say we have to create the following vector {0.0, 2.0, 0.0, 0.0, 4.0}. By using the sparse vector API of Spark this can be created as stated below:

Vector sparseVector = Vectors.sparse(5, new int[] {1, 4}, new double[] {2.0, 4.0});

The same vector can also be created using the dense vector API:

Vector denseVector = Vectors.dense(0.0, 2.0, 0.0, 0.0, 4.0);
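
To double-check that the two representations hold the same values, here is a small Scala sketch (the snippets above use the Java API; this assumes the same mllib Vectors factory):

import org.apache.spark.mllib.linalg.Vectors

val sparseVector = Vectors.sparse(5, Array(1, 4), Array(2.0, 4.0))
val denseVector  = Vectors.dense(0.0, 2.0, 0.0, 0.0, 4.0)

// Both expand to 0.0, 2.0, 0.0, 0.0, 4.0
println(sparseVector.toArray.mkString(", "))
println(denseVector.toArray.mkString(", "))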

Apache Spark K-Means clustering

K.1: K-Means clustering:

This is how you run K-Means clustering using DataFrames in Spark.

These are the libs that you need to import:
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.SaveMode
import scala.sys.process._
// .toDF below also needs the SQL implicits in scope, e.g. import hqlContext.implicits._

//Pull data from Hive or any other DB (for this example we assume that the data is stored in Hive)

val query = "select query, page_impression from table_search"

//Now execute the query
val q1 = hqlContext.sql(query).select("query", "page_impression")

val rowRDD = q1.rdd.map
(r =>
(
r.getString(0),
r.getLong(1),
Vectors.dense(r.getLong(1))
)
)

rowRDD.cache()

val upper_cnt =  8000
val lower_cnt = 100
//select the training set

val trainSet = rowRDD.filter(r => r._2 < upper_cnt && r._2 >= lower_cnt)
val numClusters = 3
val numIterations = 20
val feature = trainSet.map(r => r._3)

val kMeansModel = KMeans.train(feature, numClusters, numIterations)

//Here we are segregating all the queries into 3 separate clusters

val predDF = trainSet.map(r => kMeansModel.predict(r._3) match {
  case 1 => (r._1, r._2, 0)
  case 2 => (r._1, r._2, 1)
  case 0 => (r._1, r._2, 2)
  case _ => (r._1, r._2, -1)
}).toDF("queries", "page_impression", "tier")


//Now saving the data to HDFS
predDF.coalesce(10).write.mode(SaveMode.Overwrite).parquet(final_part_dir)
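
After writing out the predictions, the model itself can also be sanity-checked. A hedged sketch (not from the original run), reusing the feature RDD and kMeansModel from above; clusterCenters and computeCost are part of the mllib KMeansModel API:

// Print the centre of each of the 3 clusters
kMeansModel.clusterCenters.foreach(println)

// Within Set Sum of Squared Errors: lower means tighter clusters
println("WSSSE = " + kMeansModel.computeCost(feature))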




K.2: Logistic regression example:

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.ml.classification.LogisticRegression

// Case class used below to hold the feature vector of the test set
// (.toDF needs the SQL implicits in scope, e.g. import sqlContext.implicits._ in the shell)
case class Feature(features: org.apache.spark.mllib.linalg.Vector)

val data = Seq((1.0, 0.52, 0.34), (0.0, 0.6, 0.43), (0.0, 1.9, 0.54), (1.0, 0.11, 0.11), (1.0, 0.222, 0.33), (1.0, 0.333, 0.99), (0.0, 0.314, 0.86), (0.0, 0.9888, 0.34), (1.0, 0.264, 0.55))

val df = sc.parallelize(data).toDF("label", "x1", "x2").cache()

val train_set = df.rdd.map(row => LabeledPoint(row.getAs[Double](0), Vectors.dense(row.getAs[Double](1), row.getAs[Double](2)))).toDF

train_set.show()

val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.3)

val lrModel = lr.fit(train_set)

//Prepare the test data
val test = df.select("x1", "x2").rdd.map(row => Feature(Vectors.dense(row.getAs[Double](0), row.getAs[Double](1)))).toDF("features")

val predictionsDF = lrModel.transform(test)
predictionsDF.show()


Run the Hive Queries on Spark

Case statements
This is how case statements can be written in DataFrames:
Hive Query:
select key, case when value is null then 0 else value end as new_value from users limit 10;
In DataFrames:

import org.apache.spark.sql.functions.when

val sqlQuery = "select key, value from users"
val data = hqlContext.sql(sqlQuery)
val caseStatement = data.withColumn("new_value", when(data("value").isNull, 0).otherwise(data("value")))
caseStatement.take(10)

Here the withColumn function creates one more column in addition to key and value, so if we call printSchema() on the caseStatement DataFrame it will return:

caseStatement.printSchema()
----key: int
----value: int
----new_value: int
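
For the simple "null becomes 0" case there is also a shorter alternative (my own sketch, not from the source blog) using coalesce, which picks the first non-null value:

import org.apache.spark.sql.functions.{coalesce, lit}

val caseStatement2 = data.withColumn("new_value", coalesce(data("value"), lit(0)))
caseStatement2.show(10)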





Source:
http://sparksofdata.blogspot.com/


SQL Aggregation functions: CUBE, ROLLUP & GROUPING SETS.



ROLLUP() explained in SQL

The SQL standard defines special functions that can be used in the GROUP BY clause: the grouping functions. These functions can be used to generate several groupings in a single clause. This can best be explained in SQL. Let's take ROLLUP() for instance:


The ROLLUP() grouping function provides N+1 groupings, where N is the number of arguments to the ROLLUP() function. Each grouping adds one more field from the ROLLUP() argument list. The results of the second query below might look something like this:

-- ROLLUP() with one argument
SELECT AUTHOR_ID, COUNT(*)
FROM BOOK
GROUP BY ROLLUP(AUTHOR_ID)


-- ROLLUP() with two arguments
SELECT AUTHOR_ID, PUBLISHED_IN, COUNT(*)
FROM BOOK
GROUP BY ROLLUP(AUTHOR_ID, PUBLISHED_IN)


+-----------+--------------+----------+
| AUTHOR_ID | PUBLISHED_IN | COUNT(*) |
+-----------+--------------+----------+
|         1 |         1945 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         1 |         1948 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         1 |         NULL |        2 | <- GROUP BY (AUTHOR_ID)
|         2 |         1988 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         2 |         1990 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         2 |         NULL |        2 | <- GROUP BY (AUTHOR_ID)
|      NULL |         NULL |        4 | <- GROUP BY ()
+-----------+--------------+----------+



CUBE() explained in SQL

CUBE() is different from ROLLUP() in the way that it doesn't just create N+1 groupings, it creates all 2^N possible combinations between all group fields in the CUBE() function argument list. Let's re-consider our second query from before:


-- CUBE() with two arguments
SELECT AUTHOR_ID, PUBLISHED_IN, COUNT(*)
FROM BOOK
GROUP BY CUBE(AUTHOR_ID, PUBLISHED_IN)


The results would then hold:
+-----------+--------------+----------+
| AUTHOR_ID | PUBLISHED_IN | COUNT(*) |
+-----------+--------------+----------+
|      NULL |         NULL |        4 | <- GROUP BY ()
|      NULL |         1945 |        1 | <- GROUP BY (PUBLISHED_IN)
|      NULL |         1948 |        1 | <- GROUP BY (PUBLISHED_IN)
|      NULL |         1988 |        1 | <- GROUP BY (PUBLISHED_IN)
|      NULL |         1990 |        1 | <- GROUP BY (PUBLISHED_IN)
|         1 |         NULL |        2 | <- GROUP BY (AUTHOR_ID)
|         1 |         1945 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         1 |         1948 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         2 |         NULL |        2 | <- GROUP BY (AUTHOR_ID)
|         2 |         1988 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
|         2 |         1990 |        1 | <- GROUP BY (AUTHOR_ID, PUBLISHED_IN)
+-----------+--------------+----------+



GROUPING SETS()

GROUPING SETS() is the generalised way to create multiple groupings. From our previous examples:
ROLLUP(AUTHOR_ID, PUBLISHED_IN) corresponds to GROUPING SETS((AUTHOR_ID, PUBLISHED_IN), (AUTHOR_ID), ())
CUBE(AUTHOR_ID, PUBLISHED_IN) corresponds to GROUPING SETS((AUTHOR_ID, PUBLISHED_IN), (AUTHOR_ID), (PUBLISHED_IN), ())
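
Since this blog is mostly about Spark, here is a hedged sketch of the same groupings through the DataFrame API (assuming a DataFrame named book with columns author_id and published_in; rollup and cube are available on DataFrames since Spark 1.4):

// Equivalent of GROUP BY ROLLUP(AUTHOR_ID, PUBLISHED_IN)
book.rollup("author_id", "published_in").count().show()

// Equivalent of GROUP BY CUBE(AUTHOR_ID, PUBLISHED_IN)
book.cube("author_id", "published_in").count().show()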



More explanation on the aggregation functions.

Aggregate functions can be nested:

SELECT
   AVG(
      MAX(SAL)
   )
FROM
   EMP
GROUP BY
   DEPTNO;

AVG(MAX(SAL))
-------------
   3616.66667

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |     7 |     4  (25)| 00:00:01 |
|   1 |  SORT AGGREGATE     |      |     1 |     7 |     4  (25)| 00:00:01 |
|   2 |   SORT GROUP BY     |      |     1 |     7 |     4  (25)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| EMP  |    14 |    98 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

The average of the maximum salary of each department is returned.

A query containing a nested aggregation requires one group by clause and returns one row.
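
As a side note for Spark users, nesting aggregates in a single expression is not allowed in Spark SQL, so the same result is obtained in two steps. A hedged sketch with a hypothetical emp DataFrame (deptno, sal):

import org.apache.spark.sql.functions.{avg, max}

// Step 1: maximum salary per department
val maxPerDept = emp.groupBy("deptno").agg(max("sal").as("max_sal"))

// Step 2: average of those maxima (one row, like AVG(MAX(SAL)))
maxPerDept.agg(avg("max_sal")).show()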

The CUBE, ROLLUP and GROUPING SETS functions are used in the GROUP BY clause to generate totals and subtotals.

SELECT
   DEPTNO,
   SUM(SAL)
FROM
   EMP
GROUP BY
   ROLLUP(DEPTNO);

    DEPTNO   SUM(SAL)
---------- ----------
        10       8750
        20      10875
        30       9400
                29025

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     3 |    21 |     4  (25)| 00:00:01 |
|   1 |  SORT GROUP BY ROLLUP|      |     3 |    21 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | EMP  |    14 |    98 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------

The sum of salaries is aggregated per department, and then an additional row containing the total of all employees' salaries is returned.

The ROLLUP has generated four rows:

One for each department (three rows)

One for the total of all groups

ROLLUP can have more than one dimension, generating grand totals and subtotals:

SELECT
   DEPTNO,
   JOB,
   SUM(SAL)
FROM
   EMP
GROUP BY
   ROLLUP(DEPTNO,JOB);

  DEPTNO JOB         SUM(SAL)
-------- --------- ----------
      10 CLERK           1300
      10 MANAGER         2450
      10 PRESIDENT       5000
      10                 8750
      20 CLERK           1900
      20 ANALYST         6000
      20 MANAGER         2975
      20                10875
      30 CLERK            950
      30 MANAGER         2850
      30 SALESMAN        5600
      30                 9400
                        29025

The sum generates a grand total, a subtotal per department and a sub-subtotal per department and job.

It is also possible to mix ROLLUP groups with normal groups in the GROUP BY clause:

SELECT
   DEPTNO,
   JOB,
   SUM(SAL)
FROM
   EMP
GROUP BY
   DEPTNO,ROLLUP(JOB);

  DEPTNO JOB         SUM(SAL)
-------- --------- ----------
      10 CLERK           1300
      10 MANAGER         2450
      10 PRESIDENT       5000
      10                 8750
      20 CLERK           1900
      20 ANALYST         6000
      20 MANAGER         2975
      20                10875
      30 CLERK            950
      30 MANAGER         2850
      30 SALESMAN        5600
      30                 9400

A subtotal per department and a sub-subtotal per department and job are returned. As DEPTNO is a standard GROUP BY expression, there is no grand total.

SELECT
   MAX(ENAME)
   KEEP
   (
      DENSE_RANK
         FIRST
      ORDER BY
         SAL DESC
   ) ENAME,
   DEPTNO,
   JOB,
   MAX(SAL) SAL
FROM
   EMP
GROUP BY
   CUBE(DEPTNO,JOB);

ENAME          DEPTNO JOB              SAL
---------- ---------- --------- ----------
KING                                  5000
MILLER                CLERK           1300
SCOTT                 ANALYST         3000
JONES                 MANAGER         2975
ALLEN                 SALESMAN        1600
KING                  PRESIDENT       5000
KING               10                 5000
MILLER             10 CLERK           1300
CLARK              10 MANAGER         2450
KING               10 PRESIDENT       5000
SCOTT              20                 3000
ADAMS              20 CLERK           1100
SCOTT              20 ANALYST         3000
JONES              20 MANAGER         2975
BLAKE              30                 2850
JAMES              30 CLERK            950
BLAKE              30 MANAGER         2850
ALLEN              30 SALESMAN        1600

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |    11 |   231 |     4  (25)| 00:00:01 |
|   1 |  SORT GROUP BY       |      |    11 |   231 |     4  (25)| 00:00:01 |
|   2 |   GENERATE CUBE      |      |    11 |   231 |     4  (25)| 00:00:01 |
|   3 |    SORT GROUP BY     |      |    11 |   231 |     4  (25)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| EMP  |    14 |   294 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------

CUBE generates nine rows for the various job/department combinations, three rows for the subtotal per department, five rows for the subtotal per job and one row for the grand total. For each job and department, the maximum salary is returned, and for this salary, the name of the corresponding employee is returned. In case of duplicates, the MAX function is applied to the name.

With this result set, a superaggregation is performed for each dimension, the department, the job, and the overall. In addition to the rows that are aggregated by ROLLUP, CUBE also produces a row for each job.

GROUPING SETS simplifies the management of the subtotals:

SELECT
   DEPTNO,
   JOB,
   MIN(SAL)
FROM
   EMP
GROUP BY
   GROUPING SETS
   (
      (JOB),
      (DEPTNO)
   );


    DEPTNO JOB         MIN(SAL)
---------- --------- ----------
           ANALYST         3000
           CLERK            800
           MANAGER         2450
           PRESIDENT       5000
           SALESMAN        1250
        10                 1300
        20                  800
        30                  950

This GROUPING SETS result is a subset of the CUBE result. The minimum salary in each job and the minimum salary in each department are returned.

Compare with:

SELECT
   CASE
      WHEN GROUPING_ID(JOB, DEPTNO)=1
      THEN 'Count per job'
      WHEN GROUPING_ID(JOB, DEPTNO)=2
      THEN 'Count per department'
   END " ",
   CASE
      WHEN GROUPING(JOB)=0
      THEN JOB
      ELSE '========='
   END JOB,
   CASE
      WHEN GROUPING(DEPTNO)=0
      THEN TO_CHAR(DEPTNO,'99999')
      ELSE '======'
   END DEPTNO,
   COUNT(*)
FROM
   EMP
GROUP BY
   CUBE(JOB, DEPTNO)
HAVING
   GROUPING_ID (JOB, DEPTNO) in (0,1,2);

                     JOB       DEPTNO   COUNT(*)
-------------------- --------- ------ ----------
                     CLERK         10          1
                     CLERK         20          2
                     CLERK         30          1
                     ANALYST       20          2
                     MANAGER       10          1
                     MANAGER       20          1
                     MANAGER       30          1
                     SALESMAN      30          4
                     PRESIDENT     10          1
Count per job        CLERK     ======          4
Count per job        ANALYST   ======          2
Count per job        MANAGER   ======          3
Count per job        SALESMAN  ======          4
Count per job        PRESIDENT ======          1
Count per department =========     10          3
Count per department =========     20          5
Count per department =========     30          6

CUBE generates counts in every possible dimension including overall counts. The HAVING clause evaluates the GROUPING_ID function. When no superaggregation occurs, it gets the value 0; when the first argument group is a subtotal, it gets 1 (2^0); when the second is a subtotal, it gets 2 (2^1). If it is a grand total for both the first and the second arguments, it gets 3 (2^0 + 2^1). Only the 0, 1 and 2 grouping ids are returned. GROUPING returns 1 when the column is summarized and 0 otherwise.
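
Spark SQL exposes a similar bit-encoded aggregation level through the grouping_id() function (Spark 2.x). A hedged sketch with a hypothetical emp DataFrame (job, deptno):

import org.apache.spark.sql.functions.{count, grouping_id, lit}

emp.cube("job", "deptno")
   .agg(count(lit(1)).as("cnt"), grouping_id().as("gid"))
   .filter("gid in (0, 1, 2)")
   .show()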

There is one more function related to grouping, GROUP_ID, which is useful when there are duplicate subtotals:

SELECT
   CASE
      WHEN GROUPING(ENAME)=0
      THEN NULL
      WHEN GROUP_ID()=0
      THEN 'SUM'
      WHEN GROUP_ID()=1
      THEN 'AVG'
   END TYPE,
   ENAME,
   DEPTNO,
   CASE
      WHEN GROUPING(ENAME)=0
      THEN NULL
      WHEN GROUP_ID()=0
      THEN SUM(SAL)
      WHEN GROUP_ID()=1
      THEN AVG(SAL)
   END VALUE
FROM
   EMP
GROUP BY
   GROUPING SETS
   (
      (ENAME,DEPTNO),
      (DEPTNO),
      (DEPTNO),
      (),
      ()
   )
ORDER BY
   DEPTNO,
   ENAME,
   GROUP_ID();

TYP ENAME          DEPTNO      VALUE
--- ---------- ---------- ----------
    CLARK              10
    KING               10
    MILLER             10

SUM                    10       8750
AVG                    10 2916.66667

    ADAMS              20
    FORD               20
    JONES              20
    SCOTT              20
    SMITH              20

SUM                    20      10875
AVG                    20       2175

    ALLEN              30
    BLAKE              30
    JAMES              30
    MARTIN             30
    TURNER             30
    WARD               30

SUM                    30       9400
AVG                    30 1566.66667

SUM                            29025
AVG                       2073.21429