Tuesday, December 9, 2008

3 sum problem

The problem is defined as: given an array, find elements a, b, c such that c = a + b.

1. Sort the array
2. for each element c=arr[i] in the array
3. left=0; right=i-1;
4. while left<right
5. if c==arr[left]+arr[right]
6. then add (arr[left], arr[right], c) to the output
7. else if c<arr[left]+arr[right] then right--
8. else left++
9. end

#include <iostream>
#include <cstdlib>

using namespace std;

int compare (const void * a, const void * b);
void sum3(int arr[], int n);

int main(){
    int arr[]={2,3,5,1,4,9};
    int n = sizeof(arr)/sizeof(arr[0]);
    qsort(arr, n, sizeof(int), compare);
    sum3(arr, n);
    return 0;
}

void sum3(int arr[], int n){
    // For each candidate c=arr[i], scan the elements before it
    // with two pointers.
    for(int i=n-1; i>=0; i--){
        int l=0, r=i-1;
        while(l<r){
            int curr = arr[l] + arr[r];
            if(arr[i] == curr){
                cout << arr[l] << " + " << arr[r] << " = " << arr[i] << endl;
                l++; r--;
            }
            else if(arr[i] < curr) r--;
            else l++;
        }
    }
}

int compare (const void * a, const void * b){
    return ( *(int*)a - *(int*)b );
}

Complexity of the above procedure is O(n^2). Apparently no substantially better solution for the problem is known. Also, a number of problems can be reduced to the 3 sum problem; these are called 3SUM-hard.

Implement a queue using 2 stacks

Use 2 stacks, S and T

1. Enqueue
S.push(x);

2. Dequeue
if T.count==0
    while (S.count>0) T.push(S.pop());
return T.pop();


Each element is pushed and popped at most twice (once on S, once on T), so the operations take O(1) amortized time.
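As a sketch of the idea (the class name and the int element type are illustrative), the two stacks can be wrapped like this in C++:

```cpp
#include <cassert>
#include <stack>

// A queue built from two stacks: S receives new elements,
// T holds elements in reversed (i.e. FIFO) order for removal.
class TwoStackQueue {
    std::stack<int> S, T;
public:
    void enqueue(int x) { S.push(x); }

    int dequeue() {
        if (T.empty()) {
            // Reverse S into T; each element makes this move at most once.
            while (!S.empty()) { T.push(S.top()); S.pop(); }
        }
        int front = T.top();
        T.pop();
        return front;
    }
};
```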

C SC 545 Exam Fall 08

Kth smallest in merge of 2 sorted arrays

The solution goes like this

A[1...m] B[1...n]

1. Compare A[k/2] and B[k/2]
if A[k/2] < B[k/2]

then the kth smallest can't be in A[1..k/2] or in B[k/2+1..n]: every element of A[1..k/2] is smaller than the kth smallest, and every element of B[k/2+1..n] is larger than it.

So we discard these elements, and now search for the (k/2)th smallest number in the two remaining arrays.

If the reverse is true, we do the same operation but with B and A swapped.

2. if m < k

we discard B[1...k-m-1] (all of which are smaller than the kth smallest, since A can contribute at most m elements below it)

and look for the (m+1)th (= k-(k-m-1)) smallest element in the remaining arrays.
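The discard step above can be sketched recursively; kthSmallest and its offset parameters are illustrative names, k is 1-based, and each call throws away roughly k/2 elements:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// kth (1-based) smallest element in the merge of two sorted arrays.
// i and j are the start offsets of the still-live parts of A and B.
int kthSmallest(const std::vector<int>& A, int i,
                const std::vector<int>& B, int j, int k) {
    if (i >= (int)A.size()) return B[j + k - 1];   // A exhausted
    if (j >= (int)B.size()) return A[i + k - 1];   // B exhausted
    if (k == 1) return std::min(A[i], B[j]);

    // Probe k/2 elements into each array (clamped to what's left).
    int half = k / 2;
    int ia = std::min(i + half, (int)A.size()) - 1;
    int jb = std::min(j + half, (int)B.size()) - 1;

    if (A[ia] < B[jb]) {
        // A[i..ia] cannot contain the kth smallest; discard it.
        int removed = ia - i + 1;
        return kthSmallest(A, ia + 1, B, j, k - removed);
    } else {
        int removed = jb - j + 1;
        return kthSmallest(A, i, B, jb + 1, k - removed);
    }
}
```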

const trick

1. const int* x = &v;

Declares a pointer to a constant int (v and w here are plain ints; note that initializing a pointer with a literal like 5 would not compile).
So an assignment *x=10; will fail, while reassigning the pointer (x = &w;) will pass.

2. int* const x = &v;
Declares a constant pointer to a (modifiable) int.
So *x=10; will PASS while x = &w; will fail.

3. const int * == int const *
The above 2 mean the same thing.

4. const int const *x = &v;

Using 3, statement 4 declares a pointer to a constant (same as 1). (The duplicated const is redundant, and some compilers reject it.)
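A small sketch tying the cases together (v, w, and constDemo are illustrative names; the commented-out lines are the ones that fail to compile):

```cpp
#include <cassert>

// v and w are just illustrative ints.
int constDemo() {
    int v = 5, w = 10;

    const int* p1 = &v;   // 1. pointer to const
    // *p1 = 10;          // fails: cannot modify through p1
    p1 = &w;              // passes: the pointer itself may change

    int* const p2 = &v;   // 2. const pointer
    *p2 = 20;             // passes: the pointee may change
    // p2 = &w;           // fails: the pointer may not change

    int const* p3 = &v;   // 3. same as case 1

    return *p2 + *p1 + *p3;  // v is now 20, *p1 is w (10), *p3 is v (20)
}
```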

Wednesday, December 3, 2008

New vs. Malloc

New calls the "operator new(...)" function to allocate memory (like what malloc does), but the allocation is followed by a call to the appropriate constructor. Also, new is type-safe, while malloc is not (it returns a void*).
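A minimal sketch of the difference (Counter and its static count are illustrative): the constructor runs for new but not for malloc.

```cpp
#include <cstdlib>

// Counter's constructor counts how often it has run.
struct Counter {
    static int constructed;
    int value;
    Counter() : value(42) { ++constructed; }
};
int Counter::constructed = 0;

int newVsMalloc() {
    Counter* a = new Counter;                              // allocation + constructor
    Counter* b = (Counter*) std::malloc(sizeof(Counter));  // raw bytes, no constructor
    int ran = Counter::constructed;                        // 1, not 2
    // a->value is 42 here; b->value is uninitialized garbage.
    delete a;
    std::free(b);
    return ran;
}
```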

Static function in C

A static function is a function with the static qualifier applied:

static void foo ( void )
{
    /* Blah blah */
}
What it does is restrict visibility of the function to the translation unit in which it's declared. Functions are implicitly declared as extern by default, which means they're visible across translation units.


Monday, December 1, 2008

Maximum Sum Sub array

This is an interesting problem which can be solved in a variety of ways. It has a good chunk of a chapter devoted to it in Programming Pearls by Jon Bentley.

Method 1. Brute Force O(n^2)

This involves taking each index i between 0 and len-1 and, for every other index j >= i, calculating the sum of arr[i..j] while keeping track of the maximum.

private static int sumBruteForce(int[] arr){
    int max = Integer.MIN_VALUE;
    int maxi = 0;
    int maxj = 0;
    for(int i = 0; i < arr.length; i++){
        int sum = 0;
        for(int j = i; j < arr.length; j++){
            sum += arr[j];
            if(sum > max){ max = sum; maxi = i; maxj = j; }
        }
    }
    System.out.println(maxi + " " + maxj);
    return max;
}

Method 2: Divide and Conquer: O(n log n)
This involves dividing the array recursively into three parts — left, right, and mid (a subarray crossing the midpoint) — and comparing the max sums of the three.

Calculating max(left) and max(right) is trivial using recursion. For the mid part, we know its left bound (l) lies in the left half and its right bound (r) lies in the right half. So we calculate l s.t. the sum of elements between l and mid is maximum, and r s.t. the sum is maximum for elements between mid and r.

private static int sumDivConquer(int[] arr, int l, int h){
    if(l > h) return 0;
    if(l == h) return arr[l];

    int m = (l + h) / 2;

    int max = sumDivConquer(arr, l, m);
    max = Math.max(max, sumDivConquer(arr, m+1, h));

    // Best sum crossing the midpoint: extend left from m, right from m+1.
    int L = Integer.MIN_VALUE;
    int S = 0;
    for(int i = m; i >= l; i--){
        S += arr[i];
        L = S > L ? S : L;
    }
    int R = Integer.MIN_VALUE;
    S = 0;
    for(int i = m+1; i <= h; i++){
        S += arr[i];
        R = S > R ? S : R;
    }
    max = Math.max(max, L + R);

    return max;
}

3. Dynamic programming O(n)

An optimized DP solution takes O(n).

The naive DP solution acts exactly like method 1 and generates a matrix A[i,j] containing sum of elements with left bound i and right bound j.

The optimized DP constructs a single dimension table of array size, with
table[i]: Represents the sum of subarray with maximum sum terminating at i;

for i=0, table[0]=arr[0]
for i>0, table[i]= Max{ arr[i], arr[i]+table[i-1]}

Once we have the table, finding the max sum takes only a single pass through it; we can speed the procedure up by keeping track of the max side by side as the table is built.

private static int sumDP(int[] arr){
    int[] table = new int[arr.length];
    table[0] = arr[0];
    for(int i = 1; i < arr.length; i++){
        table[i] = Math.max(arr[i], arr[i] + table[i-1]);
    }
    int max = Integer.MIN_VALUE;
    for(int i = 0; i < arr.length; i++){
        max = Math.max(max, table[i]);
    }
    return max;
}

A simpler algorithm based on the same idea as the DP method is called Kadane's algorithm and it works without the need of maintaining the auxiliary table.

The code is derived from here

private static int sumKadane(int[] arr){
    int max = Integer.MIN_VALUE;
    int a, b; a = b = 0;   // bounds of the best subarray seen so far
    int curr = 0;
    int aa = 0;            // start of the current candidate subarray
    for (int bb = 0; bb < arr.length; bb++){
        curr = curr + arr[bb];
        if (curr > max) {
            max = curr;
            a = aa; b = bb;
        }
        if (curr < 0) {
            curr = 0;
            aa = bb + 1;
        }
    }
    System.out.println(a + " " + b);
    return max;
}

Counting Number of set bits in a number

1. Method 1

Complexity = O(b), where b is the number of bits in n.

int numOfOnes(int n){
    int count = 0;
    while (n != 0){
        if ((n & 1) == 1)
            count++;
        n >>= 1;
    }
    return count;
}

Method 2:
The method works as follows:

The operation:

n = n & (n - 1)

resets the last set bit of n.

So the number of times the above operation can be done before n is equal to 0 is the number of set bits in n originally.

Complexity = O(m), where m is the number of set bits in n.

int numOfOnes2(int n){
    int count = 0;
    while (n != 0){
        n = n & (n - 1);
        count++;
    }
    return count;
}

Also note that if count above is 1, then number n is a power of 2.
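Following that note, the same n & (n - 1) operation gives a direct power-of-two test (a small sketch):

```cpp
#include <cassert>

// n is a power of 2 exactly when it has a single set bit,
// i.e. when clearing the lowest set bit leaves 0.
bool isPowerOfTwo(unsigned int n) {
    return n != 0 && (n & (n - 1)) == 0;
}
```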

Generating Combinations

The algorithm for this is very similar to the algorithm for generating permutations.

1. Take item i from the elements and append it to the output array
2. Display the output
3. Generate combinations for the rest of the array (elements after i), then remove item i and continue

public void combine(String str){
    int l = str.length();
    StringBuilder out = new StringBuilder();
    char[] in = str.toCharArray();
    doCombine(in, out, l, 0);
}

private void doCombine(char[] in, StringBuilder out, int n, int start){
    for(int i = start; i < n; i++){
        out.append(in[i]);
        System.out.println(out);
        doCombine(in, out, n, i+1);
        out.deleteCharAt(out.length() - 1);
    }
}


Generating Permutations

This has been a problem I always had difficulty with. The only resort I had till now was to memorize the solution and spit it out when needed. But now, with an algorithms class behind me and a little more insight into the problem, I can at last cook up the logic here.

So, the question is:
Given n items (numbers, chars), generate/print all possible permutations of them

1. Take a item i and put it in the perm array
2. Generate Permutation for rest of the n-1 elements (Print perm array when n==0)
3. Put back Item i in elements

public class Permute {
    public void permute(String str){
        int l = str.length();
        boolean[] used = new boolean[l];
        StringBuilder out = new StringBuilder();
        char[] in = str.toCharArray();
        doPermute(in, out, used, l, 0);
    }

    private void doPermute(char[] in, StringBuilder out, boolean[] used, int n,
            int level) {
        if (level == n){
            System.out.println(out);
            return;
        }
        for(int i = 0; i < n; i++){
            if(used[i]) continue;
            out.append(in[i]);
            used[i] = true;
            doPermute(in, out, used, n, level+1);
            used[i] = false;
            out.deleteCharAt(out.length() - 1);
        }
    }
}


1. Programming Interviews Exposed
2. http://www.paulgriffiths.net/program/c/perm.php

Sunday, November 16, 2008


Shuffling is a very interesting programming problem. Almost everybody can come up with a good algorithm using a simple rand() function, but it gets a little tricky when one has to perform an in-place shuffle (i.e. without using any extra memory).

Knuth's algorithm, described in The Art of Computer Programming, Vol. 2 (and based on the Fisher-Yates algorithm), is regarded as one of the best-known algorithms for the problem.

The description on wikipedia is a little clearer and goes like this:
1. Let A1 := 1, A2 := 2 and so on up to AN := N, and let n := N.
2. Pick a random number k between 1 and n inclusive.
3. If k ≠ n, swap the values of Ak and An.
4. Decrease n by one.
5. Repeat from step 2 until n is less than 2.

public static void shuffle (int[] array)
{
    Random rng = new Random(); // i.e., java.util.Random.

    // The number of items left to shuffle (loop invariant).
    int n = array.length;

    while (n > 1)
    {
        int k = rng.nextInt(n); // 0 <= k < n.
        n--; // n is now the last pertinent index;
        // swap array[n] with array[k] (does nothing if k == n).
        int temp = array[n];
        array[n] = array[k];
        array[k] = temp;
    }
}

Alternately, instead of going n to 1 we can do a forward pass like here:

int N = array.length;
for (int i = 0; i < N; i++)
{
    int r = i + (int) (Math.random() * (N - i)); // between i and N-1
    int t = array[r];
    array[r] = array[i];
    array[i] = t;
}

An alternative method is to assign each element of the set to be shuffled a random number, and then sort the set according to these numbers. This may be faster in practice, despite having worse asymptotic time complexity (O(n log n) vs. O(n)).

Like the Fisher-Yates shuffle, this method will also produce unbiased results if correctly implemented, and may be more tolerant of certain kinds of bias in the random numbers. However, care must be taken to ensure that the assigned random numbers are never duplicated, since sorting algorithms in general won't order elements randomly in case of a tie.
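A sketch of this sort-based method (rand() keys are used purely for illustration; as the paragraph above notes, duplicate keys would bias the result, and a real implementation needs a better source of randomness):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <utility>
#include <vector>

// Tag each element with a random key, then sort by key.
std::vector<int> sortShuffle(std::vector<int> items) {
    std::vector< std::pair<int, int> > keyed;
    for (size_t i = 0; i < items.size(); ++i)
        keyed.push_back(std::make_pair(std::rand(), items[i]));
    std::sort(keyed.begin(), keyed.end());   // orders by the random key
    for (size_t i = 0; i < items.size(); ++i)
        items[i] = keyed[i].second;
    return items;
}
```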

Wikipedia also describes an interesting algorithm, very similar to Knuth's, called Sattolo's algorithm, for generating uniformly distributed cyclic permutations. The only difference between the two algorithms is that in Sattolo's algorithm, in step 2, the random number k is chosen from the range between 1 and n−1 (rather than between 1 and n) inclusive.

To turn the Java example above into an example of Sattolo's algorithm, simply replace rng.nextInt(n) with rng.nextInt(n-1) in the code. This simple change modifies the algorithm so that the resulting permutation always consists of a single cycle.

1. TAOCP Vol 2 Section 3.4.5 by Donald Knuth
2. http://en.wikipedia.org/wiki/Knuth_shuffle

Thursday, November 13, 2008

Finding a loop in a singly linked list

There are three "decent" solutions for it

1. Pass through the list, using a hash set to store the visited nodes; if a node is seen a second time, there's a loop.

boolean hasLoop(Node startNode){
    HashSet<Node> nodesSeen = new HashSet<Node>();
    Node currentNode = startNode;
    while (currentNode != null) {
        if (nodesSeen.contains(currentNode)) return true;
        nodesSeen.add(currentNode);
        currentNode = currentNode.next;
    }
    return false;
}

2. Floyd's Cycle-Finding Algorithm

Simultaneously go through the list by ones (slow iterator) and by twos (fast iterator). If there is a loop the fast iterator will go around that loop twice as fast as the slow iterator. The fast iterator will lap the slow iterator within a single pass through the cycle. Detecting a loop is then just detecting that the slow iterator has been lapped by the fast iterator.

boolean hasLoop(Node startNode){
    Node slowNode = startNode;
    Node fastNode = startNode;
    while (fastNode != null && fastNode.next != null){
        slowNode = slowNode.next;
        fastNode = fastNode.next.next;
        if (slowNode == fastNode) return true;
    }
    return false;
}

Alternately, in C (p is the slow pointer, q the fast one):

p = q = head;
do {
    p = p->next;
    q = (q == NULL || q->next == NULL) ? NULL : q->next->next;
} while (p != q && q != NULL);

if (q != NULL)   /* p == q inside the cycle */
    loop exists
else
    no loop

To find the smallest index where the cycle starts:

// To find the smallest index where the cycle starts
// p is set from the last algorithm (the meeting point); q restarts at head
q = head;
int index = 0;
while (p != q){
    p = p->next;
    q = q->next;
    index++;
}
return index;

To find cycle length

// p and q are set from the last algorithm (both at the meeting point);
// walk q once around the cycle until it meets p again
int len = 0;
do {
    q = q->next;
    len++;
} while (p != q);
return len;

See the reasoning behind the len and index functions on the Wikipedia page.

3. Brent's Algorithm

Here p moves through the list one node at a time, while q stays put and "teleports" to p's position each time p has taken a power-of-two number of steps. If there is a cycle, p meets q inside it, and len ends up holding the cycle length.

q = head; p = head->next;
power = 1; len = 1;
while (p != q) {
    if (power == len){    // teleport q to p and double the window
        q = p;
        power *= 2; len = 0;
    }
    p = p->next; len++;
    if (p == NULL) return (no cycle);
}
return (cycle), len;      // len is the cycle length

For finding the index (the start of the cycle), with len known, start both pointers at the head, move q len steps ahead, and advance them in step:

p = q = head;
for (i = 0; i < len; i++) q = q->next;
index = 0;
while (p != q) {
    p = p->next; q = q->next; index++;
}
return index;


Delete a node in a singly linked list

The solution to this is to copy the data from the next node into this node and delete the next node! Of course, this won't work if the node to be deleted is the last node; mark it as a dummy in that case. If you have a circular linked list, then this might be all the more interesting.
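A sketch of the trick (Node and deleteNode are illustrative names; the precondition is that the node is not the last one):

```cpp
#include <cassert>

struct Node {
    int data;
    Node* next;
};

// "Deletes" the node pointed to by p without a pointer to its
// predecessor, by copying the successor's data and unlinking it.
// Precondition: p is not the last node.
void deleteNode(Node* p) {
    Node* victim = p->next;
    p->data = victim->data;   // overwrite with successor's data
    p->next = victim->next;   // unlink the successor
    delete victim;
}
```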

Tuesday, November 11, 2008

Two problems for the day

Robotic Arm movement: http://icpcres.ecs.baylor.edu/onlinejudge/index.php?option=com_onlinejudge&Itemid=8&category=3&page=show_problem&problem=37

Binary Search: http://codekata.pragprog.com/2007/01/kata_two_karate.html

To be updated...

Monday, November 10, 2008

Optimizing Loops

A few ways to optimize loops

Iterating through loops has the following structure in many cases:

for(int k=0; k<someVar.length(); k++)

Check it out: you execute the k=0 statement once, then you execute the comparison (including the length() method call) n+1 times, and you increment k n times. This all adds up to a lot of executions. To make it go faster:

int length = someVar.length();
for(int k=0; k<length; k++)

Taking the method call to length() out of the loop will increase the loop's speed slightly. However, you're still looking the length variable up n+1 times in the comparison. So try it this way instead:

int length = someVar.length();
for(int k=length; k>0; --k)

This will iterate backwards. What that means is that now you won't be pulling up two variables in the comparison; you'll only compare one variable to 0. With this form it also isn't strictly necessary to hoist length() out of the loop statement, since the call to length() is executed only once, in the initialization.

Monday, November 3, 2008

Virtual Destructors

If an object (with a non-virtual destructor) is destroyed explicitly by applying the delete operator to a base-class pointer to the object, the base-class destructor function (matching the pointer type) is called on the object.

There is a simple solution to this problem: declare a virtual base-class destructor. This makes all derived-class destructors virtual even though they don't have the same name as the base-class destructor. Now, if the object in the hierarchy is destroyed explicitly by applying the delete operator to a base-class pointer to a derived-class object, the destructor for the appropriate class is called.

Virtual constructor: Constructors cannot be virtual. Declaring a constructor as a virtual function is a syntax error.

To be completed .. (Example)
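A minimal sketch of the behaviour (Base, Derived, and the flag are illustrative names; the flag records whether the derived-class destructor ran when deleting through a base-class pointer):

```cpp
#include <cassert>

static bool derivedDtorRan = false;

struct Base {
    virtual ~Base() {}    // virtual: deleting via Base* is now safe
};

struct Derived : Base {
    ~Derived() { derivedDtorRan = true; }
};

bool destroyViaBasePointer() {
    derivedDtorRan = false;
    Base* p = new Derived;
    delete p;             // calls ~Derived(), then ~Base()
    return derivedDtorRan;
}
```

Without the virtual on ~Base(), the delete above would invoke only the base-class destructor (formally, undefined behaviour).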


Thursday, October 23, 2008

Amortized Analysis

Amortized analysis refers to finding the average running time per operation over a worst-case sequence of operations.

The method requires knowledge of which series of operations are possible. This is most commonly the case with data structures, which have state that persists between operations. The basic idea is that a worst case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.

As a simple example, in a specific implementation of the dynamic array, we double the size of the array each time it fills up. Because of this, array reallocation may be required, and in the worst case an insertion may require O(n). However, a sequence of n insertions can always be done in O(n) time, so the amortized time per operation is O(n) / n = O(1).
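The doubling argument can be checked with a small sketch (the class and its counters are hypothetical): n appends cause fewer than 2n element copies in total, which is the O(1) amortized bound.

```cpp
#include <cassert>

// A toy doubling array that counts how many element copies
// its reallocations have performed in total.
struct DoublingArray {
    int* data;
    int size, capacity, copies;

    DoublingArray() : data(new int[1]), size(0), capacity(1), copies(0) {}
    ~DoublingArray() { delete[] data; }

    void push(int x) {
        if (size == capacity) {
            capacity *= 2;
            int* bigger = new int[capacity];
            for (int i = 0; i < size; ++i) bigger[i] = data[i]; // O(n) worst case
            copies += size;
            delete[] data;
            data = bigger;
        }
        data[size++] = x;
    }
};
```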

The approach is going to be to somehow assign an artificial cost to each operation in the sequence. This artificial cost is called the _amortized cost_ of an operation. The key property required of amortized cost is that the total real cost of the sequence should be bounded by the total of the amortized costs of all the operations.

There are several techniques used in amortized analysis:

* Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of n operations, then calculates the average cost to be T(n) / n.

* Accounting method determines the individual cost of each operation, combining its immediate execution time and its influence on the running time of future operations. Usually, many short-running operations accumulate a "debt" of unfavorable state in small increments, while rare long-running operations decrease it drastically.

* Potential method is like the accounting method, but overcharges operations early to compensate for undercharges later.


Sunday, September 14, 2008

GCD of 2 numbers

The algorithm reduces the problem of finding the GCD by repeatedly applying these identities:

  1. gcd(0, v) = v, because everything divides zero, and v is the largest number that divides v. Similarly, gcd(u, 0) = u. gcd(0, 0) is not defined.
  2. If u and v are both even, then gcd(u, v) = 2·gcd(u/2, v/2), because 2 is a common divisor.
  3. If u is even and v is odd, then gcd(u, v) = gcd(u/2, v), because 2 is not a common divisor. Similarly, if u is odd and v is even, then gcd(u, v) = gcd(u, v/2).
  4. If u and v are both odd, and u ≥ v, then gcd(u, v) = gcd((u−v)/2, v). If both are odd and u < v, then gcd(u, v) = gcd((v−u)/2, u). These are combinations of one step of the simple Euclidean algorithm, which uses subtraction at each step, and an application of step 3 above. The division by 2 results in an integer because the difference of two odd numbers is even.
  5. Repeat steps 3–4 until u = v, or (one more step) until u = 0. In either case, the result is 2^k·v, where k is the number of common factors of 2 found in step 2.

Since this definition is tail-recursive, a loop can be used to replace the recursion.

unsigned int gcd(unsigned int u, unsigned int v)
{
    int shift;

    /* GCD(0,x) := x */
    if (u == 0 || v == 0)
        return u | v;

    /* Let shift := lg K, where K is the greatest power of 2
       dividing both u and v. */
    for (shift = 0; ((u | v) & 1) == 0; ++shift) {
        u >>= 1;
        v >>= 1;
    }

    while ((u & 1) == 0)
        u >>= 1;

    /* From here on, u is always odd. */
    do {
        while ((v & 1) == 0) /* Loop X */
            v >>= 1;

        /* Now u and v are both odd, so diff(u, v) is even.
           Let u = min(u, v), v = diff(u, v)/2. */
        if (u <= v) {
            v -= u;
        } else {
            unsigned int diff = u - v;
            u = v;
            v = diff;
        }
        v >>= 1;
    } while (v != 0);

    return u << shift;   /* restore the common factors of 2 */
}

Sunday, September 7, 2008

Strategy Pattern

The strategy pattern (also known as the policy pattern) allows algorithms to be selected at runtime.

The strategy pattern defines a family of algorithms, encapsulates each one, and makes them interchangeable. It lets the strategy (algorithm) vary independently from the clients that use it.

The pattern uses composition instead of inheritance thus giving the flexibility to change the algorithm dynamically by a simple setStrategy() call. Each algorithm is given its own class.

// StrategyExample test application
public class StrategyExample {

    public static void main(String[] args) {
        Context context;

        context = new Context(new ConcreteStrategyA());
        context.execute();

        context.setStrategy(new ConcreteStrategyB());
        context.execute();
    }
}

interface IStrategy {
    void execute();
}

class ConcreteStrategyA implements IStrategy {
    public void execute() {
        System.out.println( "Called ConcreteStrategyA.execute()" );
    }
}

class ConcreteStrategyB implements IStrategy {
    public void execute() {
        System.out.println( "Called ConcreteStrategyB.execute()" );
    }
}

class Context {
    IStrategy strategy;

    // Constructor
    public Context(IStrategy strategy) {
        this.strategy = strategy;
    }

    public void setStrategy(IStrategy is){
        this.strategy = is;
    }

    public void execute() {
        strategy.execute();
    }
}
(Example based on the example from wikipedia)
In this example the Context class execute method calls the execute method of the IStrategy class to do some task. The IStrategy interface is implemented by ConcreteStrategyA and ConcreteStrategyB, and the Context's class's method setStrategy can be called anytime to change the strategy from A to B, so that the execute function can switch the algorithm is uses whenever we change the strategy object.

Monday, August 18, 2008


From wikipedia:
Memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously-processed inputs. Memoization is also used in Dynamic Programming.

Memoization typically requires maintaining a look up table where calculated values of previous function calls are kept.

Naive examples where memoization can help:
1. Factorial
2. Fibonacci Series

I will take up a small problem to show the technique:
Let's say we have to calculate the value of a function A which is defined by:
A0 = 1, and An = A[n/p] + A[n/q] for all n >= 1, where [x] denotes the floor function of x.

for a given value of n.

A naive implementation (calc1 below), together with a memoized one (calc2), looks like this:

using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace TC1 {
    class Program {
        Dictionary<long, long> mem = new Dictionary<long, long>();

        long calc1 (long n, int p, int q) {
            if (n == 0)
                return 1;
            long np = (long) Math.Floor((double) n / p);
            long nq = (long) Math.Floor((double) n / q);
            return calc1(np, p, q) + calc1(nq, p, q);
        }

        long calc2 (long n, int p, int q) {
            if (n == 0)
                return 1;
            long val;
            if (mem.TryGetValue(n, out val)) {
                return val;
            } else {
                long np = (long) Math.Floor((double) n / p);
                long nq = (long) Math.Floor((double) n / q);
                val = calc2(np, p, q) + calc2(nq, p, q);
                mem.Add(n, val);
                return val;
            }
        }

        static void Main(string [] args) {
            Program p = new Program();
            Stopwatch w = new Stopwatch();
            w.Start();
            Console.WriteLine(p.calc1(10000000, 2, 3));
            Console.WriteLine(w.ElapsedMilliseconds);
            w.Reset();
            w.Start();
            Console.WriteLine(p.calc2(10000000, 2, 3));
            Console.WriteLine(w.ElapsedMilliseconds);
        }
    }
}
First of all, this is not the best way to benchmark the two techniques, but it does give a good general indication. As the value of n increases, the memoized version is faster by more than a factor of 10 in many cases.

However, in some cases you have to tread carefully when there is a space constraint too. In most cases, the problem can be solved by using a threshold above which no memoization is done.

Saturday, August 2, 2008

First Post

This is the blog which I have started to keep track of my efforts to become a better programmer. The focus of the blog will vary from learning new languages to algorithms, design patterns, etc.

Wednesday, July 30, 2008

Test Post

Test Post

static void Main()
{
    Console.WriteLine("Testing 1<2 3");
}
