SPOJ Problem: Adjacent Bit Counts

combinatorics, number-theory

A Little Background

I was trying to solve this problem on SPOJ. It’s a dynamic-programming problem. I tried to find some recursive relation. I spent hours. But no breakthrough.

I even tried looking for hints in the comments, and I came to know it's a 3D dynamic-programming problem. Some users were even able to reduce it to a 2D approach.

Then I realized I needed to rethink the entire approach. So somehow, I started treating it like a combinatorics problem and I landed on a solution: a formula that guarantees linear-time execution. I love moments like these, when you solve an interesting problem with an uncommon approach. And that's why I am writing this post.

The Problem

The problem involves dealing with bit strings. A bit string is basically a sequence $x_1, x_2, \ldots, x_n$ where each term $x_i$ can have value either $0$ or $1$.

Usually, bit strings are represented without commas. So the bit string $(1, 0, 1, 1, 0)$ is abbreviated as $10110$.

Now we define a function $\operatorname{fun}$ which takes a bit string $x = x_1 x_2 \ldots x_n$ and gives the output

$$\operatorname{fun}(x) = \sum_{i=1}^{n-1} x_i\, x_{i+1} = x_1 x_2 + x_2 x_3 + \cdots + x_{n-1} x_n.$$

This function is what they call the adjacent bit count of $x$. It basically tells us the number of times the bit string $11$ appears in $x$ (overlaps included).

E.g. $\operatorname{fun}(011101101) = 3$, $\operatorname{fun}(111101101) = 4$, $\operatorname{fun}(010101010) = 0$, $\operatorname{fun}(1111) = 3$.
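For concreteness, the definition is easy to check in a few lines of code. The helper below is my own illustration (not part of the problem statement):

```cpp
#include <string>

// Adjacent bit count: the number of positions i where bits i and i+1 are
// both 1, i.e., the number of (overlapping) occurrences of "11".
int adjacentBitCount(const std::string &x) {
    int count = 0;
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
        if (x[i] == '1' && x[i + 1] == '1')
            ++count;
    return count;
}
```

For instance, `adjacentBitCount("011101101")` returns 3, matching the first example above.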

Now we’re ready to state our problem.

Given positive integers $n$ and $k$, find the number of bit strings $x$ of length $n$ that satisfy $\operatorname{fun}(x) = k$.

For example, let $n = 5$ and $k = 2$. The possible bit strings are $11100$, $01110$, $00111$, $10111$, $11101$, $11011$. So our answer is $6$.
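Since $n$ is tiny here, the answer is easy to verify by brute force. A throwaway sketch (mine, not from the original post): encode each string as an integer mask, and note that `mask & (mask >> 1)` has a set bit exactly at each adjacent pair of 1s.

```cpp
// Count bit strings of length n with adjacent bit count k by trying all 2^n masks.
// Bit i of (mask & (mask >> 1)) is set exactly when bits i and i+1 of mask are
// both set, so the popcount of that value is the adjacent bit count.
int bruteForce(int n, int k) {
    int total = 0;
    for (int mask = 0; mask < (1 << n); ++mask)
        if (__builtin_popcount(mask & (mask >> 1)) == k)
            ++total;
    return total;
}
```

`__builtin_popcount` is a GCC/Clang builtin; `bruteForce(5, 2)` indeed returns 6.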

Solution

We are given the fixed values $n$ and $k$. We introduce a variable value $a$, which we define as the number of contiguous blocks of 1-bits in a bit string. From now on, we will refer to such a block as a 1-block.

For example, $1101110$ has $a = 2$ and $0111010011$ has $a = 3$.

If we fix the value of $a$, our problem becomes simpler, as we will see later. We then have to calculate the number of valid bit strings for each possible value of $a$ and sum them to obtain our final answer.

Before we jump to the actual solution, we need to observe a few things.

Let $m_1, m_2, \ldots, m_a$ be the numbers of 1-bits in the 1-blocks from left to right, respectively. E.g., for $1101110$ we have $a = 2$, $m_1 = 2$ and $m_2 = 3$.
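To make the $m_i$ concrete, here is a small helper (again my own illustration) that extracts the sizes of the maximal runs of 1-bits; e.g. for `"1101110"` it yields {2, 3}:

```cpp
#include <string>
#include <vector>

// Sizes of the maximal runs ("1-blocks") of 1-bits, left to right.
std::vector<int> blockSizes(const std::string &x) {
    std::vector<int> m;
    int run = 0;
    for (char c : x) {
        if (c == '1') {
            ++run;                    // extend the current 1-block
        } else if (run > 0) {
            m.push_back(run);         // a 1-block just ended
            run = 0;
        }
    }
    if (run > 0) m.push_back(run);    // string may end inside a 1-block
    return m;
}
```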

Note the following points:

In a 1-block containing $m$ 1-bits, the bit string $11$ appears $m - 1$ times. This gives adjacent bit count $m_i - 1$ for the $i$-th 1-block alone. So for the entire bit string, we must have $\sum_{i=1}^{a} (m_i - 1) = k$. So we get

$$m_1 + m_2 + \cdots + m_a = k + a. \tag{1}$$

Since a bit string consists of only zeros and ones, we get

$$\text{number of 0-bits} = n - (k + a). \tag{2}$$

Since $k \geq 1$, we must have at least one 1-block, and hence $a \geq 1$. Also note that we must have at least $a - 1$ 0-bits for separating the $a$ 1-blocks. This gives $n - k - a \geq a - 1$. So the range of $a$ is

$$1 \leq a \leq \left\lfloor \frac{n - k + 1}{2} \right\rfloor, \tag{3}$$

where $\lfloor \cdot \rfloor$ denotes the floor function.

Now we move to the actual solution.

For a fixed value of $a$, we employ Beggar's Method (also known as stars and bars) to our rescue. We take an empty bit string and break our problem into two parts.

Number of ways to fill 1-bits in the bit string: we have $k + a$ 1-bits, which we want to distribute among the $a$ 1-blocks, which are currently empty.

Number of ways to fill 0-bits in the bit string: we have $n - (k + a)$ 0-bits, which we want to distribute in the gaps between the 1-blocks.

So, for fixed $a$, our answer will be the product of the above two values. Our final answer will be the sum of these products over all valid $a$.

Filling 1-bits

Imagine our $a$ 1-blocks as empty buckets lined up in a row from left to right, with their positions fixed. We need to find the number of ways we can distribute the $k + a$ available 1-bits among them.

But we have to keep in mind that none of the buckets remains empty. So first, let's put one 1-bit in each of the $a$ buckets. Now, we have to distribute the remaining $k$ 1-bits in whatever way we wish.

Let $y_1, y_2, \ldots, y_a$ be the distribution of the remaining $k$ 1-bits in the $a$ buckets from left to right, respectively. Then we have to count the non-negative integer solutions of $y_1 + y_2 + \cdots + y_a = k$.

By Beggar's Method, we get

$$\binom{k + a - 1}{a - 1}. \tag{4}$$
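Beggar's Method (stars and bars) says the number of non-negative integer solutions of $y_1 + \cdots + y_b = s$ is $\binom{s + b - 1}{b - 1}$. A naive recursive count, purely as a sanity check of mine, agrees with this:

```cpp
// Number of non-negative integer solutions of y_1 + ... + y_b = s,
// counted naively by fixing the first variable and recursing.
long long countSolutions(int s, int b) {
    if (b == 1) return 1;            // the last variable is forced to equal s
    long long total = 0;
    for (int y = 0; y <= s; ++y)     // choose y_1 = y
        total += countSolutions(s - y, b - 1);
    return total;
}
```

For example, `countSolutions(2, 3)` is 6, matching $\binom{4}{2}$.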

Filling 0-bits

We have $a$ buckets, which gives us $a + 1$ gaps to fill zeros in.

The first and the last gaps can remain empty. But the $a - 1$ inner gaps must have at least one 0-bit each; otherwise we would have fewer than $a$ 1-blocks, which contradicts that we have $a$ blocks.

So we first distribute one 0-bit to each of the $a - 1$ inner gaps. Now we can freely distribute the remaining $(n - k - a) - (a - 1) = n - k - 2a + 1$ 0-bits in the $a + 1$ gaps.

Let $z_1, z_2, \ldots, z_{a+1}$ be the distribution of the remaining 0-bits in the $a + 1$ gaps. Then we have to count the non-negative integer solutions of $z_1 + z_2 + \cdots + z_{a+1} = n - k - 2a + 1$.

Applying Beggar's Method, we get

$$\binom{(n - k - 2a + 1) + (a + 1) - 1}{(a + 1) - 1} = \binom{n - k - a + 1}{a}. \tag{5}$$

Final Answer

As discussed earlier, our final answer is the summation of the product of the binomial coefficients obtained in (4) and (5):

$$\sum_{a=1}^{\lfloor (n-k+1)/2 \rfloor} \binom{k + a - 1}{a - 1} \binom{n - k - a + 1}{a}.$$

Conversion To Code

To achieve linear runtime, it's clear that we have to run a for loop over $a$, the number of 1-blocks. In each pass of the loop, we have to calculate the product of the two binomial coefficients in constant time.

But the usual dynamic-programming method (Pascal's rule) takes $O(nr)$ time for calculating $\binom{n}{r}$. We can do it in constant time, however, if we know the binomial coefficients from the last pass. We do it using the following formula:

$$\binom{n}{r} = \frac{n!}{r!\,(n - r)!}. \tag{6}$$

Applying the above formula to the binomial coefficient in (5), and writing $p = n - k + 1$ for brevity, the ratio between two consecutive passes works out to

$$\binom{p - a}{a} = \frac{(p - 2a + 1)(p - 2a + 2)}{a\,(p - a + 1)} \binom{p - a + 1}{a - 1}. \tag{7}$$

For the binomial coefficient in (4), we substitute $r = a - 1$ so as to make it look cleaner; (4) becomes $\binom{k + r}{r}$. Using the identity $\binom{n}{r} = \frac{n}{r}\binom{n - 1}{r - 1}$ on this, we get

$$\binom{k + a - 1}{a - 1} = \frac{k + a - 1}{a - 1} \binom{k + a - 2}{a - 2}. \tag{8}$$

Note that the initial values of these coefficients, i.e., when $a = 1$, are $\binom{n - k}{1} = n - k$ and $\binom{k}{0} = 1$ respectively. Keeping this in mind and using equations (7) and (8), this is what our final code solution looks like.

long long x=n-k, y=1, p=n-k+1, sum=0;  // a = 1: x = C(n-k, 1), y = C(k, 0)
sum += x*y;

for(int a=2; a<=p/2; a++){
	x = (x*(p-2*a+1)*(p-2*a+2))/(a*(p-a+1));  // x becomes C(p-a, a); division is exact
	y = (y*(k+a-1))/(a-1);                    // y becomes C(k+a-1, a-1)
	sum += x*y;
}

As you can see, the for loop runs at most $\lfloor (n - k + 1)/2 \rfloor - 1$ times. Hence, the time complexity of the program is $O(n)$.
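Wrapped in a function (the SPOJ I/O boilerplate is omitted here), the snippet above can be checked against the worked example:

```cpp
// The linear-time solution from the text, as a function.
// x tracks the zeros coefficient C(n-k-a+1, a) and y the ones coefficient
// C(k+a-1, a-1); both are updated in O(1) per pass.
long long solve(int n, int k) {
    long long x = n - k, y = 1, p = n - k + 1, sum = 0;
    sum += x * y;  // the a = 1 term
    for (long long a = 2; a <= p / 2; ++a) {
        x = (x * (p - 2 * a + 1) * (p - 2 * a + 2)) / (a * (p - a + 1));
        y = (y * (k + a - 1)) / (a - 1);
        sum += x * y;
    }
    return sum;
}
```

For instance, `solve(5, 2)` returns 6, agreeing with the example from the problem statement.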

You can find the complete program here.