Parity of the number of bits == 1

Basic Parity Calculation

The function `parity(n: int) -> int` calculates the parity of the number n. Parity is 1 if the number of 1s in the binary representation of n is odd, and 0 otherwise. The function iterates over the bits of n, using `result ^= n & 1` to flip the result whenever the current lowest bit is 1, and then right-shifts n by one bit.

```
def parity(n: int) -> int:
    result = 0
    while n:
        result ^= n & 1
        n >>= 1
    return result
```

Complexity: The time complexity is O(m), where m is the number of bits in the number. This is because the algorithm checks each bit exactly once.
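
For example:

```
# 0b1011 has three set bits (odd); 0b1001 has two (even).
assert parity(0b1011) == 1
assert parity(0b1001) == 0
assert parity(0) == 0
```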

Kernighan's Algorithm for Parity Calculation

The function `kernigan_parity(n: int) -> int` also calculates the parity of n. It improves on the basic approach by jumping directly to the next set bit: the operation `n &= n - 1` drops the lowest set bit of n, and the result is XORed with 1 for each set bit found.

```
def kernigan_parity(n: int) -> int:
    result = 0
    while n:
        result ^= 1
        n &= n - 1  # drop the lowest set bit
    return result
```

Complexity: The time complexity is O(k), where k is the number of bits set to 1 in the number. This improvement over the basic method occurs because it processes only the bits that are 1, skipping all zeros.
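
The same inputs give the same answers, but the loop now runs once per set bit rather than once per bit:

```
assert kernigan_parity(0b1011) == 1
assert kernigan_parity(0b1001) == 0
assert kernigan_parity(0) == 0
```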

I benchmarked it as follows:

First, I had to generate example inputs for the different k, m combinations. I did that like so:

```
from random import Random


class ParityBenchmarking:

    @staticmethod
    def get_inputs(k: int, m: int) -> list:
        # k is the number of bits that are 1
        # m is the bit length
        random_numbers = ParityBenchmarking.gen_random_numbers(k, m, 25)
        return random_numbers

    @staticmethod
    def gen_random_number(k, m):
        r = Random()
        randbits = r.getrandbits(m)
        # Set or clear random bit positions until exactly k bits are set.
        while bin(randbits).count('1') != k:
            random_num_between_0_and_m = r.randint(0, m - 1)
            if bin(randbits).count('1') < k:
                randbits |= 1 << random_num_between_0_and_m
            else:
                randbits &= ~(1 << random_num_between_0_and_m)
        return randbits

    @staticmethod
    def gen_random_numbers(k, m, numNumbers) -> list:
        return [ParityBenchmarking.gen_random_number(k, m) for _ in range(numNumbers)]
```
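
A quick sanity check on the generator (the k and m values here are just for illustration): every returned number has exactly k set bits and fits in m bits.

```
inputs = ParityBenchmarking.get_inputs(k=4, m=48)
assert len(inputs) == 25
assert all(bin(x).count('1') == 4 for x in inputs)
assert all(x < (1 << 48) for x in inputs)
```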

Benchmarking Code:

```
import datetime
from datetime import timedelta
from typing import Callable, Dict


class Benchmarker:

    @staticmethod
    def benchmark(my_func: Callable[[int], None], inputs: list = None, n: int = 5000) -> timedelta:
        start_time = datetime.datetime.now()
        for _ in range(n):
            for input_val in inputs:
                my_func(input_val)
        end_time = datetime.datetime.now()
        return end_time - start_time

    # Shown here on Benchmarker for completeness; it may live on another class in the original.
    @staticmethod
    def print_benchmark3(k: int, m: int, benchmarks: Dict, my_func: Callable[[int], None], impl: str) -> None:
        inputs = ParityBenchmarking.get_inputs(k, m)
        benchmark_key = (k, m, impl)
        # .microseconds is the sub-second component of the elapsed timedelta
        duration = Benchmarker.benchmark(my_func, inputs=inputs).microseconds
        print(f"Input: {inputs} ({benchmark_key}), Time taken: {duration} microseconds")
        benchmarks[benchmark_key] = duration
```
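
The driver loop isn't shown above; a minimal sketch of how the grid of (k, m) combinations reported below might be produced, assuming the classes as reconstructed above:

```
benchmarks = {}
for m in (48, 148, 248, 348, 448, 548):
    for k in (1, 2, 4, 8, 16, 32):
        Benchmarker.print_benchmark3(k, m, benchmarks, parity, "parity")
        Benchmarker.print_benchmark3(k, m, benchmarks, kernigan_parity, "kernigan_parity")
```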

Results:

The summarized benchmarking output shows the walltime measurements for computing parity with the Kernighan and standard algorithms over various combinations of 'k' (the number of set bits) and 'm' (the total number of bits). The walltime generally increases with higher values of both 'k' and 'm'.

For the **Kernighan parity algorithm**, the results show a clear trend where the walltime increases with both the number of set bits 'k' and the total number of bits 'm'. The increase in walltime is more pronounced with an increase in 'k' compared to an increase in 'm'. For example, as 'k' increases from 1 to 32 at a fixed 'm', the walltime shows significant growth. Similarly, for a fixed 'k', increasing 'm' also results in increased walltime, but the effect is generally less dramatic than changes in 'k'.

For the **standard parity calculation** (simply labeled "parity" in the table), the results also show an increase in walltime with increases in 'k' and 'm', but the growth pattern appears less systematic than with the Kernighan method.






Correlations

|   m     | algorithm       | correlation of walltime with k |
|---------|-----------------|--------------------------------|
| 48 | parity | 0.777746 |
| 148 | parity | 0.693694 |
| 248 | parity | -0.590393 |
| 348 | parity | 0.369503 |
| 448 | parity | -0.459705 |
| 548 | parity | 0.388659 |
| 48 | kernigan_parity | 0.999983 |
| 148 | kernigan_parity | 0.999875 |
| 248 | kernigan_parity | 0.999627 |
| 348 | kernigan_parity | 0.999851 |
| 448 | kernigan_parity | 0.999986 |
| 548 | kernigan_parity | 0.999858 |



|   k     | algorithm       | correlation of walltime with m |
|---------|-----------------|--------------------------------|
| 1 | parity | 0.337478 |
| 2 | parity | 0.0487518 |
| 4 | parity | -0.387477 |
| 8 | parity | 0.317987 |
| 16 | parity | 0.348491 |
| 32 | parity | 0.0311046 |
| 1 | kernigan_parity | 0.936622 |
| 2 | kernigan_parity | 0.981157 |
| 4 | kernigan_parity | 0.395702 |
| 8 | kernigan_parity | 0.986231 |
| 16 | kernigan_parity | 0.994522 |
| 32 | kernigan_parity | 0.603758 |

- For **Kernighan's parity**:

  - The correlation between 'k' and walltime is extremely strong (close to 1.0 at every fixed 'm'), making 'k' a reliable predictor of runtime.

  - The relationship between 'm' and walltime is also significant but slightly weaker than 'k', still showing a strong positive correlation.

- For **standard parity**:

  - The correlation between 'k' and walltime is also positive, but much weaker compared to Kernighan's parity, suggesting less predictability.

  - The correlation between 'm' and walltime shows variability, with some coefficients indicating weak to moderate relationships.

In summary, the Kernighan algorithm’s performance is more consistently influenced by the number of set bits and the total number of bits, showing predictably higher times with increases in these parameters. The standard parity algorithm exhibits increased times with larger 'k' and 'm' but with more variability and generally less efficiency compared to Kernighan’s method.

Statistical Analysis

From the regression and correlation coefficients, it's clear that the correlation between 'k' and walltime is very strong for Kernighan's parity, suggesting that 'k' is a reliable predictor of performance for this algorithm. The relationship between 'm' and walltime is significant but slightly weaker. For the standard parity calculation, the correlation is positive but much weaker compared to Kernighan's parity, indicating less predictability.
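
The correlation coefficients could be reproduced from the `benchmarks` dict built above; a minimal sketch, assuming pandas is available (the column names here are my own):

```
import pandas as pd

# benchmarks maps (k, m, impl) -> duration in microseconds (see print_benchmark3).
df = pd.DataFrame(
    [(k, m, impl, t) for (k, m, impl), t in benchmarks.items()],
    columns=["k", "m", "impl", "walltime_us"],
)

# Correlation of walltime with k at each fixed m (first table above)...
corr_with_k = df.groupby(["m", "impl"]).apply(lambda g: g["k"].corr(g["walltime_us"]))
# ...and of walltime with m at each fixed k (second table above).
corr_with_m = df.groupby(["k", "impl"]).apply(lambda g: g["m"].corr(g["walltime_us"]))

print(corr_with_k)
print(corr_with_m)
```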


Profiling

I used the Python memory-profiler package (`@profile`). Unfortunately, I couldn't use perf, since it isn't available on macOS, and I didn't delve into using DTrace.
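
For reference, memory-profiler just needs the target function decorated; running the script then prints a line-by-line memory report. A minimal sketch:

```
from memory_profiler import profile


@profile
def kernigan_parity(n: int) -> int:
    result = 0
    while n:
        result ^= 1
        n &= n - 1
    return result


if __name__ == "__main__":
    kernigan_parity((1 << 64) - 1)  # 64 set bits
```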


Questions


1. How can a bit string, in practical systems, keep growing without running into overflow, considering limited word sizes in CPUs?

   - Some modern programming environments support extended or arbitrary-precision arithmetic that handles integers larger than a CPU's standard word size; Python's built-in int is one example (see the sketch after this list).


2. Does the single assembly instruction for decrementing a register/address alter the computational complexity?

   - Both basic and Kernighan's algorithms operate at the bit level. The difference in computational complexity (O(m) vs. O(k)) does manifest in practical performance benefits, as fewer operations are needed in Kernighan's method when fewer bits are set.
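
To make the arbitrary-precision point concrete: Python's built-in int already grows beyond any machine word, so both functions above handle very wide values unchanged.

```
# A 1000-bit value with exactly three bits set, so its parity is 1.
big = (1 << 999) | (1 << 500) | 1
assert big.bit_length() == 1000
assert parity(big) == 1
assert kernigan_parity(big) == 1
```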


Further exploration could include leveraging CPU-specific features (such as a hardware popcount instruction) to see whether they substantially alter the computational complexity or practical efficiency of these algorithms, and examining how compiler and interpreter optimizations affect speed in different computing environments.
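
One easy experiment in that direction: CPython 3.10+ exposes `int.bit_count()`, a popcount implemented in C, which reduces the whole computation to a single expression and could be benchmarked against the two pure-Python loops above.

```
def builtin_parity(n: int) -> int:
    # int.bit_count() requires Python 3.10 or newer.
    return n.bit_count() & 1


assert builtin_parity(0b1011) == parity(0b1011) == 1
```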


