Do You Think Better with a Pencil?

One thing I want to ask around about is whether people in general think better with a pen in hand.

For me, there's a clear difference between writing something out with a pen while I'm thinking, just thinking about a topic, and typing on a keyboard.

The pen seems to add clarity and reinforcement to my thoughts. I strongly prefer it.

Is there something fundamental about writing? Or have we used it so much during our school days that it's grown on us as a trigger? As in, the minute we grab a pen, our brains are so used to learning with it that we unconsciously kick off some process?

Anyhoo, a few reasons come to mind for why writing is more potent than typing for learning and organizing thoughts:

  • There are far fewer restrictions imposed when you're using a pen. When you're typing on a computer, you're confined to the characters on the keyboard; with a pen, you can write or draw anything.
  • If you're looking at the paper while you're writing, you're getting continuous feedback. You feel the stroke of the pen tracing the paper, and you see the ink make marks in real time. Compare that to the screen: you type each character one by one, and there's a noticeable delay between when you type and what you see. It's very discrete.
  • There's a sense of uniqueness to your writing and drawing. Sure, some people will have very similar handwriting, but yours is effectively unique. Compare this to typing on a keyboard, where everyone's output looks more or less the same.

That’s all I have so far. Maybe I’ll come back to this topic when I have more data or insight.

BFS and DFS in Problem Solving

I think a lot of problem solving can be summarized in two steps.

  1. Generate possible approaches to solving the problem.
  2. Drill down on one approach and go as far as you can with it. Pray to God you didn't make an assumption that will kill the approach and lead you to a dead end.

In computer science, two of the most fundamental graph search algorithms are breadth-first search (BFS) and depth-first search (DFS). I think #1 ties into breadth-first search, and #2 ties into depth-first search.

I've come up with a good example of how both work!

Let’s say that you want to get a phone number of a student at your school. Let’s call him Greg.

Now the constraint is that you don't have a phone book (which is effectively an index). No Google, no internet. NOTHING. Pretend this is the 70s. How would you get this guy's number?

Breadth First Search Approach:

One approach is to ask all of your friends. You ask your close friends Sarah, Matt, and Caleb.

They say no, but they give you a list of their friends who could have Greg’s number.

So you write down those people's names, and you go hunt them down. Let's call them Abby, Bacon, Chris, Duncan, and Erfan.

But they don't have Greg's number either, so they all give you their friends' contacts. You pool them into a huge list.

Now you use that list to ask those people. You do this over and over until you find Greg’s number. Or the man himself =D.

Depth First Search Approach:

Instead of collecting lists from all of your friends, you pick the one friend you like the most: Matt.

You ask Matt. Matt doesn’t have Greg’s number. But Matt points you to David.

You ask David. David doesn’t have Greg’s number, but he thinks Peter has Greg’s number.

You ask Peter, who refers you to Nathan, and Nathan refers you to whoever else, and you keep going until you find the person who has Greg's number.

I would say that both approaches to getting Greg's number are valid, but there are definitely tradeoffs in picking either breadth-first search or depth-first search.
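To make the contrast concrete, here's a minimal Python sketch of both searches over the phone-number hunt. The friends graph, the names in it, and the assumption that Nathan is the one holding Greg's number are all made up for illustration:

    from collections import deque

    # A made-up friendship graph; who holds Greg's number is also an assumption.
    friends = {
        "me": ["Sarah", "Matt", "Caleb"],
        "Sarah": ["Abby", "Bacon"],
        "Matt": ["David"],
        "Caleb": ["Chris", "Duncan", "Erfan"],
        "David": ["Peter"],
        "Peter": ["Nathan"],
    }
    has_gregs_number = {"Nathan"}

    def bfs_find(start):
        # Ask everyone at the current "distance" before going any deeper.
        queue = deque([start])
        visited = {start}
        while queue:
            person = queue.popleft()
            if person in has_gregs_number:
                return person
            for friend in friends.get(person, []):
                if friend not in visited:
                    visited.add(friend)
                    queue.append(friend)
        return None

    def dfs_find(person, visited=None):
        # Chase one referral chain as far as it goes before backtracking.
        if visited is None:
            visited = set()
        if person in has_gregs_number:
            return person
        visited.add(person)
        for friend in friends.get(person, []):
            if friend not in visited:
                found = dfs_find(friend, visited)
                if found is not None:
                    return found
        return None

    print(bfs_find("me"))  # Nathan
    print(dfs_find("me"))  # Nathan

Same answer, different order of door-knocking: BFS burns through a wide contact list level by level, while DFS bets everything on one chain of referrals.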

Problem Solving and Graph Algorithms

First, you have to generate all possible approaches. You go through all of them and evaluate their potential. These ideas can lead to other sub-ideas, so you explore those too, but at a very shallow level. You don't want to chase the rabbit too hard - you want to see things at surface level.

Afterwards, you pick an idea that you predict to be effective. You drill down on that idea hard, and go for it.

Effectively, you're using BFS and DFS for problem solving. Of course, depending on the results of your DFS, you could climb back up and swap to BFS instead. So the analogy here isn't 1:1.

The Man in the IDE

Teddy Roosevelt's The Man in the Arena is great. Then it occurred to me: what if I rewrote it for modern-day software engineers?

Lo and behold, The Man in the IDE!

The Man in the IDE

It is not management who counts; not the VP who points out how lousy the features are, or how the developer could have done it better.

The credit belongs to the engineer who is actually fighting the compiler, whose eyes are marred by parentheses and asterisks; who strives viciously; who errs, who comes short again and again, because there is no effort without exceptions and stack overflows; but who does actually strive to do the deeds; who knows the great shortcuts, the great macros; who spends himself on a worthy line of code; who at the best knows in the end the triumph of low CPU cycles, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid losers who know neither binary nor linked lists.

Peakfinding with Matrix Multiplication

A coworker of mine got really into learning algorithms, and shared with me how the peak-finding algorithm is O(n log n).

The problem is stated as follows:

Given a list of numbers, for example [1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4], find the peaks. A peak is defined as a number where the value that comes before it and the value that comes after it are both less than it.

For example, 5, 10, and the second 5 (the one right after the 3) are peaks.

You can do a simple linear search, or even do a binary search.

A linear search will take O(n) time, as you have to traverse the whole array.

If you're looking for one peak, you can use binary search for O(log n). But if you want more than one peak, you have to run binary search over and over.

I think this problem becomes kind of challenging if you try to solve it with binary search, since you need to mark the peaks you've already found.

You have to assume a lot of things about the data. For example, you could overwrite found peaks with a dummy marker, i.e. -1, but that only works if all the values are positive; otherwise it could lead to a drastic error.

One strategy is to record the left neighbor, the right neighbor, and the value of each found peak.

Another strategy is to keep a list of all the indices that are peaks so far, and pass it down on each recursive call.

Really, you have to know whether or not you're going to need multiple peaks. If you're sure there's going to be more than one peak, then you should go with a linear peak finder. (However, if the first peak is at the end of the array, you're kind of screwed, since the linear scan does a full pass before finding anything.) On the other hand, if you just run binary search repeatedly, you're increasing the complexity to O(n log n).

These are all extraneous details! What I REALLY wanted to find out is whether or not you can do this with matrix algebra.

And it turns out that you can! You can turn this into a linear algebra problem.

Suppose you have an original array: [ 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4]

If you shift every value to the left, and append a -1, you get:

[ 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4, -1]

Shift everything to the right (drop the last value, prepend a -1), and you get:

[ -1, 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5]

Stack the two shifted arrays and the original together, and you get a 3 × n matrix:

[ 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4, -1]

[ 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4]

[ -1, 1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5]

Now, my first instinct was to transpose this matrix and multiply it by a [-1, 1, -1] kernel. But a single kernel like that tests array[i] > array[i-1] + array[i+1], which is not the peak condition; a peak has to beat each neighbor independently, so the comparison needs to be split into two kernels, as described below.
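A tiny numeric check of that claim (the numbers are just an example I picked):

    import numpy as np

    a = np.array([4, 5, 4])  # the middle 5 is a peak: 4 < 5 > 4

    # Single [-1, 1, -1] kernel: -4 + 5 - 4 = -3, not positive -> peak missed.
    single_kernel = np.dot(np.array([-1, 1, -1]), a)

    # Two separate comparisons, like the two kernels below: both positive -> peak found.
    both = (a[1] - a[0] > 0) and (a[1] - a[2] > 0)

    print(single_kernel > 0, both)  # False True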

My burning question: will this be faster than direct left and right comparisons? My reasoning is that modern processors have SSE instruction sets that are heavily optimized for exactly this kind of arithmetic. I wanted to use numpy, which I believe takes advantage of SSE and optimized linear algebra routines.

I used a [-1, 1] kernel for the top and middle rows of the data, and a [1, -1] kernel for the bottom and middle rows. Applying those kernels gave me two difference vectors. Wherever both were positive, there was a peak.

    import copy
    import random
    import time

    import numpy as np

    def peak_finder_binary(ignore_idx, array, beg, end):
        # Binary-search-style peak finder over array[beg:end], skipping
        # indices already recorded in ignore_idx. Returns -1 if nothing found.
        if end - beg <= 1:
            return -1
        mid = (beg + end) // 2
        mid_val = array[mid]
        if mid not in ignore_idx:
            # Boundary cases: only one neighbor to compare against.
            if mid - 1 < 0 and array[mid + 1] < mid_val:
                return mid
            if mid + 1 >= len(array) and array[mid - 1] < mid_val:
                return mid
            if mid - 1 < 0 or mid + 1 >= len(array):
                print("herro")  # debug marker: boundary index that isn't a peak
                return -1
            if array[mid - 1] < mid_val and array[mid + 1] < mid_val:
                return mid
        left_peak = peak_finder_binary(ignore_idx, array, beg, mid)
        if left_peak >= 0:
            return left_peak
        right_peak = peak_finder_binary(ignore_idx, array, mid, end)
        if right_peak >= 0:
            return right_peak
        return -1

    # Run the binary peak finder repeatedly, ignoring peaks already found.
    test_1 = [10, 20, 15, 2, 23, 90, 67]
    ignore_idx = []
    peak_idx = peak_finder_binary(ignore_idx, test_1, 0, len(test_1))
    while peak_idx != -1:
        print(test_1[peak_idx])
        ignore_idx.append(peak_idx)
        peak_idx = peak_finder_binary(ignore_idx, test_1, 0, len(test_1))

    def peak_finder_linear(array):
        # Single O(n) pass: compare each element against both neighbors.
        idxes = []
        start = time.time()
        for i in range(1, len(array) - 1):
            if array[i - 1] < array[i] and array[i + 1] < array[i]:
                idxes.append(i)
        end = time.time()
        print(end - start)

    def peak_finder_matrix_algebra(array):
        # Build the left-shifted, original, and right-shifted rows, padding
        # the edges with -1 sentinels (assumes the data is non-negative).
        left = copy.copy(array)
        right = copy.copy(array)
        mid = copy.copy(array)
        left.pop(0)
        left.append(-1)
        right.pop()
        right.insert(0, -1)

        # (mid - left)[i] = array[i] - array[i+1]: positive where we beat
        # the right neighbor.
        mtx_left = np.matrix([np.array(left), np.array(mid)], dtype=np.int32)
        mtx_left = mtx_left.transpose()
        kernel_left = np.array([[-1], [1]])
        # (mid - right)[i] = array[i] - array[i-1]: positive where we beat
        # the left neighbor.
        mtx_right = np.matrix([np.array(mid), np.array(right)], dtype=np.int32)
        mtx_right = mtx_right.transpose()
        kernel_right = np.array([[1], [-1]])

        start = time.time()
        rst_left = np.matmul(mtx_left, kernel_left)
        rst_right = np.matmul(mtx_right, kernel_right)
        rst_left = rst_left > 0
        rst_right = rst_right > 0
        rst = np.multiply(rst_left, rst_right)  # True only where both hold: a peak
        end = time.time()
        print(end - start)

    # Questions: Will the values be unique? Will the values be negative?
    # Will there be repeats? Will there be memory constraints?
    # When solving problems, the assumptions must be crystal clear.
    # (Also, "array" is a terrible choice for a parameter name...)
    my_randoms = [random.randint(0, 100000000) for r in range(1000000)]
    peak_finder_matrix_algebra(my_randoms)
    peak_finder_linear(my_randoms)
    # herro
    # 0.0135238170624
    # 0.0769588947296

Unless the range of numbers is huge, you won't get a performance boost; it's better to just loop through once to find the peaks. But if there's real variability in the range of numbers, or maybe if the numbers are floating point, I think this is worth a shot. I didn't time the data copying, because theoretically you should be able to point the first element of the top and bottom arrays at the original middle array, offset by one, instead of copying.
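For what it's worth, numpy slicing gives you exactly that kind of zero-copy shifting, since slices are views into the same buffer. A rough sketch of the comparison with no sentinels and no copies (interior elements only):

    import numpy as np

    a = np.array([1, 2, 5, 4, 6, 7, 8, 9, 10, 3, 5, 4])

    # a[1:-1] is the middle, a[:-2] the left neighbors, a[2:] the right
    # neighbors -- all three are views into the same buffer, no copying.
    is_peak = (a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])
    print(np.nonzero(is_peak)[0] + 1)  # [ 2  8 10] -> values 5, 10, 5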

All of this tells us that there's so much going on under the HOOD!!!

Ramblings on Identity Verification in the Future

Recently I read this post on Hacker News (https://news.ycombinator.com/item?id=18180153) about how a PSN account got poached by another user. Around the same time, I got a too-good-to-be-true email from someone and questioned whether or not it was generated by a computer. Because AI has gotten really good…

This led me to question how identity verification will be done in the future. So far, the surest method seems to be two-factor authentication: ANYTHING that looks like suspicious behavior on a website, such as logging in from a highly unusual location, triggers a second factor, usually via SMS.

For phone calls, it's usually done by PINs, social security numbers, addresses, and so on.

But all of this information can be compromised. Even SMS can, theoretically, be hijacked, since it's a wireless signal sent to your phone. I'm sure there's a way to do it.

With the advent of machine intelligence, and compromises in our data, will there be a surefire way to guarantee that the person that is using the device or on the phone is who they are?

After all, lyrebird.ai can already mimic your voice, and there are projects like deepfakes that can replace your face. Sure, those aren't generated in real time, but they could be, given sufficient computing power. For example, if it takes a human 300 milliseconds to respond, then with enough compute a computer could generate a response and time it to land at 300 milliseconds. Even if you establish a connection between two parties, time the responses, and make the bounds near impossible for computers to process and relay information within, what if the connection was hijacked from the beginning? (Side question: how would one figure out the absolute lower bound on a computation's time? Is that impossible?)

The only sensible solution I can think of for now is one where the captured data cannot be faked. For example, if I'm speaking through a phone, the voice recording that I transfer over the network has to be guaranteed to come from me, and to not have been tampered with.

First, there must be some kind of coding scheme embedded in our devices that guarantees the captured data is authentic. So no more of this metadata that can be trivially changed by programs when you capture photos, recordings, etc. It has to be tamper-proof. Maybe, via public/private key cryptography, we could engrave the private key on the device, in silicon or something, and everyone would have access to the public key. By sending the signed data, applications with the public key can verify it, and the private key would, hopefully, be held by exactly that one device.
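What I'm describing is essentially a digital signature. As a toy sketch of that half of the idea, using the Python cryptography package (the part where the key lives in tamper-proof silicon is, of course, hand-waved):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for a key engraved into the device's silicon.
    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()

    recording = b"voice recording bytes captured by the device"
    signature = device_key.sign(recording)

    # Anyone with the public key can check that the data came from this
    # device and wasn't tampered with in transit.
    try:
        public_key.verify(signature, recording)
        print("genuine: signed by the device's key")
    except InvalidSignature:
        print("forged or tampered")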

Now the question is how to protect that key, which leads me to wonder whether we could use our biological features to generate a consistent key. Can we take some biological signal that cannot change no matter how much we change, put it into some function, and always get the same private key out, guaranteeing that this person is who they say they are? Who knows. Maybe we can measure our telomeres at that instant, down to the nearest microsecond? I have no idea…

The other thing I remembered was quantum entanglement. Maybe, just maybe, we could create quantum-entangled devices, somehow embed them in ourselves, and use them to communicate.

But then the other question is: how do you guarantee that the agent on the other side doesn't get their device stolen?

Anyhoo, this is an interesting question to think about. How do we guarantee that the person we're talking to is who they say they are? And how do we guarantee that the things made in our society are genuine and real, when computers keep getting better and better?
