Archive for November, 2008

Lab 7: Flip Flops Revisited

November 26, 2008

As mentioned in one of the first entries on this blog, flip-flops are logic devices that “hold” data. I also said in that entry that I could not come up with a particularly compelling reason to use these devices other than “because you can.” Well, I was sadly mistaken: flip-flops are incredibly useful.

For instance, if you were building a clock or counter, it would be incredibly useful to remember where you were last. A calculator is another good example, since you need to “enter” each number and operation. The two applications require very different approaches, though. We’ll start with the calculator, since it’s easier.

To build a calculator, let’s use our imaginations for a second. On a traditional digital calculator you enter your first number, then the operation, then the next number, repeating the process over and over until you get to your final answer. Alright, let’s imagine a canal: each number is held behind a dike and pushed forward through the canal one at a time. Then let us imagine that our operations are each a path this water can go through. Now, the problem is how we hold the number so we can continue to add/subtract as long as we see fit. Basically: input, operation, hold, more input, operation, and so on.

So we need a method of holding, since we already created an adder/subtractor that works perfectly well. A “D” flip-flop does the trick very well, practically by definition. A D flip-flop latches its input at an instant (usually defined by a periodic clock) and only changes that stored value when the clock hits again. Normally you’d just give the clock a regular frequency, but in this case the clock is going to be the add or subtract cue. Just like when you press + on a simple digital calculator and the screen changes, our “clock pulse” will be the + button.
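
If it helps to think of it in code, here is a toy model of a D flip-flop in Java (the class and method names are my own, purely for illustration): the stored value only changes when clock() is called, which in our calculator is the button press.

// Toy model of a D flip-flop: the output Q only changes when clock() is called.
// In the calculator, the "clock pulse" is the press of the + or - button.
class DFlipFlop {
    private int q;                 // the stored value (Q, or "D+")
    int q() { return q; }          // read the current output
    void clock(int d) { q = d; }   // latch the input D on a clock pulse
}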

Once we understand the underlying problem and solution, the design is exceedingly simple. We just need to plug the parts together; see the diagram:

Image source

With this diagram it’s a matter of plug-and-chug to create the circuit. The user input (switches for the binary number) is tied to one input of the full adder (see lab 3), while the adder’s output is fed into a D flip-flop. The flip-flop’s output (read as D+) is sent both to the Y input of the full adder (so initially it’s the user’s input + 0, and all further operations are “previous total” + “user input”) and to the ROM we designed in lab 5/6. Same use, same setup: the ROM translates the final output into normal decimal.
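
As a rough software sketch of that wiring (reusing the toy DFlipFlop above; in the real circuit the + below is the lab 3 full adder, and the latched total also feeds the ROM and display):

// Sketch of the accumulator: the flip-flop's output feeds back into the adder,
// and each press of + acts as the clock pulse that latches the new running total.
public class CalculatorSketch {
    public static void main(String[] args) {
        DFlipFlop register = new DFlipFlop();   // holds the running total (starts at 0)
        int[] userInputs = { 5, 3, 2 };         // numbers "switched in" by the user
        for (int x : userInputs) {
            int sum = register.q() + x;         // previous total + user input
            register.clock(sum);                // the + button press latches the result
        }
        System.out.println(register.q());       // prints 10; the ROM then drives the display
    }
}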

Some important things to make note of:
-The number of flip-flops depends on the number of bits you need to hold. You arrange them to cover all possible outcomes: if there are 2^N combinations, you need N flip-flops (256 combinations, for example, takes 8).
-There are many types of flip-flops, each with its own uses. I will cover them further in another post.
-The XOR arrangement in the full schematic is there to handle subtraction. This was covered in Lab 3, part b.

The full-sized schematic can be found [HERE]

Lab 5/6: Usable Output (7-segment displays)

November 12, 2008

So, after going through the previous labs, I understand that the content is a little much. The major problem is that the systems work in binary while we all know decimal. What would make this all much better is a “translator” so that we could read the outputs quickly and easily without being some computer whiz kid.

For numbers we will use a 7-Segment display, as seen below:

These displays can be found in all sorts of gadgets: microwaves, calculators, watches, billboards, etc. They are easy to read and we are all very familiar with them.

The Catch: We need a way to translate from binary to a form that turns on/off the proper segments.

There are two ways we can tackle the problem, with widely different solutions.

Use BCD:
By using 4 bits to convey each decimal digit, we can solve the problem with logic gates. For instance, 8 = 1000. We translate that into 1111111 or 0000000 (depending on the device; some displays light a segment on a 1, others on a 0) and we get a usable output.

Pros:
-Can build with basic parts
-Can (if you had too much free time) count with other bases up to base 16

Cons:
-Hard to expand
-Can display errors (garbage patterns) for invalid codes
-Extra, unused input combinations (only 10 of the 16 four-bit codes are decimal digits)

By using some Boolean tricks we can minimize or eliminate many of these errors. The final outcome looks something like [LINK].
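
The exact gate-level result depends on your Karnaugh maps, but as a hedged illustration, one common minimization for segment “a” (which is lit for every digit except 1 and 4) is B3 + B1 + B2·B0 + B2'·B0'. A quick Java check of that expression against the truth table:

// Checks one illustrative minimized expression for segment "a" of the display.
// The bit names (b3..b0) and the expression are my own working, not the lab's.
public class SegmentACheck {
    public static void main(String[] args) {
        for (int d = 0; d <= 9; d++) {
            boolean b3 = (d & 8) != 0, b2 = (d & 4) != 0,
                    b1 = (d & 2) != 0, b0 = (d & 1) != 0;
            boolean segA = b3 || b1 || (b2 && b0) || (!b2 && !b0); // the minimized logic
            boolean expected = (d != 1 && d != 4);                 // segment "a" truth table
            System.out.println(d + ": " + (segA == expected ? "ok" : "MISMATCH"));
        }
    }
}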

The other possibility is a full translator: a custom piece of machinery that just spews out an answer for whatever input is provided. This tool is called a ROM (read-only memory).

For a ROM, you program all possible outputs so that it just “answers” the input automatically, regardless of “logic”.

Pros:
-Easily implemented
-Scalable (up to max size)

Cons:
-Must be programmed (time-consuming)
-Has a maximum size; larger arrays will require larger ROMs

This comes in handy with large systems. For instance, we can work the logic out by hand for a system with up to 6 inputs; anything larger than that effectively requires a fourth dimension or powerful computing equipment. So for a system of 8 inputs (as in this case), a programmed ROM saves a lot of time.

For this case, the programmed ROM will handle numbers from 0 to 255. This way, we get a reusable part (much like the full adder from lab 3) that we can use in further labs.

For instance, much like the previous image, we can have an 8-bit translator that is much simpler to implement.
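
In software terms a ROM is nothing more than a pre-filled lookup table: the binary input is the address, and whatever was programmed at that address drives the segments. A hedged sketch (the segment encoding and names are mine, not the actual ROM contents from the lab):

// A ROM is just a pre-programmed lookup: the 8-bit input (0-255) is the address,
// and the stored word turns the display segments on or off.
public class SevenSegRom {
    // Segment patterns for digits 0-9, bit order gfedcba (a common convention).
    static final int[] DIGIT_ROM = {
        0b0111111, 0b0000110, 0b1011011, 0b1001111, 0b1100110,
        0b1101101, 0b1111101, 0b0000111, 0b1111111, 0b1101111
    };

    public static void main(String[] args) {
        int address = 154;                          // an 8-bit input from the adder (0-255)
        // one lookup per decimal digit, each driving its own 7-segment display
        int hundreds = DIGIT_ROM[address / 100];
        int tens     = DIGIT_ROM[(address / 10) % 10];
        int ones     = DIGIT_ROM[address % 10];
        System.out.println(address + " -> "
            + Integer.toBinaryString(hundreds) + " "
            + Integer.toBinaryString(tens) + " "
            + Integer.toBinaryString(ones));
        // prints: 154 -> 110 1101101 1100110 (leading zeros not shown)
    }
}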

NOTE: Quartus “simplifies” the inputs/outputs into buses. Those thick lines are technically several inputs in one. Also, it compresses the inputs/outputs into one symbol that you assign for all “addresses”.
~Locke
Writer’s Note: I will post images later tonight. I do not have access to all the parts on this machine, but the tags have already been added. Check back soon!

Lab 4: Multiplier

November 11, 2008

After creating an adder/subtractor, a multiplier is a good next step.

Once again, before we start talking about any circuit logic or design, we need to figure out the mechanics of the problem. Multiplication, like addition, isn’t very difficult. We all know the rules, but once again, binary adds a new perspective.

Note 1: Multiplication consists of two parts: first the bit-by-bit multiplication, then the addition of the partial products. Because we will need to multiply numbers longer than one bit, such as 8 (1000), we are forced to do the addition step.

Note 2: With binary, 1×1 still equals 1, so no need to carry bits in the multiplication step.

So, starting with regular multiplication let’s take a look:

As you can see, some digits are highlighted. As noted in Note 2, we will not have to carry numbers in the multiplication step, only in the addition step. Therefore, the multiplication itself is easy:

0 × 0 = 0
0 × 1 = 0
1 × 0 = 0
1 × 1 = 1

That, by definition, matches an AND gate (ironic, no?). The tough part is figuring out how to handle all the additions. As I said in both of the adder posts, the adder can be expanded indefinitely to add TWO sets of bits together. When you start having to add multiple lines of bits, things get sticky.

The layout of our math (to keep the notation straight) is gonna look like this:

An and Bn are the bits being multiplied. The design depends on how many digits are being combined.
Xn = Bn x A0
Yn = Bn x A1
Zn = Bn x A2
Vn = Bn x A3

Also, buried in all this mess are the carry bits. We will tackle those in the design stage.

So, we have the basic framework to build our multiplier, but how to put the pieces together? Let’s look at the image again.
F0 = X0 (easy!)
F1 = Y0 + X1 (also easy: no carry coming in)
F2 = Z0 + Y1 + X2 (okay, problematic…)

How do we solve this? Well, simply showing the design can be just as useful as talking about it.

The image may be hard to see in this blog, so try a LINK.

As you can see, the adders are in a similar orientation to our multiplication table at the top. The addition is sandwiched between the bits. If you look across the top of the design, the An bits are handled vertically, while the Bn bits are brought in horizontally (to make the design less of a rat-trap). The layers are added together one at a time: the X row plus the Y row, then that result plus the Z row, then the V row, and so on. The design is easy to expand or condense and is convenient.

So, to expand the design, here are the equations (A and B being the number of bits in each input):
Full adders: B across by (A - 1) down
AND gates: B × A
Output bits (Fn): A + B
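
To convince yourself the layout really works, here is a quick Java sanity check that builds the same thing in software for two 4-bit numbers: AND gates for the partial products, full adders for the ripple additions. All names are mine; this is a sketch of the idea, not the actual schematic.

// Software sanity check of the AND-gate / full-adder array for 4-bit x 4-bit.
public class ArrayMultiplierCheck {
    // One-bit full adder: sum = x XOR y XOR cin, carry out if any two inputs are 1.
    static int[] fullAdder(int x, int y, int cin) {
        return new int[] { x ^ y ^ cin, (x & y) | (x & cin) | (y & cin) };
    }

    // Ripple-carry addition of two values, one full adder per bit position.
    static int addRipple(int x, int y) {
        int sum = 0, carry = 0;
        for (int i = 0; i < 8; i++) {
            int[] s = fullAdder((x >> i) & 1, (y >> i) & 1, carry);
            sum |= s[0] << i;
            carry = s[1];
        }
        return sum;
    }

    // Row X = B x A0, row Y = B x A1, and so on: one AND gate per bit pair,
    // each row shifted over by one place before it is added in.
    static int multiply4x4(int a, int b) {
        int result = 0;
        for (int i = 0; i < 4; i++) {
            int row = 0;
            for (int j = 0; j < 4; j++) {
                row |= (((b >> j) & 1) & ((a >> i) & 1)) << j;  // the AND gate
            }
            result = addRipple(result, row << i);               // add this layer in
        }
        return result;
    }

    public static void main(String[] args) {
        for (int a = 0; a < 16; a++)
            for (int b = 0; b < 16; b++)
                if (multiply4x4(a, b) != a * b)
                    throw new AssertionError(a + " x " + b);
        System.out.println("4x4 array multiplier matches a*b for all inputs");
    }
}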

The real trouble of the design is making sure that all the wires are connected properly. One bad connection and the system will not work.
~Locke

Rock-Paper-Scissors

November 3, 2008

This was a fun programming assignment (if only because I heard so much whining and head-scratching from my fellow classmates).

The task: Create a game of rock-paper-scissors to play against the computer.

For this, we need a random number generator so that it’s like playing the game against a die: you don’t know what its outcome will be, and the computer isn’t able to “cheat”. Luckily enough, Java comes with a random number generator, so it’s just a matter of handling the outcomes.

I assembled my program in two halves. The first half, its own method, checks the user input and returns a number to correspond with the choice (1 for rock, 2 for paper, 3 for scissors, and 4 if the user doesn’t enter a valid value). Then the computer generates a random number and compares it against the user input using a switch/case tree with an if/else tree inside each case. So, it switches on the computer’s choice and tests it against the user’s input.

Thankfully, it worked on the first try and is rather clean and concise. The code.
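
I’m not reproducing the actual source here, but a hedged sketch of the structure described above might look like this (all names and messages are mine, not the graded assignment):

import java.util.Random;
import java.util.Scanner;

// Sketch of the approach: one method maps the user's text to a number, then the
// computer's random pick is handled with a switch whose cases each hold an if/else tree.
public class RockPaperScissors {
    // 1 = rock, 2 = paper, 3 = scissors, 4 = invalid input
    static int readChoice(String s) {
        if (s.equalsIgnoreCase("rock")) return 1;
        if (s.equalsIgnoreCase("paper")) return 2;
        if (s.equalsIgnoreCase("scissors")) return 3;
        return 4;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("rock, paper, or scissors? ");
        int user = readChoice(in.nextLine());
        if (user == 4) { System.out.println("Invalid choice."); return; }

        int computer = new Random().nextInt(3) + 1;   // 1..3, the "die roll"
        switch (computer) {
            case 1: // computer threw rock
                if (user == 1) System.out.println("Tie.");
                else if (user == 2) System.out.println("You win: paper covers rock.");
                else System.out.println("You lose: rock smashes scissors.");
                break;
            case 2: // computer threw paper
                if (user == 2) System.out.println("Tie.");
                else if (user == 3) System.out.println("You win: scissors cut paper.");
                else System.out.println("You lose: paper covers rock.");
                break;
            default: // computer threw scissors
                if (user == 3) System.out.println("Tie.");
                else if (user == 1) System.out.println("You win: rock smashes scissors.");
                else System.out.println("You lose: scissors cut paper.");
        }
    }
}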
~Locke

Number theory and lab 3 continued

November 3, 2008

So, the schematics below may not make much sense on their own, so let me go through the process.

The idea is that you have a set of tools, namely AND, OR, NOT and XOR. With these four operators you can express just about any set of outcomes. In the case of lab 3, we are looking for a way of expressing addition and subtraction in a simple manner that can be infinitely expanded.

This may sound daunting, but it’s not; it’s only a puzzle. And like a puzzle, we are looking for not only correct combinations, but efficient ones. This usually requires breaking a problem into smaller chunks and analyzing exactly what each part does.

Let’s start with normal addition, namely two numbers at a time. If you add just single digit numbers it may look like the following:

This is the simplest addition possible: no carry, only one-digit numbers. We can also do larger numbers and carries, but how do we look at them? Well, for instance, you will never carry a number larger than 1 in this kind of addition, even with the largest numbers we can muster.

So it stands to reason that you will never have to carry more than a 1 when adding two numbers in any base (even 9 + 9 plus a carried 1 is only 19, which still carries just a 1). A valuable piece of information.

Next, we figure out that we will have up to 3 bits being added: the “x” bit, the “y” bit, and the “carry” bit. It also stands to reason that even if all 3 are 1’s, the largest outcome is a 1 with a carry of 1. So we split the system in two: the sum, X + Y + Carry (handled by XOR gates), and then figuring out IF there is another carry. It stands to reason that if ANY 2 of the 3 inputs are 1, then the carry out will need to be 1.

Great, so how do we implement that? Well, we construct a table of all possibilities, like so:

I unfortunately do not have the space to explain the method behind how I built the “carry” system in my adder, but this is what the entire system looks like:
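
In code terms, that sum and carry logic boils down to two expressions; here is a quick check in Java of every input combination (my own sketch, not the actual schematic):

// One-bit full adder check: sum = X XOR Y XOR Cin,
// carry out = 1 whenever any two of the three inputs are 1.
public class FullAdderCheck {
    public static void main(String[] args) {
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++)
                for (int cin = 0; cin <= 1; cin++) {
                    int sum = x ^ y ^ cin;
                    int cout = (x & y) | (x & cin) | (y & cin);
                    System.out.printf("%d + %d + %d = carry %d, sum %d%n", x, y, cin, cout, sum);
                }
    }
}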

The next thing we need is to handle subtraction. In binary we usually use two’s complement. To oversimplify: you express a negative number by inverting the bits and adding 1. This also requires an extra bit. So +9 (01001) can be expressed as -9 by inverting to 10110 and adding 1, giving 10111. This may sound overly complicated, but it makes life very easy at the logic level. By tying one input of an XOR gate to a switch (giving 0 for addition and 1 for subtraction) and the other to the “y” input (in this case), it will flip the bits for us, and the same switch can feed the first carry-in to supply the “+1”. This way, using our “full adder”, we can also have a subtractor with the flip of a switch.
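
As a hedged 5-bit sketch of that trick (the original lab wiring may differ in the details): the subtract switch drives the XOR gates that flip the Y bits, and the same switch supplies the “+1” as the adder’s first carry-in.

// 5-bit add/subtract sketch: when 'subtract' is 1, the XORs invert Y and the
// switch doubles as the initial carry-in, which is exactly two's complement.
public class AddSubSketch {
    static int addSub(int x, int y, int subtract) {
        int mask = 0b11111;                                 // 5-bit circuit
        int yIn = (y ^ (subtract == 1 ? mask : 0)) & mask;  // XOR gates flip Y for subtraction
        return (x + yIn + subtract) & mask;                 // adder with the switch as carry-in
    }

    public static void main(String[] args) {
        System.out.println(addSub(9, 4, 0));   // 13
        System.out.println(addSub(9, 4, 1));   // 5
        System.out.println(addSub(4, 9, 1));   // 27 = 11011, i.e. -5 in 5-bit two's complement
    }
}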

Next Lab: Multiplying
~Locke