Any number of flip flops that share the same clock signal can be grouped together to work as a single unit. Common register sizes are 4, 8, 16, and 32 flip flops (corresponding to 2^2, 2^3, 2^4, and 2^5).
A register functions as a more complete unit of memory within a circuit, usually grouping together data with a similar meaning, or simply providing a more compact way of storing a set of bits. Most registers are built from D flip flops because of the lower pin count they need (JK flip flops would require much larger ICs due to the extra control pins).
There are four main kinds of registers, categorized by the way in which data is put in and taken out.
Parallel In, Parallel Out Registers
This kind is the simplest of registers. A parallel in/parallel out register is just a collection of flip flops that share a common clock signal but have independent data inputs and outputs.
Their main application is storing data or state information (represented in binary digits) for use in later steps in sequential circuits.
Serial In, Parallel Out Registers
Serial in/parallel out registers get their input from a single data line. Each flip flop's output serves both as one of the register's outputs and as the data input of the next flip flop. The result is that on every clock pulse the stored bits move one place and a new bit is captured at the first flip flop; the last flip flop simply acts as another output, its data being overwritten on every clock pulse.
This type of register is used as a buffer in digital data lines, where data is sent over a single wire but each bit is needed separately for further use.
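The shift-in behavior described above can be sketched in a few lines of Python. This is a behavioral model only, not a gate-level simulation, and the name `sipo_shift` is ours for illustration:

```python
# Behavioral sketch of a 4-bit serial-in/parallel-out shift register.
# Illustrative only: `sipo_shift` is our name, not from any library.

def sipo_shift(state, serial_in):
    """One clock pulse: the stored bits move one place and the
    new bit is captured at the first flip flop."""
    return [serial_in] + state[:-1]

# Shift in the bit stream 1, 0, 1, 1 (first bit sent first):
state = [0, 0, 0, 0]
for bit in [1, 0, 1, 1]:
    state = sipo_shift(state, bit)

print(state)  # all four bits now available in parallel: [1, 1, 0, 1]
```

After four pulses the whole word is available on the parallel outputs at once, which is exactly the buffering role described above.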
Parallel In, Serial Out Registers
Parallel in/serial out registers have special control circuitry (sometimes a simple multiplexer suffices) that selects whether each flip flop takes its input from an external set of bits or from the previous flip flop's output.
This makes it possible to capture an external set of bits all at once (in parallel) and then send them along one by one. Since the data moves to the next flip flop on each clock pulse, the last flip flop can be used as the register's serial output, sending out the stored data one bit at a time.
In contrast to the receiving and "semimultiplexing" action of the serial in/parallel out register, the parallel in/serial out register combines a number of data lines into a single one, most likely for transmission over a digital line.
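The load-then-shift sequence can also be modeled behaviorally. The per-stage load/shift select (the multiplexer mentioned above) is reduced here to a single `load` argument; `piso_step` is our name for illustration:

```python
# Behavioral sketch of a 4-bit parallel-in/serial-out register.
# The `load` argument models the load/shift select; names are ours.

def piso_step(state, load=None):
    """One clock pulse: either load a parallel word, or shift one place.
    Returns the new state and the bit at the serial output."""
    if load is not None:
        state = list(load)          # parallel load selected
    else:
        state = [0] + state[:-1]    # shift: a 0 enters at the first stage
    return state, state[-1]

state = [0, 0, 0, 0]
state, out = piso_step(state, load=[1, 0, 1, 1])   # capture the word at once
bits = [out]
for _ in range(3):                                  # then clock it out serially
    state, out = piso_step(state)
    bits.append(out)

print(bits)  # the word leaves one bit at a time: [1, 1, 0, 1]
```

Note that with this wiring the bit nearest the output leaves first, so the serial stream is the loaded word in reverse order.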
Serial In, Serial Out Registers
This is something of a special purpose register. It is always wired so that the first flip flop gets external data, each internal flip flop's output is connected to the input of the next, and the last one is used as the output.
This register's main purpose is to delay the transmission of data in a digital data line.
The synchronous digital counter
Most situations call for a counter that outputs only one value per transition, or in other words, one whose outputs all change at the same time. In a ripple counter, when two or more bits must change to reach the final output, intermediate outputs appear, which can interfere with the correct function of other circuits that depend on the counter.
To overcome this limitation, the same clock signal is applied to every flip flop so that they all transition at the same time. The flip flop control signals now need some external logic to set them to the correct level for the flip flops to switch to the needed state.
The logic is simple: if all the previous flip flops are 1, toggle the state (if 1, go to 0; if 0, go to 1). This is easily accomplished with an N-input AND gate, where N is the number of previous flip flops, or its equivalent cascade of two-input AND gates (one extra gate per extra input, with the output of each gate feeding an input of the next). The gate's output is connected to both the J and K inputs of a JK flip flop; with a D flip flop, an extra AND gate is added whose other input is connected to the flip flop's inverted output.
The second flip flop is a special case: since there is only one previous flip flop to check, no logic is needed and it can be driven directly by the first flip flop's output.
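The toggle-when-all-previous-bits-are-1 rule can be checked with a short behavioral model in Python. This is not a gate-level simulation, and `sync_count` is our name for illustration:

```python
# Behavioral sketch of a 4-bit synchronous counter: on every clock pulse,
# bit i toggles only when all lower (previous) bits are 1, which is the
# AND-gate logic described above. Names are ours.

def sync_count(bits):
    """bits[0] is the least significant flip flop. Returns the next state;
    every flip flop switches in the same step, as with a shared clock."""
    toggle = True   # the first flip flop always toggles
    new = []
    for b in bits:
        new.append(b ^ toggle)          # toggle when all previous bits were 1
        toggle = toggle and (b == 1)    # carry the AND chain forward
    return new

state = [0, 0, 0, 0]
for _ in range(5):
    state = sync_count(state)

print(state)  # 5 pulses -> binary 0101 stored LSB-first: [1, 0, 1, 0]
```

Because the whole next state is computed before any bit changes, no intermediate counts ever appear, unlike in the ripple counter.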
Applications of Sequential Logic: Digital Counters
Counters are one of the many applications of sequential logic, with widespread use from simple digital alarm clocks to computer memory pointers. A counter is a collection of flip flops, each representing a digit in a binary number (which means each bit, depending on its position, stands for a different value).
One of the easier ways to build one is to have each flip flop control the switching of the next, and so on. This type of counter is called a ripple counter, since the switching signal propagates from one flip flop to the next like a wave.
The Ripple Counter
For a simple ripple counter, JK flip flops with both inputs tied to 1 are the best option, since they toggle state on every clock edge. For simplicity, we use the falling edge and assume all flip flops start in the Reset state.
The first FF (flip flop) in the sequence gets its input directly from the signal that needs to be counted. When that signal goes from low to high and back to low (this last transition generating a falling edge), in other words, when the input pulses, the FF changes to Set.
Since the first FF's output has not made a falling-edge transition, the second FF remains Reset.
When another pulse appears at the input, the first FF changes to Reset again, creating a falling edge at its output, which triggers the second FF to transition to the Set state.
Another pulse, and the first FF changes to Set; with no falling edge at its output, the second FF keeps its state. Yet another pulse (now four, if you have been keeping count), and the first FF goes back to Reset, producing a falling edge; the second FF also goes back to Reset, producing a falling edge at its output that triggers a third FF, making it Set.
If we assign the value of 1 to the first flip flop, 2 to the second, and 4 to the third, we have a 3-bit binary number. Remember that with binary numbers, we add the values of the positions where the bit is set to 1.
As you can see, when all the transitions have occurred, the counter ends up with the count of input pulses it has received, representing them in a binary number.
The main drawback of the ripple counter is that as the transition propagates from the first flip flop all the way to the last, intermediate numbers appear at the output, which introduces errors and confusion if a clean count is needed.
This problem is most apparent when all the bits in the counter are 1 and another input pulse is applied. In a 4-bit ripple counter, the maximum number that can be represented is 15 (1111); as the transition propagates, the intermediate numbers 14 (1110), 12 (1100), and 8 (1000) appear before the output finally settles at 0 (0000).
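That propagation can be made visible with a small behavioral model that records every intermediate state as the toggle ripples along. `ripple_pulse` and `value` are our names for illustration:

```python
# Sketch of the ripple effect: each flip flop toggles on the falling edge
# of the previous one's output, so a single input pulse can produce several
# intermediate outputs before the count settles. Names are ours.

def ripple_pulse(bits):
    """Apply one pulse to the first flip flop and record every
    intermediate state as the toggle propagates."""
    states = []
    for i in range(len(bits)):
        old = bits[i]
        bits[i] ^= 1                 # this stage toggles
        states.append(list(bits))    # output visible before the next stage reacts
        if old != 1:                 # falling edge only when the stage went 1 -> 0
            break
    return states

def value(bits):
    """Binary value of an LSB-first bit list."""
    return sum(b << i for i, b in enumerate(bits))

# All four bits set (count = 15), then one more pulse:
seen = [value(s) for s in ripple_pulse([1, 1, 1, 1])]
print(seen)  # intermediate counts appear before settling: [14, 12, 8, 0]
```

The sequence 14, 12, 8, 0 is exactly the run of spurious counts described above; the synchronous counter avoids them by clocking every flip flop at once.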
The JK Flip Flop: An Improved SR Latch
The D flip flop has some advantages over the standard latch, but in some applications the flexibility of separate control signals is too much of an advantage to pass up.
The transformation from a simple RS latch to a JK flip flop is done by connecting two latches in a master-slave configuration, as with the D flip flop: the non-inverted output of the first latch goes to the Set input of the second, and the inverted output to the Reset input.
At this point we still have just an edge-triggered SR flip flop, meaning the forbidden condition is still there. To prevent it, we extend the enabling logic of the first latch from a two-input AND gate to a three-input gate (or, alternatively, connect the output of the first AND gate to one of the inputs of a second, and use the second gate's output as the output of the now three-input arrangement).
The extra input is taken from the inverted output of the flip flop as a whole (the output of the second latch) for Set, and from the non-inverted output for Reset. The effect is that when the flip flop is Set (output = 1), the Set path gets disabled (inverted output = 0), since not all the inputs of its AND gate will be 1. This is safe because if the flip flop is already Set, there's no point in "setting" it again.
When the flip flop is Reset, the output will be 0 and will disable the Reset input; since it cannot be "more Reset" than it already is, that input can also be safely disabled.
What happens when both inputs are 1 at the same time? Since the output only changes at a clock edge, and since at any point one of the inputs is disabled, the output depends only on the input that is active.
If the flip flop is Reset and both inputs are 1, only the Set input will be enabled. At the clock edge, the flip flop will be set.
If the flip flop is Set and both inputs are 1, only the Reset input will be enabled. At the clock edge, the flip flop will be Reset.
As you can see, the JK flip flop toggles states when both inputs are 1, eliminating the forbidden state and its disadvantages while keeping the flexibility of separate control signals.
This flip flop is called JK to distinguish it from the RS latch and its forbidden combination; they work very similarly, but the hazard-free operation of the JK flip flop earned it a name of its own.
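The resulting behavior at a clock edge can be summed up in a few lines of Python. This is a behavioral model, and `jk_edge` is our name for illustration:

```python
# Behavioral sketch of a JK flip flop evaluated at a clock edge.
# The J=K=1 case toggles instead of being forbidden; names are ours.

def jk_edge(q, j, k):
    """Return the output after a clock edge, given the current output q."""
    if j and k:
        return 1 - q       # both active: toggle (no forbidden state)
    if j:
        return 1           # Set
    if k:
        return 0           # Reset
    return q               # neither: hold

assert jk_edge(0, 1, 1) == 1   # Reset + both 1 -> Set
assert jk_edge(1, 1, 1) == 0   # Set + both 1 -> Reset
assert jk_edge(1, 0, 0) == 1   # hold
```

Tying J and K together to 1 gives a toggle flip flop, which is exactly how the counters above use it.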
Master-Slave Latch or Flip Flop
There are several drawbacks to the level gating approach, one of the most serious being that while the enable signal is on, the control signals pass straight through the now transparent latch, so every change in the control signals appears at the latch's output. This is of special concern when working with the bouncy contacts of a mechanical switch.
A good way to overcome this is to connect two of these transparent latches in what is called a master-slave configuration, in which one latch is active during the high level of the enable signal and a second, which gets its input from the first, is activated by the low level of the enable signal.
The latches themselves are not changed in any way; only the enable signal is connected specially: the first latch gets the signal directly and the second gets an inverted version of it.
When the enable signal is low, the first latch is disabled by the control logic, so it holds its state (no change at the output). At the same time, the second latch is enabled by the inverted signal (~0 = 1) and so its output is the same as its data input, which is connected to the first latch. Since the first latch is not changing at this point, the output of the second latch will not change either.
The moment the enable signal goes high, the first latch is enabled and its input will be the same as its data signal due to the input logic used to overcome the forbidden combination. The second latch is at the same time disabled by the inverted enable signal (~1 = 0), so even when its data input changes due to the first latch changing its output, the final output of this configuration will not change.
As the enable signal falls back to a low level (goes to 0), the first latch is disabled and holds the last bit of data it received, while the second is now enabled and starts transferring its data input, connected to the output of the first latch, to its output. It is at this point that any data applied during the cycle is transferred to the final output of the circuit.
A master-slave latch, commonly known as a flip flop, is an edge-triggered device, meaning that it performs its full function only when the signal changes, rather than at a signal level. In this example the D flip flop is falling-edge triggered, since the output only changes when the enable signal (more commonly called the clock in edge-triggered circuits) goes from high (1) to low (0).
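The two-latch sequence can be sketched as a small behavioral model, where the master is transparent while the clock is high and the slave while it is low. The class name is ours for illustration:

```python
# Behavioral sketch of a falling-edge-triggered D flip flop built from two
# transparent latches; purely illustrative, names are ours.

class MasterSlaveD:
    def __init__(self):
        self.master = 0   # first (master) latch
        self.q = 0        # second (slave) latch = final output

    def step(self, clock, d):
        """Apply one clock level with data d; returns the output."""
        if clock == 1:
            self.master = d       # master transparent, slave holds
        else:
            self.q = self.master  # slave transparent on the low level
        return self.q

ff = MasterSlaveD()
ff.step(1, 1)   # data captured while clock is high, but output holds
assert ff.q == 0
ff.step(0, 0)   # clock falls: the captured data transfers to the output
assert ff.q == 1
```

The output only ever changes on the step where the clock goes low, which is the falling-edge behavior described above.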
The D Latch
So far we've been working with the same RS latch and all its limitations, such as the forbidden combination of inputs. By adding input logic and merging the control signals, we get a more robust latch that is less prone to the unstable conditions of the simple RS latch.
For this new latch, the Set and Reset control signals are merged into one: the data signal. To control the latch with only one signal, we have to make sure the latch is Set when the data signal goes high (1) and Reset when it goes low (0).
With only one signal, this is easily accomplished with an inverter gate: the non-inverted signal goes to the Set input and the inverted signal goes to the Reset input. The enable circuitry goes after this new input logic, so that when the latch is not enabled it maintains its current state, which would not be possible without this interface logic (one signal would always be high because of the inverter).
This simple arrangement of input logic combined with a two-gate latch makes it a very popular choice for high-density integrated circuits (ICs). Sometimes an IC's entire memory and sequential logic is implemented using latches very similar to these.
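The inverter-plus-enable arrangement can be captured behaviorally in a few lines. This is a sketch, not a gate-level model, and `d_latch` is our name for illustration:

```python
# Behavioral sketch of a gated D latch: one data line feeds Set directly
# and Reset through an inverter, after the enable gating. Names are ours.

def d_latch(q, enable, d):
    """Transparent while enabled; holds its state otherwise."""
    if enable:
        set_in, reset_in = d, 1 - d   # the inverter removes the forbidden combo
        if set_in:
            return 1
        if reset_in:
            return 0
    return q   # enable low: the AND gates block both inputs, state held

q = 0
q = d_latch(q, enable=1, d=1)   # follows the data while enabled
assert q == 1
q = d_latch(q, enable=0, d=0)   # data ignored while disabled
assert q == 1
```

Note that exactly one of `set_in` and `reset_in` is always 1, which is why the forbidden Set = Reset = 1 combination can never occur.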
The RS Latch
This circuit is called an SR latch because of its Set and Reset function and control signals.
A basic latch can be built from two NOR gates, with the output of each connected to one of the inputs of the other. The free input on each gate is used as a control signal: one to Set the output to 1, the other to Reset it to 0. When no control signal is applied, the latch keeps its previous state, because the outputs are fed back as inputs in a way that holds the output the same.
The way the gates are wired makes the output of one gate the non-inverted output of the latch, and the output of the other the inverted output.
There's one combination that breaks that relation: when both Set and Reset are 1, both outputs will be at 0. This in itself is not a cause for major concern, but when both control signals go from high to low at the same time, a race condition occurs.
It is called a race condition because whichever control signal stays high longer determines the latch's output. If the Set signal goes to 0 first, the latch will end with a 0 output (Reset); if the Reset signal goes to 0 first, the latch will end with a 1 output (Set).
The combination that produces this behavior is called a restricted or forbidden combination, because there's no way to know which signal will end up determining the latch's output, which is not acceptable in a logic design where everything is supposed to be in one state or the other with full certainty.
This kind of latch is called a transparent latch because there are no synchronization or enabling signals, which means that the output will change as soon as the signals make it change; the latch doesn't restrict the flow of data through it.
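The cross-coupled NOR pair can be simulated at the gate level by iterating the two gates until the feedback settles. This is a simplified settling model (it ignores real gate delays), and the names are ours:

```python
# Gate-level sketch of the cross-coupled NOR latch. We iterate the two
# NOR gates until the outputs stop changing (the feedback settling).
# Names are ours for illustration.

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(q, s, r):
    """Return (q, q_inverted) after the latch settles, starting from state q."""
    qn = nor(q, s)          # initial guess for the inverted output
    for _ in range(4):      # a few passes are enough to settle
        q_new = nor(r, qn)
        qn_new = nor(s, q_new)
        if (q_new, qn_new) == (q, qn):
            break
        q, qn = q_new, qn_new
    return q, qn

assert sr_latch(0, s=1, r=0) == (1, 0)   # Set
assert sr_latch(1, s=0, r=1) == (0, 1)   # Reset
assert sr_latch(1, s=0, r=0) == (1, 0)   # hold the previous state
assert sr_latch(0, s=1, r=1) == (0, 0)   # forbidden: both outputs 0
```

The last line shows the forbidden combination directly: both outputs sit at 0, breaking the inverted/non-inverted relation described above.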
Controlling the latch: Level Gated Latch
One way to gain better control over the latch is to connect a layer of AND gates between the control signals and the latch's gate inputs. One input of each AND gate is shared and connected to a new control signal: the enable signal.
This new signal controls whether the Set and Reset inputs reach the actual latch. When the enable signal is 0, no matter what the Set and Reset signals are, the latch never receives them; only 0 appears at the AND gates' outputs, so the latch just holds its previous state.
When the enable signal is 1, the outputs of the AND gates depend on the Set and Reset signals, essentially letting them pass through to the latch. As you can see, this new layer of gates and its control signal let us choose whether the latch should function (enable = 1) or just keep its previous state (enable = 0).
This kind of enable mechanism is called level gating, since the control signals only reach the latch while the gates they must pass through are enabled by the level of the enable signal.
Sequential Logic: Circuits with memory
By using a circuit's output as an input to itself, so that the next output depends not only on the input signals applied at the moment but also on the circuit's current state (the fed-back signal, itself generated by a combination of previous inputs and outputs), we can create circuits that work in steps (sequentially).
To accomplish this we first need a subcircuit that will hold an output even if the inputs change. The most basic circuit that accomplishes this is called a latch.
Quick Logic Synthesis
A multiplexer is a simple combinational circuit whose function is to select one input line to pass to its single output. The selection is made with address/select lines: the binary number they represent is the index of the input being selected.
The number of input lines available is usually a power of two (2, 4, 8, 16), and the number of select lines is the power to which 2 must be raised to obtain the number of input lines (2^n = L, where n is the number of select lines and L is the number of input lines).
One way to look at the functioning of a multiplexer is that the select lines represent a row in the circuit's truth table, and the value connected to the corresponding input line is the output of that row, which will be passed to the final output of the multiplexer if that line is selected.
As you can see, this circuit can be used to implement an arbitrary truth table of n input variables (remember in the previous equation we used n to represent the number of select lines).
An advantage of this method over discrete gates for implementing a truth table is that in practice only one integrated circuit is used; gates usually need more ICs because of their lower integration (fewer gates per IC).
Boolean equations can also be implemented by first generating their truth table, evaluating the output variable for every possible combination of input values.
One disadvantage of this method compared with discrete logic gates is that, since the multiplexer is not optimized for any particular configuration, it tends to be slower in practice; but that speed penalty only matters in high-speed, high-gate-count circuits.
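The row-selection view of the multiplexer can be modeled in a few lines of Python. This is a behavioral sketch, and `mux` is our name for illustration:

```python
# Sketch of logic synthesis with a multiplexer: the select lines pick a
# row of the truth table, and the value wired to that input line becomes
# the output. Behavioral model only; names are ours.

def mux(inputs, selects):
    """selects is a list of bits, most significant first; it indexes
    one of the 2^n input lines."""
    row = 0
    for bit in selects:
        row = (row << 1) | bit
    return inputs[row]

# Truth table for XOR of two variables (rows 00, 01, 10, 11),
# "wired" to the four input lines:
xor_table = [0, 1, 1, 0]
assert mux(xor_table, [0, 1]) == 1
assert mux(xor_table, [1, 1]) == 0
```

Changing the list wired to the inputs implements a different truth table with the exact same "circuit", which is the whole point of the technique.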
Extending Quick Synthesis: ROM Logic Synthesis
The idea of using a somewhat generic circuit to implement any truth table without modifying the underlying hardware can be extended from multiplexers to ROM modules, which increase both the number of input variables available (more address/select lines) and the number of outputs per combination (more output lines).
A ROM module is a type of memory circuit whose contents are either built in (hardwired or masked) or programmed (programmable ROM). Each address selects a cell of memory (as opposed to a single line in the multiplexer) containing the information to be passed to the output, in groups whose size is a power of two (8 [2^3] and 16 [2^4] being the most common).
This lets us implement several truth tables simultaneously by programming each output line of each address as the output of one of those tables. It's basically like having many multiplexers connected to the same select lines, each implementing a different (or even the same) truth table.
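As a sketch of the idea, the ROM below stores two truth tables at once over the same address lines. The example contents (a half adder: sum and carry) are our own choice for illustration:

```python
# Sketch of ROM-based synthesis: each address is a row shared by several
# truth tables, and each bit of the stored word is one table's output.
# The contents below are a made-up example (a half adder); names are ours.

# Two truth tables over inputs (a, b): bit 0 = sum (XOR), bit 1 = carry (AND)
rom = {
    0b00: 0b00,
    0b01: 0b01,
    0b10: 0b01,
    0b11: 0b10,
}

def read(address):
    """Return the two outputs (sum, carry) stored at this address."""
    word = rom[address]
    return word & 1, (word >> 1) & 1

assert read(0b11) == (0, 1)   # 1 + 1 = 10 in binary
assert read(0b01) == (1, 0)
```

Each output line behaves exactly like a separate multiplexer sharing the same select lines, as described above.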
The input lines available often are a power of two (2, 4, 8, 16), and the number of select lines are the power to which 2 must be elevated to obtain the number of input lines (2^n = L, where n is the number of select lines and L is the number of input lines)
One way to look at the functioning of a multiplexer is that the select lines represent a row in the circuit's truth table, and the value connected to the corresponding input line is the output of that row, which will be passed to the final output of the multiplexer if that line is selected.
As you can see, this circuit can be used to implement an arbitrary truth table of n input variables (remember in the previous equation we used n to represent the number of select lines).
An advantage of this method over discrete gates when implementing a truth table is that only one integrated circuit is needed in practice; gates usually require more ICs because of their lower integration (fewer gates per IC).
Boolean equations can also be implemented this way, by first generating their truth table: evaluate the output variable for every possible combination of input values.
One disadvantage of this method compared to discrete logic gates is that the multiplexer is not optimized for any particular configuration, so it tends to be slower in practice; this speed penalty, however, only matters for high speed and high gate count circuits.
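The multiplexer-as-truth-table idea can be sketched in a few lines: the select lines form an index, and the constant wired to the selected input line is the output of that row. The wiring values below are illustrative.

```python
# Sketch: a multiplexer as a truth-table engine. The select lines pick
# a row; the constant wired to that input line is the row's output.

def mux(select_lines, data_inputs):
    """Route the data input addressed by the select bits (MSB first) to the output."""
    index = 0
    for bit in select_lines:
        index = (index << 1) | bit
    return data_inputs[index]

# Implement Z = A AND B with a 4-to-1 mux: wire the inputs to 0, 0, 0, 1.
and_table = [0, 0, 0, 1]
print(mux([1, 1], and_table))  # row A=1, B=1 -> 1
```

Changing the wired constants (the `and_table` list) reprograms the circuit to any other two-input truth table without touching the mux itself.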
Minterms and Maxterms
Truth tables help determine the input combinations that will yield a certain output value. This is useful when we want to translate a given truth table into a boolean equation that can be much more easily manipulated and simplified before actually building a circuit, hopefully making the wiring easier and cheaper by using fewer components.
There are two complementary terms that we use to accomplish this: Minterms and Maxterms.
A minterm represents each row of the truth table that has an output of 1. To translate a truth table row into the corresponding minterm, we AND (or multiply) each of the input terms, applying a NOT operator to each variable whose state for that particular row happens to be zero.
For example, the three input truth table:
A B C Z
0 0 0 0
0 0 1 0
0 1 0 1 <
0 1 1 0
1 0 0 1 <
1 0 1 0
1 1 0 0
1 1 1 1 <
The rows marked with an arrow represent the minterms of the table. The equation for this table would be
Z = (~A * B * ~C) + (A * ~B * ~C) + (A * B * C)
Note that all three minterms go together in the same equation, since any of them can trigger an output of 1 (if the first is true [1] OR [+] the second OR the third, the output is also true [1]).
Minterms are also called the sum of products representation because of the way they end up arranged in the equation.
Maxterms are the complementary concept to minterms: they are obtained from the rows that have a zero as output. Using the above example, all the rows not marked with an arrow are the table's maxterms.
To translate from the table to a boolean equation we OR (sum) each of the terms acting as input, applying a NOT operation to any input that happens to be a 1 for that particular row. Notice how the operation (AND for minterms, OR for maxterms) and the criteria for negation (when the input variable is 1 for minterms and when it is 0 for maxterms) are opposite of each other.
The equation in Maxterms for the example would be
Z = (A + B + C) * (A + B + ~C) * (A + ~B + ~C) * (~A + B + ~C) * (~A + ~B + C)
For this particular table, you can see that the equation in maxterms has more terms. This is because there are more 0's than 1's in the output, and since each such row contributes one term to the equation, the more of them there are, the more terms the resulting equation will have.
The maxterm representation is also called a product of sums, because of the way they are arranged.
They are arranged this way because if any of the terms is 0, the output should be 0 as well, even if the other terms are 1. A term of 0 means the input combination matches one of the rows of the table that results in a 0 output, and no matter what the other terms are (any number multiplied by 0 is 0), the output must be 0 for the equation to behave as specified in the table.
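Both representations can be checked mechanically. The sketch below encodes the example table, its sum-of-products form, and its product-of-sums form, and verifies that all three agree on every row:

```python
from itertools import product

# Sketch: derive the sum-of-products (minterms) and product-of-sums
# (maxterms) for the example table and check they agree on every row.

def z(a, b, c):
    """The example truth table: output 1 on rows 010, 100, 111."""
    return int((a, b, c) in {(0, 1, 0), (1, 0, 0), (1, 1, 1)})

def sum_of_products(a, b, c):
    # Z = (~A * B * ~C) + (A * ~B * ~C) + (A * B * C)
    return ((1-a) & b & (1-c)) | (a & (1-b) & (1-c)) | (a & b & c)

def product_of_sums(a, b, c):
    # One OR term per zero-output row, negating inputs that are 1:
    # Z = (A+B+C)(A+B+~C)(A+~B+~C)(~A+B+~C)(~A+~B+C)
    return ((a | b | c) & (a | b | (1-c)) & (a | (1-b) | (1-c))
            & ((1-a) | b | (1-c)) & ((1-a) | (1-b) | c))

for row in product((0, 1), repeat=3):
    assert z(*row) == sum_of_products(*row) == product_of_sums(*row)
print("both forms match the table on all 8 rows")
```

This kind of exhaustive check is cheap for small tables (2^n rows) and is a good sanity test before simplifying an equation by hand.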
Logic Equations
Aside from representing the functioning of a logic gate with a truth table and a grammatical (with words) definition, logic equations can be used not only to represent logic gates and circuits, but also, with the help of some theorems and equivalences, to reduce the number of terms involved, simplifying the equation.
In logic equations every boolean variable involved is assigned a letter or symbol, very similar to the algebraic representation of unknown numerical values using letters; in fact, this approach to logic is called boolean algebra due to their similarity (the variables and the algebra are called boolean after George Boole, who did extensive work on the subject).
Each input variable is usually assigned one of the first letters of the alphabet (A, B, C, and so on), and the output variables are assigned the last letters (W, Y, Z, and so on; note that X was specifically left out, this is because it is used as a "Don't Care" condition in logic simplification). This assignment of letters is arbitrary, any other letter or symbol can be used instead, but it is a common way to assign them and most people working in the area follow this pattern.
The logic operations are either written in uppercase (OR, AND, NOT) or represented by their logical symbol (V for OR, ^ for AND, ~ or an overline over the variable name for NOT). Parentheses are used to order the operations and force them to be evaluated before being used in other operations, just as in algebra, where the operations in deeper levels of parentheses are evaluated first.
For example, to represent the AND operation, using A and B as input variables and Z as output, you can write
Z = A AND B
or alternatively
Z = A ^ B
For a more complex circuit where the order is not always clear (unlike mathematical algebra, where operations such as multiplication and division have higher priority, evaluation here is simply left to right), the use of parentheses is encouraged. For example:
Z = A AND B OR B AND C
could mean very different things depending on how it is interpreted, so the equivalent form
Z = (A AND B) OR (B AND C)
being much more explicit about what gets evaluated first, is preferred.
Another way to represent the operations in a logic equation is to simply use the mathematical operators that closely resemble them (+ for OR, * for AND); the NOT gate is an exception, as are most compound gates. The only compound gates with a symbol associated to them are the XOR gate (a + sign enclosed in a circle) and the XNOR gate (since it represents a logical equivalence, the = sign or the three-line equivalence sign is used).
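The parenthesized form above maps directly onto code. A minimal sketch, using Python's boolean operators to stand in for gates:

```python
# Sketch: evaluating Z = (A AND B) OR (B AND C) for a given input
# assignment, with Python's `and`/`or` standing in for the gates.

def z(a, b, c):
    return (a and b) or (b and c)

print(z(True, True, False))   # first parenthesized term is true -> True
print(z(True, False, True))   # neither term is true -> False
```

The parentheses in the code mirror the parentheses in the equation, making the evaluation order explicit in both.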
Truth Tables
In order to present every possible output of a logic gate or any digital circuit in a clear, orderly and graphical way, a truth table is used. These tables list every possible combination of input states and its corresponding output.
The first columns represent each of the input variables, and the last one (or last few, if there is more than one output) represents the output of the circuit. For a low number of variables (fewer than 4 or 5), the number of possible combinations is small enough to represent in a truth table, and all possible input combinations and their corresponding outputs can be quickly visualized.
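Enumerating the rows of a truth table is mechanical, which makes it easy to generate programmatically. A small sketch, using the AND gate purely as an example function:

```python
from itertools import product

# Sketch: printing the full truth table for a circuit of n inputs.
# The lambda below (the AND gate) is just an illustrative example.

def truth_table(fn, n_inputs):
    rows = []
    for inputs in product((0, 1), repeat=n_inputs):
        rows.append(inputs + (fn(*inputs),))
    return rows

for row in truth_table(lambda a, b: a & b, 2):
    print(*row)  # columns: A B Z
```

For n inputs there are 2^n rows, which is why tables stay practical only up to 4 or 5 variables.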
Binary Representations
A logic system is one in which everything it does can be translated to true or false, present or absent, high or low; in other words, two opposite and contrasting states, of which the system can only be in one at any given time. Digital electronics use only two voltage levels: one to represent true, 1 or high (usually 3 V or 5 V) and another to represent false, 0 or low (a connection to ground, which is at 0 V). These form the basis of any logic system.
But what does a true represent in a logic circuit? Anything you can think of; it depends on what you are modeling. One of the most common introductory digital systems is a car key alarm: if the door is open while the key is still in the ignition, a buzzer sounds, alerting you not to lock the key inside the car when you close the door.
To construct a digital circuit for this alarm, you use one input to represent whether the door is open (will be true when it is open, false when closed) and another to represent whether the key is in the ignition (will be true when in the ignition, false when not). For this circuit we want the buzzer to sound when both conditions are true: the door is open and the key is in the ignition.
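The whole alarm reduces to a single AND of the two binary inputs just described. As a minimal sketch:

```python
# Sketch of the car key alarm: the buzzer is simply the AND of the
# two binary inputs described above.

def buzzer(door_open, key_in_ignition):
    return door_open and key_in_ignition

print(buzzer(True, True))    # door open AND key in ignition -> alarm sounds
print(buzzer(True, False))   # key removed -> silent
```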
A digital system is not concerned with whether the key is only half in, in the on or off position, or whether the door is only half open or didn't close correctly; all of these situations are either forced to one state or the other, or switch between both at a very high rate, but each signal must have one of only two values.
As you can see, we have modeled a fairly complex situation (an alarm controlled by a door and a key) with only two inputs that take only two values. This is what makes digital circuits very useful: they are dependable (a half closed door counts as an open door, just like a slightly open one).
Multistage active filters: The Reactive Voltage Divider Approach
Another method of creating active filters with opamps is to build a voltage divider from a resistance and a reactance (from a capacitor). This approach has some advantages over the previously mentioned filters: such dividers are easy to build, easy to understand, and have "programmable" gain.
In the reactive voltage divider, the input is applied to the non inverting input of the opamp. This allows the opamp to be used as a simple non inverting amplifier, with the gain set by extra resistors that do not interfere with, and barely need to be considered in, the filter's operation; they are just there to set the feedback gain of the amplifier.
The signal is applied in series with one of the components and taken, in parallel with the second, at the opamp input. The choice of which component goes in series and which in parallel with the non inverting input has direct consequences for the functioning of the filter.
If the series component is a resistor, the voltage across the capacitor determines the signal to be amplified. Since the reactance of the capacitor drops with frequency, the higher the frequency, the lower the signal available at the opamp input (remember the voltage divider formula: (Vin*R2)/(R1 + R2), which here becomes (Vin*Xc)/(Xc + R), where Xc is the capacitive reactance). This configuration gives us a low pass filter.
With the capacitor being the series component, the voltage at the resistor now determines the signal available at the opamp input. As the frequency gets higher, the capacitor's reactance lowers, up to the point where it acts almost as just a wire; this means that the higher the frequency the more signal available to the opamp. This configuration gives us a high pass filter.
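The two divider ratios can be computed numerically. A sketch with illustrative component values (R = 1 kΩ, C = 100 nF), using the same scalar approximation as the formula above (reactance magnitudes only, ignoring phase):

```python
import math

# Sketch: the RC voltage divider ratios described above, with
# illustrative values (R = 1 kOhm, C = 100 nF). Scalar approximation:
# reactance magnitudes only, phase ignored, matching (Vin*Xc)/(Xc + R).

R, C = 1_000.0, 100e-9

def xc(f):
    """Capacitive reactance magnitude in ohms at frequency f (Hz)."""
    return 1.0 / (2 * math.pi * f * C)

def low_pass_ratio(f):   # output taken across the capacitor
    return xc(f) / (R + xc(f))

def high_pass_ratio(f):  # output taken across the resistor
    return R / (R + xc(f))

for f in (10.0, 1_000.0, 100_000.0):
    print(f, round(low_pass_ratio(f), 3), round(high_pass_ratio(f), 3))
```

Running this shows the low pass ratio near 1 at 10 Hz and near 0 at 100 kHz, with the high pass ratio behaving the opposite way, as the text describes.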
These two main types of voltage divider filters can be cascaded (the output of the first used as the input of the second) in a single stage (one opamp, multiple voltage dividers) or in multiple stages (one opamp per voltage divider), the latter having better characteristics due to the opamp's compensating mechanisms.
The Band Stop or Notch Filter
Another variation of the opamp filter is the band stop or notch filter, so called because it is as if you cut a notch in the frequencies that pass through the filter: all frequencies outside the notch pass, while frequencies within that range are blocked.
Just as the high pass filter is a variation of the low pass filter, changing the reactive element from input to feedback, so is the band stop filter a variation of the band pass filter, but instead of changing components we are going to change the configuration of the components.
For this circuit, the input impedance consists of a resistor and capacitor in parallel (it was in series for the band pass), and the feedback impedance will be a capacitor and resistor in series (was parallel in bandpass). As you can see, only the connections change, the components stay the same.
At low frequencies, the input impedance is dominated by the resistor, since the reactance is much higher than the resistance (the connection is in parallel, the equivalent is always lower than the lowest value). At the same low frequencies, the feedback impedance is dominated by the capacitor's reactance, since it is also high compared to the resistor (the connection is in series, the equivalent is always higher than the highest value).
The gain of the opamp, connected in an inverting amplifier configuration, is given by Zf/Zin. The input impedance Zin is very low, close to the input resistance, and the feedback impedance is very high, driven by the capacitor's reactance; this makes the ratio very high, tending towards infinity as Zf increases at lower and lower frequencies (it is theoretically infinite at DC, or 0 Hz).
One way to limit the gain, similar to what was done for the low pass filter, is to place a resistor either in series with the whole feedback connection or just across the capacitor. This prevents the extremely high reactance of the capacitor from dominating at very low frequencies; instead, the parallel combination stays closer to the lower value, in this case the resistor. This is done to ensure that the opamp does not go into saturation, because if it does it clips and distorts the signal.
At very high frequencies, the input impedance tends towards zero, since the capacitor acts as a very low value. In the feedback connection, the capacitor is also a very low value, but since there is a series resistor, the impedance is limited to that value.
Looking at the gain equation (Zf/Zin), you can see that the gain again tends towards infinity, since the input impedance becomes very low at high frequencies. To limit this, you can put a resistor in series with the original parallel combination.
At intermediate frequencies, where the input impedance and feedback impedance are very close, the gain will be close to 1.
With all this, you can see that the notch filter is the opposite of the band pass filter: the band stop filter highly amplifies signals above and below the "notch" (the middle frequencies) and leaves the intermediate frequencies unamplified rather than blocking them outright. This is in contrast with the band pass filter, which attenuated signals above and below, and also didn't amplify intermediate frequencies (gain of 1).
For all the filters discussed so far, there are other, far more efficient designs that not only block the undesired signals but also amplify the frequencies of interest.
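The frequency behavior of the basic notch (without the extra limiting resistors) can be sketched numerically. The component values below are illustrative, and the calculation uses full complex impedances; note that with equal components this analysis gives a minimum gain of about 2 at the center, a refinement of the "close to 1" idealization above.

```python
import math

# Sketch: notch-filter gain |Zf/Zin| across frequency, using complex
# impedances and illustrative values (R = 1 kOhm, C = 100 nF in both legs).

R, C = 1_000.0, 100e-9

def gain(f):
    zc = 1 / (2j * math.pi * f * C)   # capacitor impedance at f (Hz)
    z_in = (R * zc) / (R + zc)        # R parallel C at the input
    z_f = R + zc                      # R in series with C as feedback
    return abs(z_f / z_in)

center = 1 / (2 * math.pi * R * C)    # ~1.6 kHz for these values
for f in (center / 100, center, center * 100):
    print(round(f, 1), round(gain(f), 2))
```

The printout shows very large gain far below and far above the center frequency and the minimum right at it, which is the notch shape described above.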
Active Bandpass filters
When both types of filters are combined into one, that is, a capacitor and resistor in series are used at the input and a capacitor and resistor in parallel are used for the feedback, a new type of filter emerges: the bandpass filter.
To see how this works, we need to simplify the circuit to use only one element instead of two in each position, to make analysis easier. Since a capacitor driven by AC can be replaced with its capacitive reactance in ohms, we can combine it with the series resistor at the input, and with the parallel resistor in the feedback.
This gives us an input impedance (Impedance is a generalization of resistance that also includes reactances, and is also measured in ohms) and a feedback impedance, in a configuration similar to the simple inverting amplifier.
Since both impedances are frequency dependent, the gain will be frequency dependent as well. At low frequencies, the input capacitor's reactance is very high and dominates the series combination with the resistor, so the input impedance becomes very large. At the same time, the feedback capacitor will also have a very high reactance, but this time the resistor dominates because the connection is made in parallel.
The gain is defined by the ratio Rf/Rin, generalized to impedances as Zf/Zin (Z denotes impedance in most electronics literature). Since the feedback impedance is small, limited by the resistor, compared to the input impedance, which tends to infinity, the ratio will be very small and the signal will be attenuated (Zf << Zin, so the ratio is less than 1). In this case, the extremely high input impedance drives the ratio towards zero.
At very high frequencies, the input impedance is dominated by the resistance, since the capacitor's reactance is very small. The opposite effect happens at the feedback, since now the capacitor dominates with its very low reactance, which makes the impedance very low.
Checking the gain ratio Zf/Zin, we can see that now the input impedance is very low, limited by the input resistor, but the feedback impedance will be lower still, going towards zero, not being limited by anything since the capacitor is dominating the connection, so the ratio will again be very small, attenuating the signal. This time, the very small feedback impedance drives the ratio to zero.
At medium frequencies, where no single component dominates either connection, the input and feedback impedances will be very close to each other, assuming equal components. At the frequency where the series combination and the parallel combination have the same value, the gain will be 1, given by the ratio Zf/Zin with Zf = Zin; this is called the center frequency, and it is the only frequency that will not be attenuated.
The overall effect is that this circuit will attenuate both high and low frequency signals applied to it, and only pass a small range (also called band) of frequencies where both input and feedback impedances have a very similar value, hence the name bandpass filter. This is useful when you need to block noise or extra signals created within a circuit.
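The band shape can be verified numerically. The sketch below uses illustrative equal values (R = 1 kΩ, C = 100 nF) and full complex impedances; note that with equal components the complex analysis puts the peak at 0.5 rather than the idealized gain of 1, but the bandpass shape (attenuation on both sides, maximum at the center) is exactly as described.

```python
import math

# Sketch: bandpass gain |Zf/Zin| versus frequency, with illustrative
# equal values (R = 1 kOhm, C = 100 nF) in the input and feedback legs.

R, C = 1_000.0, 100e-9

def gain(f):
    zc = 1 / (2j * math.pi * f * C)   # capacitor impedance at f (Hz)
    z_in = R + zc                     # series R-C at the input
    z_f = (R * zc) / (R + zc)         # parallel R-C as feedback
    return abs(z_f / z_in)

center = 1 / (2 * math.pi * R * C)    # ~1.6 kHz for these values
for f in (center / 100, center, center * 100):
    print(round(f, 1), round(gain(f), 4))
```

Far below and far above the center the gain collapses towards zero, leaving only the narrow band around the center frequency.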
To see how this works, we need to simplify the circuit to use only one element instead of two, in order to make analysis easier. Since when AC is applied to a capacitor it can be replaced with its capacitive reactance in ohms, we can use that to combine it with the series resistor at the input, and with the parallel resistor for feedback.
This gives us an input impedance (Impedance is a generalization of resistance that also includes reactances, and is also measured in ohms) and a feedback impedance, in a configuration similar to the simple inverting amplifier.
Since both impedances are frequency dependent, the gain will be frequency dependent as well. At low frequencies, the input capacitor's reactance is very high and dominates the series combination with the resistor, so the input impedance becomes very large. At the same time, the feedback capacitor will also have a very high reactance, but this time the resistor dominates because the connection is made in parallel.
Since the gain is defined by the ratio Rf/Rin, generalized to impedances as Zf/Zin, where Z denominates impedances in most electronics literature. Since the feedback impedance is small, limited by the resistor, compared the input impedance which tends to infinity, the ratio will be very small and will attenuate the signal (Zf << Zin, so the ratio is less than 1). In this case, the extremely high input impedance drives the ratio towards zero.
At very high frequencies, the input impedance is dominated by the resistance, since the capacitor's reactance is very small. The opposite effect happens at the feedback, since now the capacitor dominates with its very low reactance, which makes the impedance very low.
Checking the gain ratio Zf/Zin, we can see that now the input impedance is very low, limited by the input resistor, but the feedback impedance will be lower still, going towards zero, not being limited by anything since the capacitor is dominating the connection, so the ratio will again be very small, attenuating the signal. This time, the very small feedback impedance drives the ratio to zero.
At medium frequencies, where no single component dominates each connection, both input and feedback impedance will be very close to each other, since they will be a very similar value, assuming equal components. At the frequency where the series combination and the parallel combination have the same value, the gain will be 1, given by the ratio Zf/Zin, where Zf = Zin; This is called the center frequency, and it is the only signal that will not be attenuated.
The overall effect is that this circuit will attenuate both high and low frequency signals applied to it, and only pass a small range (also called band) of frequencies where both input and feedback impedances have a very similar value, hence the name bandpass filter. This is useful when you need to block noise or extra signals created within a circuit.
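A quick way to see the band-pass shape is to sweep the gain magnitude numerically. This is only a sketch; the component values (R = 10 kOhm, C = 16 nF, giving a center frequency near 1 kHz) are assumed examples, not values from the text:

```python
import numpy as np

R, C = 10e3, 16e-9          # assumed example values, center near 1 kHz
f = np.logspace(0, 6, 601)  # sweep 1 Hz to 1 MHz
w = 2 * np.pi * f

Zin = R + 1 / (1j * w * C)       # series R-C at the input
Zf = R / (1 + 1j * w * R * C)    # parallel R-C in the feedback
gain = np.abs(Zf / Zin)          # magnitude of the gain ratio

f0 = 1 / (2 * np.pi * R * C)     # center frequency
print(f"center frequency ~ {f0:.0f} Hz, peak gain ~ {gain.max():.2f}")
```

The sweep confirms the behavior described above: the gain is tiny at both frequency extremes and peaks at the center frequency (at about 1/2 with equal components).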
An active high pass filter: The differentiator Revisited
For the differentiator, an input capacitor was used to block constant signals and output only the rate of change. Some examples of calculated derivatives were a constantly changing (ramp) input, which resulted in a constant output, and a sine wave, which resulted in a cosine output, which is just a phase shifted sine wave.
To understand the differentiator's use as a high pass filter, we are going to focus on this last derivative and combine with our understanding of capacitive reactance.
Starting with DC and very low frequencies, the reactance of the capacitor becomes essentially infinite, since the voltage buildup inside it blocks all current. This makes the gain of the inverting amplifier it is based on approach zero.
Vout = -Vin (Rf / Rin)
As the frequency increases, less residual charge remains in the capacitor between cycles, so it restricts the apparent current flow less. This means less reactance, which drives the gain ratio higher as the reactance approaches zero.
At very high frequencies, the capacitive reactance becomes so low that the capacitor acts essentially as a closed switch, drawing large amounts of current that the opamp must compensate for, driving it into saturation on each half-cycle of the input signal; at very high frequencies the gain approaches infinity.
To limit the gain at high frequencies, a resistor is used in series with the input capacitor. As the capacitive reactance drops towards zero, this series resistance becomes the dominant component restricting the flow of current, limiting the gain to the ratio of the feedback resistor to that input resistor, just like a simple inverting amplifier.
So the differentiator also works as a high pass filter: it is the inverse of the integrator both mathematically (the derivative is the inverse operation of the integral) and in filter behavior (it blocks the opposite end of the frequency range).
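The high-pass behavior can be sketched numerically as well. The values below (Rin = 1 kOhm, Rf = 10 kOhm, C = 100 nF) are assumed examples; the series resistor caps the high-frequency gain at Rf/Rin = 10:

```python
import numpy as np

Rin, Rf, C = 1e3, 10e3, 100e-9   # assumed example values
f = np.logspace(0, 6, 601)       # sweep 1 Hz to 1 MHz
w = 2 * np.pi * f

Zin = Rin + 1 / (1j * w * C)     # series resistor limits the gain at high f
gain = np.abs(Rf / Zin)          # gain ratio with a purely resistive feedback

print(f"gain at {f[0]:.0f} Hz: {gain[0]:.4f}, at {f[-1]:.0f} Hz: {gain[-1]:.2f}")
```

Low frequencies are attenuated (the capacitor's reactance dominates Zin) while high frequencies approach the resistor-limited gain of 10.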
A low pass active filter: The Integrator Revisited
When we first used a capacitor as the feedback element of an opamp, the workings of the circuit were only examined in terms of direct current charging the capacitor.
With your new knowledge of capacitive reactance, you can see how when the input signal is an alternating current the capacitor and its reactance control the gain of the opamp.
With low frequencies, the reactance of the capacitor is high because a large charge is stored that must be overcome each cycle in order for it to charge in the opposite polarity. Looking at the formula, you can see why this is true:
Xc = 1 / (2 pi f C)
With f approaching zero, the quotient gets larger and larger, approaching infinity.
One problem with having just a capacitor control the gain is that for low frequencies the gain can be so high as to drive the output to saturation on both polarities for each change in polarity of the input signal. To prevent this, a resistor is connected in parallel to the capacitor in order to limit the gain.
Here is how this works: when a low frequency is applied at the input, the reactance of the capacitor is extremely high. Since it is in parallel with the resistor, and the equivalent resistance of a parallel combination is always smaller than its smallest member, the much lower resistance dominates (a component "dominates" when the combination tends towards that component's value).
As the input frequency increases, the reactance of the capacitor decreases, making the parallel combination lower and lower. This lowers the ratio Rf/Rin, and with it the gain of the amplifier, given by the equation
Vout = -Vin (Rf / Rin)
With very high frequencies, Rf is dominated by the very low reactance of the capacitor, and the gain tends towards zero, so these frequencies are being blocked.
As you can see, the integrator circuit is also a low pass filter, amplifying low frequency signals and attenuating high frequency signals to the point of blocking them.
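The low-pass response mirrors the high-pass one. In this sketch the values (Rin = 1 kOhm, Rf = 10 kOhm, C = 100 nF) are assumed examples, giving a DC gain of Rf/Rin = 10:

```python
import numpy as np

Rin, Rf, C = 1e3, 10e3, 100e-9    # assumed values; DC gain Rf/Rin = 10
f = np.logspace(0, 6, 601)        # sweep 1 Hz to 1 MHz
w = 2 * np.pi * f

Zf = Rf / (1 + 1j * w * Rf * C)   # feedback resistor in parallel with C
gain = np.abs(Zf / Rin)           # gain ratio with a purely resistive input

print(f"gain at {f[0]:.0f} Hz: {gain[0]:.2f}, at {f[-1]:.0f} Hz: {gain[-1]:.4f}")
```

Low frequencies see the full resistor-limited gain, while at high frequencies the capacitor shorts out the feedback resistor and the gain collapses towards zero.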
Basic Passive Filter: a Reactive voltage divider
Looking back at the voltage divider: the voltage across the second resistor is proportional to that resistor divided by the total resistance of the divider. Since reactances are also measured in ohms, we can replace the second resistor with a capacitive reactance and still use the same calculation.
In a purely resistive voltage divider, equal resistances result in an output voltage that is half the input. With a reactance things differ slightly: at the frequency where the reactance equals the resistance, the two combine at right angles (as impedances) rather than adding directly, so about 70% of the signal (0.7071x approx, the square root of 1/2) appears at the output. This frequency, where the signal starts to be noticeably attenuated as seen from the output of the divider, is called the cutoff frequency.
The cutoff frequency is used in filters that block either higher or lower frequencies. When blocking lower frequencies, you can think of it as the point where the filter starts to conduct a large portion of the input.
This basic circuit is the basis for most passive filters. One disadvantage of this simple design is that it can only attenuate a signal; often it is more useful to amplify a given range of frequencies and block the others.
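The cutoff point is easy to check with a few lines of code. This is a sketch of a low-pass RC divider (output taken across the capacitor); the values R = 1.6 kOhm and C = 100 nF are assumed examples:

```python
import math

R, C = 1.6e3, 100e-9                 # assumed example values
fc = 1 / (2 * math.pi * R * C)       # cutoff frequency of the RC divider

def divider_ratio(f):
    """|Vout/Vin| for a low-pass RC divider: output taken across C."""
    xc = 1 / (2 * math.pi * f * C)       # capacitive reactance at f
    return xc / math.hypot(R, xc)        # reactance over total impedance magnitude

print(f"fc ~ {fc:.0f} Hz, ratio at fc = {divider_ratio(fc):.4f}")
```

At the cutoff frequency the reactance equals the resistance and the ratio comes out at 0.7071, the square root of 1/2 mentioned above.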
Capacitive Reactance
One of the properties of capacitors is their ability to hold a charge when a voltage is applied. The buildup of charge inside the capacitor generates a voltage across it, in opposition to the voltage that is driving the incoming charges, effectively resisting their flow.
This effect of resisting current flow into and out of the capacitor (an apparent flow "through" it) is called reactance, and it is measured in ohms, the same unit as resistance, since an ohm is simply a unit of opposition to electric current.
With an alternating signal applied to the capacitor, some charge builds up inside it and opposes the flow of current, but not enough to block it completely, so current appears to pass through the capacitor. There is some opposition, but not as much as with a constant current, which a fully charged capacitor blocks completely.
As the frequency (the number of times per second the signal completes a cycle of 0v > positive peak > 0v > negative peak > 0v) increases, the charge that accumulates inside the capacitor gets smaller and smaller, up to the point where virtually no charge is stored and all of the signal appears to pass through the capacitor.
With an increase in frequency, the capacitive reactance goes down in the same proportion (doubling the frequency halves the reactance). This has a more formal definition, given by the equation:
Xc = 1 / (2 pi f C)
where Xc is the capacitive reactance in ohms, f is the signal frequency in hertz, and C is the capacitance of the component in farads.
The 2 pi comes from the fact that reactance actually depends on the angular frequency of the incoming signal, but since 2 pi is constant and a higher angular frequency means a higher frequency, it is often easier to think of reactance in terms of frequency alone.
For all practical purposes, capacitive reactances follow the same rules as resistors when combined in series and parallel. This fact is particularly useful for understanding most filters, since they often rely on capacitive reactance as part of a voltage divider.
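The formula and the combination rules are simple to try out. A rough sketch (the 100 nF value is an assumed example; strictly, impedances combine as complex numbers, but for identical capacitors at one frequency the magnitudes shown here behave just like resistances):

```python
import math

def xc(f, c):
    """Capacitive reactance in ohms: Xc = 1 / (2 pi f C)."""
    return 1 / (2 * math.pi * f * c)

C = 100e-9            # 100 nF, an assumed example value
print(xc(50, C))      # low frequency: large reactance
print(xc(50e3, C))    # 1000x the frequency: 1/1000 the reactance

# for identical capacitors, reactances combine like resistances:
series = xc(50, C) + xc(50, C)                     # doubles
parallel = 1 / (1 / xc(50, C) + 1 / xc(50, C))     # halves
```

Raising the frequency by a factor of 1000 drops the reactance by exactly the same factor, matching the inverse proportionality in the formula.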
Frequency in the loop: Opamp Active Filters
In electronics, a filter is a circuit that blocks signals of certain frequencies and lets others pass. Examples include signal filters that keep signals of a certain frequency from being amplified, and supply filters that remove an alternating component from a DC power line.
Most filters rely on the ability of some components, capacitors and inductors, to change how well they conduct current at different frequencies, attenuating a signal to the point where it is no longer a problem.
One problem with this "passive" approach is that you can't chain multiple stages of filtering, because the signal gets smaller and smaller with every stage; hence "active" filters, those that incorporate an amplifying element, became a necessity.
We have already used two of the most fundamental active filters in this series, even though we didn't see them as such at the time. Now you are going to revisit them with new eyes and a better understanding of how exactly their filtering properties emerge.
Opamp Configurations: Schmitt Trigger
This opamp configuration is derived from the simple comparator circuit: set up a reference at the non inverting input and use the inverting input as the signal input. There is one main difference: this circuit uses feedback to move the reference point when the signal passes it.
The feedback goes from output to the non inverting input via a resistor.
This circuit's initial conditions are somewhat random, depending on noise at turn on and other similar factors. For simplicity, we'll assume that the output starts full positive.
At turn on, the output is positive, and the reference is set up using a voltage divider. With the output at positive, you can think of the feedback resistor as being in parallel with the top resistor of the divider for practical purposes. If both resistors are equal, the equivalent resistance is half their value; you can simplify things further by making both the top and feedback resistors twice the value of the second divider resistor, setting the reference at 0v (assuming the second resistor is connected to the negative rail).
With the reference now set at 0v and the input starting lower than that, the output remains positive. When the input goes just a bit higher than the reference, the output swings to full negative by action of the high internal gain.
With the output now negative, the feedback resistor is virtually connected to the negative rail, so the parallel combination is now with the lower resistor. Using the parallel resistor formula, you can get the equivalent resistance.
Rt = R1R2/(R1+R2)
With R2 twice that of R1, we get
Rt = 2R^2/(3R) => Rt = 2R/3
With these values now we can calculate the voltage at the reference
Vref = (Vcc+Vee)Rt / (2R + Rt)
where 2R is the top resistor, twice the value of the original lower resistor. Substituting Rt:
Vref = ((Vcc+Vee)2R/3) / (2R + 2R/3)
Some algebraic manipulation:
Vref = (Vcc+Vee)2R / 3(2R + 2R/3)
Vref = (Vcc+Vee)2R / (6R + 2R)
Vref = (Vcc+Vee)2R / 8R
Vref = (1/4)(Vcc+Vee)
This Vref is measured from the non inverting terminal to Vee; we need to express it from the non inverting terminal to ground. Since ground sits halfway between Vcc and Vee, 1/4 of the total span measured from Vee lands at 1/2 of Vee.
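You can verify this arithmetic numerically. The sketch below assumes +/-15 V rails and an arbitrary 10 kOhm base resistor; neither value comes from the text:

```python
# numeric check of the Schmitt reference derivation, assumed +/-15 V rails
Vcc, Vee = 15.0, -15.0
span = Vcc - Vee                   # total supply span (the text's "Vcc+Vee")
R = 10e3                           # assumed lower divider resistor value

Rt = (2 * R * R) / (2 * R + R)     # feedback 2R in parallel with the lower R
Vref_from_Vee = span * Rt / (2 * R + Rt)   # the derivation's span/4
Vref_from_gnd = Vref_from_Vee + Vee        # shift: ground sits span/2 above Vee

print(Vref_from_Vee, Vref_from_gnd)        # span/4 from Vee, i.e. Vee/2 from ground
```

With these numbers Vref lands at 7.5 V above Vee, which is -7.5 V relative to ground, exactly half of Vee.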
As you can see, Vref has moved towards the negative supply, so if noise at the point where the signal crosses the initial reference drives it momentarily down, the output will not swing again, because the new reference is lower than where typical noise will push the signal.
When the signal goes down all the way to 1/2 of Vee, the output swings back to positive, driving the reference voltage up along with it, so the switching action occurs only once even if the signal wiggles near the transition point.
This property is called hysteresis, and it is useful in many applications where noise becomes a problem, especially in digital systems where excessive switching from noise can mess up the logic.
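The switching behavior can be sketched as a small simulation. The thresholds below (0 V and Vee/2 = -7.5 V) follow the resistor choice above, with assumed +/-15 V rails:

```python
def schmitt(samples, upper=0.0, lower=-7.5, vcc=15.0, vee=-15.0):
    """Toy Schmitt trigger: the active threshold depends on the output state."""
    out, state = [], vcc              # assume the output starts full positive
    for v in samples:
        if state == vcc and v > upper:
            state = vee               # crossing up: reference drops to `lower`
        elif state == vee and v < lower:
            state = vcc               # crossing down: reference returns to `upper`
        out.append(state)
    return out

# a signal that wiggles near both thresholds: only two clean transitions occur
noisy = [-1.0, 0.1, -0.1, 0.2, -9.0, -7.0, -8.0]
print(schmitt(noisy))
```

Despite the wiggles around 0 V and around -7.5 V, the output changes state exactly twice, which is the hysteresis at work.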
Opamp Configurations: Window comparator
The simple comparator circuit has one inherent problem: it can only tell us if one of the input voltages is higher than the other.
But what if you needed a circuit that tells you whether a signal is within a range of values? You would need a circuit that tells you if the signal is higher than a minimum and also lower than a maximum. The problem itself hints at the solution.
For a window comparator, we need one simple comparator set up just like the previous circuit: use the non inverting as reference and the inverting input as the signal entry. This comparator will set the maximum; if the signal goes higher than the reference the output will go negative, signaling an out of range (if we consider positive to be in range).
Another comparator is set by switching the reference and signal inputs, connecting the reference to the inverting input and the signal to the non inverting. If the signal is lower than the reference, the output will go negative, again indicating an out of range; this comparator sets the minimum.
When both opamp outputs go positive, it means that the signal is below the maximum and above the minimum, in other words, the signal is within the window of voltages you have defined.
There's one thing to consider with this configuration: when the signal is out of range, one of the opamps will go full negative (a virtual connection to the negative supply) and the other full positive (a virtual connection to the positive supply). With the outputs tied together, this creates a short circuit condition that must be avoided, as it could damage the circuit or the supplies.
One way to protect against this condition is to use diodes configured as a logic AND gate: connect a diode to each opamp output (cathode at the output), tie both anodes together, and pull that common point up to the positive supply via a high value resistor.
What this does is that only when both opamps are at full positive (diode conduction blocked, effectively disconnecting the opamps from the rest of the circuit) will the output be positive, held up by the high value resistor.
When either opamp goes negative, the diode connected to it becomes forward biased, pulling the common output low; the other opamp is isolated from the output by its reverse biased diode (positive opamp output at the cathode, more negative voltage at the anode), and no short circuit condition occurs.
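The logical behavior of the two comparators and the diode AND output can be modeled in a few lines; the threshold values here are purely illustrative:

```python
# model of the window comparator with the diode-AND output
VMIN, VMAX = 2.0, 4.0            # illustrative window limits, in volts

def window_comparator(v):
    below_max = v < VMAX         # first opamp: positive while signal is under max
    above_min = v > VMIN         # second opamp: positive while signal is over min
    return below_max and above_min   # diode AND: high only if both are high

print([window_comparator(v) for v in (1.0, 3.0, 5.0)])
```

Only the middle value falls inside the window, so only it reports in-range.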
Opamp Configurations: Comparator circuit
One of the main reasons for using opamps as active devices is that their internal gain is so high that even reduced to a tiny fraction it is still enough for practical purposes. This particular configuration depends on that very high gain to swing the output to one of the extremes; the sign of the output tells us which input is more positive.
By connecting the non inverting input to a voltage source, we are setting the reference point of the comparator. Remember that since there's no feedback, and because internally the opamp is just a very high gain difference amplifier, the output will be the non inverting input voltage minus the inverting input voltage, multiplied by the internal gain (in the hundreds of thousands).
This means that a difference of just millivolts will drive the output into saturation; if the difference is positive it will swing to full positive, limited by the supply. If the difference is negative, it will swing to full negative, again limited only by the supply.
On most amplifier circuits it is not advisable to drive the opamp into saturation because it clips the signal from going any further on both ends, but in this case we are not so much interested in the signal itself but on the relationship between the signal and a reference, so this circuit serves its purpose.
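As a sketch of this behavior, model the opamp as a huge gain clipped at the rails; the gain and supply figures below are assumed, typical values, not from a specific part:

```python
GAIN = 200_000                 # assumed open-loop gain, typical order of magnitude
VCC, VEE = 15.0, -15.0         # assumed supply rails

def comparator(v_plus, v_minus):
    out = GAIN * (v_plus - v_minus)      # ideal difference amplifier
    return max(VEE, min(VCC, out))       # output limited by the supplies

print(comparator(1.001, 1.000))   # just 1 mV above the reference
print(comparator(0.999, 1.000))   # just 1 mV below the reference
```

A difference of a single millivolt, multiplied by 200,000, would be 200 V, so the output is pinned at whichever rail matches the sign of the difference.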
Opamp Configurations: Differentiator
The inverse of integration is differentiation, in other words finding the derivative, which the opamp can also perform. The derivative is defined as the rate at which a function changes.
By using an input capacitor instead of a resistor, we can accomplish this. Recall that a capacitor stores charge on its plates: when one plate starts accumulating charge, an equal charge is pushed out from the other plate, as if current were flowing through the capacitor despite the insulating layer between the plates.
The capacitor's charge builds up, creating a voltage across it that opposes the charging voltage, slowing the incoming charges and the charging process in general. When enough charge has accumulated, it completely repels the charges coming from the source; no more charge enters the capacitor, no more is pushed out on the other side, and the apparent flow of current through the capacitor stops.
When the capacitor is used as the signal input and the signal does not change (like a DC input), there is an initial apparent current through the capacitor as the voltage across it builds up. Since the amplifier's input tries to draw no current, the opamp creates a voltage at its output so that the current through the feedback resistor equals the apparent current through the capacitor.
Because the capacitor charges very quickly, driven by the applied voltage with no current limiting component like a resistor, the apparent current through it falls as fast as the opposing voltage rises; the falling current also causes the opamp to drive the output less, since there is less current to compensate for.
Applying a DC input to the differentiator thus creates a spike at the input as well as at the output while the capacitor's initial charge develops, after which the output returns to 0v as there is no more apparent current to compensate for; this is similar to taking the derivative of a constant, which is always 0.
The initial spike can be modeled mathematically as a brief period during which the input rises at a very high rate (which is what actually happens: the voltage doesn't jump instantly from 0v to the DC input voltage, it rises very rapidly towards it), so its rate of change is very high for a brief time; hence the spike.
As the input voltage stabilizes, its rate of change also falls rapidly towards zero; this is reflected at the opamp's output, where the spike drops rapidly to zero and stays there.
Now instead of applying a constant input, you can replace it with a constantly changing input.
If the input increases at a constant rate, there will be a constant apparent current through the capacitor, since the voltage building up across it is continually offset by the rising input. The opamp compensates by setting the output voltage at a level that makes the feedback resistor draw that same current, so that the opamp input does not.
Since the apparent current is constant, a constant output voltage is enough to keep the feedback resistor drawing it, so the opamp holds the output steady. This mode is very similar to using a resistor with a constant DC voltage as the input.
The same is true for a constantly decreasing input voltage; the output will just have reversed polarity. This matches the mathematical result: the derivative of a linear function is a constant.
This extends to other functions, one of the most widely used being the sine. Since the derivative of sin(x) is cos(x), which is sin(x) shifted by 90 degrees, feeding a sine wave into the differentiator produces the same waveform shifted 90 degrees; in essence, a cosine.
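The behaviors above can be sketched numerically. This is a hedged illustration rather than a circuit simulation: it assumes only the ideal relation Vout = -RC * dVin/dt, and the function name and component values (R = 10 kΩ, C = 100 nF) are example choices, not from the text.

```python
import math

def differentiator_output(vin, dt, r=10e3, c=100e-9):
    """Ideal inverting differentiator: Vout = -R*C * dVin/dt,
    approximated here with finite differences over sampled input."""
    rc = r * c
    return [-rc * (vin[i + 1] - vin[i]) / dt for i in range(len(vin) - 1)]

dt = 1e-6
t = [i * dt for i in range(1000)]

# Constant (DC) input: the derivative is zero, so the output sits at 0 V
# (the initial spike is not modeled, since the input starts already settled).
dc = [5.0] * len(t)
flat = differentiator_output(dc, dt)

# 1 kHz sine input: the output is an inverted, scaled cosine, i.e. the
# same waveform shifted 90 degrees.
f = 1e3
sine = [math.sin(2 * math.pi * f * x) for x in t]
out = differentiator_output(sine, dt)
```

The peak of the output equals RC times the input's maximum rate of change, 2*pi*f, which is why differentiators amplify high frequencies.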
Opamp Configurations - Integrator
If you replace the feedback resistor with a capacitor, you get an integrating amplifier.
In math, an integration operation is basically the area under a curve. If we have a voltage vs time graph, and the voltage remains constant, the integral of that will be the voltage times the time it stays at that level. As you can see, the longer the time the voltage remains constant, the higher the integral will be.
Back to our integrator: the input voltage is applied to the inverting input via an input resistor, which creates an input current. The opamp will try to compensate for this current by creating a voltage across the feedback element large enough to make an equal current flow, conforming to the current rule: the inputs draw virtually no current.
In the simple inverting amplifier, the feedback resistor developed a constant current at a constant voltage at the output with respect to the inverting input, tied to ground. This time however, the feedback element is a capacitor; an element that can store charge, charge that eventually develops a voltage across it as it gets more and more charged.
If we apply a constant voltage at the input, a current flows through the input resistor. The opamp tries to compensate for this current by creating a voltage across the capacitor that induces an equal current. If the capacitor is initially completely discharged, the voltage across it is 0 V and its "resistance" is infinite, since it effectively insulates its two sides so no current flows.
The gain is initially infinite, since Rfb/Rin tends to infinity when Rfb is infinite. This makes the output voltage drop quickly in a small amount of time (remember that the opamp is acting in an inverting configuration). As the capacitor starts charging, the charges entering the output-side plate push away the charges on the other plate, effectively creating a current across the capacitor, enough to counteract the input current.
As the charges build up inside the capacitor, a voltage develops across it in opposition to the output voltage, making it seem as if less voltage is applied to it and slowing down the flow of charges into the capacitor.
Fewer charges entering the capacitor means fewer charges being pushed out at the other plate. The opamp tries to compensate by lowering the output voltage further.
As you can see, the charges keep building up and the opamp keeps compensating by lowering the output voltage. At some point the opamp cannot lower the output any further, at which point it is said to be saturated.
The capacitor's rate of charge depends on the current applied to it, and that current depends on the input voltage and resistor by Ohm's law, I = V/R. The higher the voltage, the faster the capacitor charges and the faster the output falls; the lower the input resistor, the more current flows, charging the capacitor faster with the same result.
This action is the same as in the integration operation: the higher the value of the graph the higher the integral will be in the same amount of time.
If the input goes negative, the capacitor starts discharging and the output rises to compensate. If at any point the input goes to 0, the current through the input resistor is zero, and the opamp compensates by holding the output steady at the level set by the capacitor's voltage, so the capacitor neither charges nor discharges.
Similar to what happens in an integration: if the graph crosses 0 and stays there, the integral will be the sum of areas up until that point and stay there for as long as the graph stays at zero. Also, if the graph goes lower than 0 then the integral will go lower because the area will be negative relative to 0.
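The steps above can be sketched with a running sum. This is a hedged sketch assuming only the ideal relation Vout = -(1/RC) * integral of Vin dt; the function name and component values are example choices, and opamp saturation is not modeled.

```python
def integrator_output(vin, dt, r=10e3, c=1e-6):
    """Ideal inverting integrator: Vout = -(1/(R*C)) * integral of Vin dt,
    approximated by a running rectangular sum over the samples."""
    rc = r * c
    out, area = [], 0.0
    for v in vin:
        area += v * dt        # area under the input curve so far
        out.append(-area / rc)
    return out

dt = 1e-4
vin = [1.0] * 100             # 1 V constant input held for 10 ms
out = integrator_output(vin, dt)
# With RC = 10 ms, the output ramps down linearly toward -1 V, mirroring
# how the integral of a constant grows with time (inverted here).
```

A real integrator would stop ramping once the output hits the supply rail, which is the saturation point described above.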
Opamp Configurations - Summing Amplifier
Let's go back to the inverting amplifier. In its original form, we had one input resistance, one feedback resistance and one input voltage; but what happens if we have two or more inputs?
The math goes like this:
Vrin1 = Vin1 - Vinv
The inverting terminal is at the same potential as the non inverting, which is tied to ground, so:
Vrin1 = Vin1
Then separate Vrin into current times resistance, according to Ohm's law:
IinRin1 = Vin1 => Iin1 = Vin1/Rin1
But then again, we have more than one input, so for any Nth input, we have
IinRinNth = VinNth => IinNth = VinNth/RinNth
And the voltage at the feedback resistor, same as before
Vfb = Vinv - Vout
Separate by ohm's law
IfbRfb = Vinv - Vout
Ifb = (Vinv - Vout)/Rfb
Since the inputs try to draw no current, the current through the feedback resistor must equal the sum of the currents through each input resistor, by Kirchhoff's current law.
Ifb = Iin1 + Iin2 + ... + IinNth
In terms of the voltages and resistances
(Vinv - Vout)/Rfb = Vin1/Rin1 + Vin2/Rin2 + ... + VinNth/RinNth
Let's simplify to just two inputs; this can be expanded as needed, since the equation holds for any number of inputs.
(Vinv - Vout)/Rfb = Vin1/Rin1 + Vin2/Rin2
Since we are interested in the output voltage, the equation is solved for it
Vinv - Vout = (Vin1/Rin1 + Vin2/Rin2) Rfb
-Vout = (Vin1/Rin1 + Vin2/Rin2) Rfb - Vinv
(-1)(-Vout) = (-1)[(Vin1/Rin1 + Vin2/Rin2) Rfb - Vinv]
Vout = -(Vin1/Rin1 + Vin2/Rin2) Rfb + Vinv
Vout = Vinv - (Vin1/Rin1 + Vin2/Rin2) Rfb
The voltage at the inverting input will be the same as the voltage at the non inverting, which is tied to ground, so this becomes
Vout = -(Vin1/Rin1 + Vin2/Rin2) Rfb
If we assume equal resistors
Vout = -(Vin1/R + Vin2/R) R
Vout = -(Vin1 + Vin2) (R/R)
Vout = -(Vin1 + Vin2) (1)
Vout = -(Vin1 + Vin2)
Notice how the output is the inverted sum of the input voltages. This happens because we built on an inverting amplifier, so as expected the output is inverted. Note also that the ratio of input and feedback resistors sets the gain, multiplying the sum by that ratio; if all input resistances are equal, the gain is controlled by the feedback resistor alone.
Another variation of this circuit is using different input resistors for each input voltage, thus creating a weighted sum, useful in some very simple digital to analog conversion circuits.
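The final formula can be checked numerically. This sketch just evaluates Vout = -Rfb * (Vin1/Rin1 + Vin2/Rin2 + ...); the function name and resistor values are made-up examples.

```python
def summing_amp_vout(vins, rins, rfb):
    """Ideal inverting summing amplifier:
    Vout = -Rfb * (Vin1/Rin1 + Vin2/Rin2 + ...)."""
    return -rfb * sum(v / r for v, r in zip(vins, rins))

# Equal resistors: the output is the plain inverted sum, 1 V + 2 V -> -3 V.
equal = summing_amp_vout([1.0, 2.0], [10e3, 10e3], 10e3)

# Different input resistors: a weighted sum, as in a crude resistor DAC,
# where the 20k input contributes half as much as the 10k input.
weighted = summing_amp_vout([1.0, 1.0], [10e3, 20e3], 10e3)
```

Choosing input resistors in powers of two is what turns this circuit into the simple digital-to-analog converter the text mentions.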
Opamp Configurations - Difference Amplifier
So far you've learned about how to make an opamp add an inverted (negative) voltage to a reference, and to add a positive voltage by setting the reference.
Since the opamp has two inputs, one inverting and one non inverting, it should be possible to use both at the same time to add them to one another, and since one will be inverted, the effect will be a difference of voltages.
This one is a bit trickier to derive equations for since, as you already know, the voltage applied to the non-inverting input will also appear at the inverting input through the opamp's compensation.
Because a resistor voltage divider sets the voltage at the non-inverting input, the voltage at the inverting input will be expressed in terms of those resistors as well; otherwise the equations are derived the same way as for the inverting amplifier.
Let's start with the inverting amplifier equations:
Vrin = Vin - Vinv
IinRin = Vin - Vinv => Iin = (Vin - Vinv)/Rin
Same as last time, except Vinv is non zero, set by the voltage divider. Applying the current rule:
Iin = Ifb, Ifb is the feedback current.
Ifb = (Vinv - Vout)/Rfb
Vinv is not tied to ground, so it can't be simplified more at this point. We also have
Iin = Ifb => (Vin - Vinv)/Rin = (Vinv - Vout)/Rfb
Expressed in terms of Vout, this becomes
(Vin - Vinv) (Rfb/Rin) = Vinv - Vout
(Vin - Vinv) (Rfb/Rin) - Vinv = -Vout
Multiply both sides by -1
(-1)[(Vin - Vinv) (Rfb/Rin) - Vinv] = (-1)(-Vout)
-(Vin - Vinv) (Rfb/Rin) + Vinv = Vout
Vinv - (Vin - Vinv) (Rfb/Rin) = Vout
Now, since Vinv is in terms of the non inverting voltage, we have
Vninv = Vin2 R2 / (R1+R2)
And
Vinv = Vninv => Vinv = Vin2 R2 / (R1+R2)
So we can rewrite our Vout equation now in terms of both input voltages
Vinv - (Vin - Vinv) (Rfb/Rin) = Vout
[Vin2 R2 / (R1+R2)] - (Vin - [Vin2 R2 / (R1+R2)]) [Rfb/Rin] = Vout
This seems complicated enough as it is, so from here we'll simplify by making some assumptions. Let's make all resistors equal.
R = R1 = R2 = Rfb = Rin
The equation then becomes
[Vin2 R/2R] - (Vin - [Vin2 R/2R]) [R/R] = Vout
Vin2 (1/2) - (Vin - Vin2 (1/2)) [1/1] = Vout
Vin2 (1/2) - (Vin - Vin2 (1/2)) = Vout
Vin2 (1/2) - Vin + Vin2 (1/2) = Vout
Vin2 - Vin = Vout
As you can see, with our assumption of equal resistors, the output equals the difference of the applied voltages: the one applied at the non-inverting input minus the one applied at the inverting input. In practice, the relation holds as long as you use the same ratio of resistors on both sides. You can also use equal ratios other than 1:1 to set a gain; if you use different ratios you get a weighted difference.
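The full expression (before the equal-resistor simplification) can be sanity-checked numerically. The names mirror the derivation; the function name and values are arbitrary examples.

```python
def difference_amp_vout(vin, vin2, rin, rfb, r1, r2):
    """Difference amplifier, following the derivation above:
    Vinv = Vin2 * R2 / (R1 + R2)    (divider at the non-inverting input)
    Vout = Vinv - (Vin - Vinv) * (Rfb / Rin)"""
    vinv = vin2 * r2 / (r1 + r2)
    return vinv - (vin - vinv) * (rfb / rin)

# All resistors equal: the output is the plain difference Vin2 - Vin.
r = 10e3
vout = difference_amp_vout(2.0, 5.0, r, r, r, r)   # 5.0 - 2.0 = 3.0
```

Changing Rfb/Rin and R2/R1 to unequal ratios reproduces the weighted difference described above.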
Opamp Configurations - The non-inverting amplifier
A simple way to obtain non-inverting action is to keep the feedback loop in place, connect the terminal where the input used to be to ground, and feed the input signal to the non-inverting input.
This makes the opamp create an output voltage such that the current flowing through the feedback resistor network develops a voltage at the inverting input equal to the voltage at the non-inverting input.
Since we know the inputs draw virtually no current, the voltage at the inverting terminal will be defined by the voltage divider created by the feedback network.
Vinv = VoutR2 / (R1 + R2)
Since Vinv, the inverting input, is at the same potential as the non inverting input, then
Vin = VoutR2/(R1+R2)
The gain is the ratio of output voltage to input voltage
gain = Vout/Vin
A rewrite of the Vin equation gives you
Vin/Vout = R2/(R1+R2)
This last equation is the inverse of what we need, so let's get it straight:
Vin = Vout R2/(R1+R2)
Vin (R1+R2) = Vout R2
(R1+R2) = R2 (Vout/Vin)
(R1+R2)/R2 = Vout/Vin
That's an equation for gain, which can be further simplified by separating the terms
(R1/R2) + (R2/R2) = Vout/Vin
(R1/R2) + 1 = Vout/Vin
As you can see, the gain is set by the ratio of the feedback resistors, similar to the inverting amplifier. In this case, however, the gain is always at least 1. You can think of the amplifier as adding the amplified signal to the reference voltage at the non-inverting input (which also appears at the inverting input), except that here the reference is not ground (0 V).
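The gain formula derived above can be shown with a small sketch; the function name and resistor values are example choices, with R1 and R2 named as in the derivation.

```python
def noninverting_gain(r1, r2):
    """Non-inverting amplifier gain: Vout/Vin = (R1/R2) + 1,
    with R1 the feedback resistor and R2 the resistor from the
    inverting input to ground."""
    return (r1 / r2) + 1

gain = noninverting_gain(9e3, 1e3)    # 9k/1k + 1 = a gain of 10
# The gain can never drop below 1: with R1 = 0 (a plain wire from
# output to inverting input), the circuit is a unity-gain follower.
follower = noninverting_gain(0, 1e3)
```

The follower case is a useful limit: all of the output is fed back, so the output simply tracks the input.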
Opamp Configurations - Inverting amplifier
As you learned in the intro to opamps, under negative feedback the voltage difference across the inputs will be close to 0 V. This is achieved via compensation from the opamp output through the feedback loop.
The simplest way to achieve this is a configuration known as the inverting amplifier. In this configuration, the non-inverting input is tied directly to ground, and a feedback loop is made with a resistor connected between the inverting input and the output.
Another resistor is used to connect the signal source to the amplifier: since the feedback loop holds the inverting input at ground potential, connecting the source directly would effectively connect it to ground, and no signal would reach the opamp to be amplified.
The input voltage across this resistor causes a current flowing toward the inverting input. Since one of the opamp's properties is that its inputs draw virtually no current, it avoids drawing it by pulling the output toward a more negative value, creating a voltage across the feedback resistor that draws the same amount of current as the input resistor supplies.
The math behind this action:
Vrin = Vin - Vinv
The inverting terminal is at the same potential as the non inverting, which is tied to ground, so:
Vrin = Vin
Then separate Vrin into current times resistance, according to Ohm's law:
IinRin = Vin => Iin = Vin/Rin
Now you get an equation for the current in. Since the input will not draw current, we have that
Iin = Ifb, Ifb is the feedback current.
Ifb = (Vinv - Vout)/Rfb
Again, Vinv is tied to ground similar to the non inverting, so
Ifb = (0 - Vout)/Rfb => Ifb = -Vout/Rfb
We equate both currents to get an equation in terms of only voltages and resistors:
Iin = Ifb => Vin/Rin = -Vout/Rfb
The variable of interest is Vout, so rewrite it in terms of Vout
-(Vin/Rin)Rfb = Vout => Vout = -Vin(Rfb/Rin)
From this last equation you can see that the output is an inverted version of Vin multiplied by the ratio of the feedback resistor to the input resistor; increasing the input resistor lowers the gain, while increasing the feedback resistor raises it.
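As a minimal sketch of the result, with example values (the function name and resistor values are made up for illustration):

```python
def inverting_amp_vout(vin, rin, rfb):
    """Ideal inverting amplifier: Vout = -Vin * (Rfb / Rin)."""
    return -vin * (rfb / rin)

# Rin = 10k, Rfb = 100k: a gain of -10, so 0.5 V in gives -5 V out.
vout = inverting_amp_vout(0.5, 10e3, 100e3)
# Increasing Rin lowers the gain; increasing Rfb raises it.
```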
Negative Feedback
Opamps have a very high intrinsic gain, on the order of 150,000 or higher; this is called the open-loop gain. This gain is not very useful by itself since it is very unstable: it changes with temperature and supply voltage, and it requires extremely small input signals to keep the output within a useful range of voltages without clipping.
A method devised since the conception of the opamp is the use of a feedback loop to limit the gain to a much lower value (often below 100), one that depends only on external components instead of the built-in properties of the device.
The feedback is connected in such a way that any increase in the feedback signal lowers the output, similar to adding a negative quantity, hence its name.
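The stabilizing effect can be illustrated with the standard closed-loop gain relation for negative feedback, Acl = Aol / (1 + Aol*B), where B is the fraction of the output fed back by the external network. This is a general feedback-theory sketch, not a formula from the text; the function name is an example choice.

```python
def closed_loop_gain(a_ol, beta):
    """Standard negative-feedback relation: Acl = Aol / (1 + Aol*beta).
    For a large open-loop gain Aol this approaches 1/beta, which is
    set purely by the external feedback network."""
    return a_ol / (1 + a_ol * beta)

beta = 0.1   # the feedback network returns 1/10 of the output
# Doubling the (unstable) open-loop gain barely moves the closed-loop gain:
g1 = closed_loop_gain(150_000, beta)
g2 = closed_loop_gain(300_000, beta)
# Both land within a small fraction of a percent of 1/beta = 10.
```

This is why the closed-loop gain depends only on the external components: the enormous, drifting open-loop gain cancels out of the result.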
The Operational Amplifier (OpAmp)
The operational amplifier is perhaps the most versatile of amplifier circuits, used in many different applications as a gain component due to its high stability, gain, and input impedance, as well as the fact that very few external components are needed for operation.
Internally, the OpAmp is based around a transistorized differential amplifier: two transistors connected to the same emitter resistor, where one input is inverted and added to the other, essentially subtracting one from the other; the difference is amplified by a certain factor and fed to the output.
The basic opamp is a simple differential amplifier. Most commercially available opamps add extra internal circuitry to compensate for temperature changes and different supply voltages, and to trim the output to an exact 0 V when there is no difference between the inputs.
There are two characteristics that make opamps so versatile: the voltage difference across the inputs will be very close to 0 V, and the inputs draw virtually no current. These characteristics are valid only under negative feedback.
CMOS: Complementary MOSFET
Let's do a quick review of MOSFET operation.
Take a P type MOSFET in depletion mode: apply a positive voltage large enough to create a wide neutral zone and it turns off, by the action of the holes near the gate drawing electrons towards it.
Take an N type MOSFET in depletion mode: connect its gate to ground, a reservoir of electrons, and those electrons push the electrons on the other side of the gate away, as if a negative voltage were applied, creating a zone where the material loses its negative charge through the lost electrons, and it turns off.
As you can see, only one of the two MOSFET types is active in a given configuration: a positive voltage turns the P type OFF and the N type ON, while ground turns the P type ON and the N type OFF.
This interesting characteristic is employed in making digital circuits, which work with ON (1) and OFF (0) values only, where ON is represented by an almost direct connection to the positive rail and OFF by an almost direct connection to the ground rail.
A very simple circuit, called an inverter, demonstrates this. Imagine a P type MOSFET with its source connected to the positive rail and its drain connected to the drain of an N type MOSFET, whose source is in turn connected to ground.
Both MOSFETs share the same gate connection, and the output is taken at the point where the two drains meet.
When we connect the gate to the positive rail, the P type MOSFET turns off, insulating the output from the positive rail its source is connected to, while the N type MOSFET turns fully on, effectively connecting the output to the ground rail. An ON (1) input gives an OFF (0) output; in other words, the input is inverted.
On the other hand, if we connect the shared gate to ground, the P type transistor is fully ON, connecting the output to the positive rail, and the N type is fully OFF, insulating it from the ground rail. An OFF (0) input gives an ON (1) output; again, the input is inverted.
Many more combinations of these two complementary MOSFETs are possible, creating any kind of digital circuit you can imagine, including all of the microprocessors used to build computers and cell phones.
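The inverter's switching behavior described above can be sketched as a tiny logic model (True standing for the positive rail, False for ground):

```python
# Logic-level model of the inverter described above.
# True represents the positive rail (1), False represents ground (0).
def cmos_inverter(gate_input: bool) -> bool:
    p_on = not gate_input  # a positive input turns the P type OFF
    n_on = gate_input      # a positive input turns the N type ON
    assert p_on != n_on    # exactly one transistor conducts at a time
    return p_on            # P type on means the output is tied to positive

print(cmos_inverter(True), cmos_inverter(False))  # False True
```

The assertion captures the key property of the circuit: one transistor is always off, so the rails are never shorted together and the output is always firmly tied to one of them.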
The MOSFET: Metal Oxide Semiconductor Field Effect Transistor
Sometimes, even that small amount of current is too much, so a new FET design came into being: the Insulated Gate FET (IGFET). This time, the gate's P material is dropped entirely and replaced by a metal contact. The metal does not come in direct contact with the N material; instead it is insulated by a thin layer of silicon dioxide (in other words, glass).
This configuration of materials gives this type of transistor its more common name: Metal-Oxide-Semiconductor FET, or MOSFET for short.
The internal working of the MOSFET differs from that of the junction FET in action, not in principle, and there are two modes of operating a MOSFET, called depletion mode and enhancement mode.
In depletion mode, when a gate voltage is applied, the metal contact acts as a capacitor and starts charging positively. This charge draws electrons to the other side of the oxide insulator, where they recombine with the holes of the P material, resulting in a zone of neutral net charge.
This region acts in exactly the same way as the depletion zone of a reverse-biased diode, which is in effect a zone of neutral net charge inside the semiconductor. The net effect is the same: as the gate voltage is increased, more electrons are drawn towards the gate and neutralize the holes, narrowing the channel; and as the gate voltage decreases, the electrons are free to move again, the channel widens, and more current flows.
In enhancement mode, a layer of N material is built inside the P bar, in a structure similar to that of the bipolar transistor. This inner layer creates two depletion regions inside the bar, insulating the two ends from each other so no current can flow.
In P channel enhancement mode MOSFETs, the applied voltage is negative, the opposite of depletion mode. When a negative voltage is applied to the gate, it pushes electrons away from that region, leaving only holes.
In the area where the gate meets either depletion zone, the result is a net positive charge, as if in that zone the material were P type. The free electrons of the inner N type layer are also pushed away from the gate, leaving a zone of free holes that acts as P type material.
As you can see, in this mode a channel is created near the gate that connects both ends of the P material, bypassing the N middle layer and allowing current to flow. When the gate voltage is removed, the free electrons again fill the holes, and the depletion zones return to their normal neutral state, insulating the layers and preventing current flow.
Junction FET operation (JFET)
Junction FETs work with the diode junction in reverse bias; that is, a more positive voltage is applied to the cathode instead of the anode, the cathode being the gate terminal.
When a gate voltage is applied, the junction's depletion region widens by action of the reverse bias of the PN junction. With enough voltage applied, the depletion region widens enough to completely pinch off the P material bar, effectively preventing current from flowing. When the gate voltage is lowered, the depletion region shrinks and current can flow again.
Even in the absence of a control voltage at the gate, the transistor is able to conduct current through its P material body, and works like a semiconductor resistor. When a gate voltage is present, it effectively increases the resistance of the JFET's body, thus controlling the amount of current flowing through it.
Since the PN junction of the JFET is in reverse bias mode, very little current flows (only leakage current caused by heat), so it is useful in applications where loading of a previous stage can affect its behavior or there's a need to limit the amount of consumed current, as in low power applications.
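As a hedged illustration of this voltage-controlled resistance, a commonly used first-order JFET model (the Shockley equation, not derived in the text above) relates drain current to gate voltage; the Idss and pinch-off values below are hypothetical, datasheet-style numbers for an N-channel part:

```python
# First-order JFET model (Shockley equation): Id = Idss * (1 - Vgs/Vp)^2,
# valid between Vgs = 0 and the pinch-off voltage Vp.
# Idss (10mA) and Vp (-4v) are hypothetical values for an N-channel part.
def jfet_drain_current(vgs, idss=0.010, vp=-4.0):
    if vgs / vp >= 1:  # at or beyond pinch-off: the channel is fully closed
        return 0.0
    return idss * (1 - vgs / vp) ** 2

print(jfet_drain_current(0.0))   # maximum current with no gate voltage
print(jfet_drain_current(-4.0))  # pinch-off: no current flows
```

This matches the description above: full conduction with no gate voltage, smoothly increasing resistance as the gate voltage approaches pinch-off, and no current beyond it.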
Field Effect Transistors
The field effect transistor is a component that uses only one junction instead of the two found in bipolar transistors. Even though this single junction also functions like a diode, the actual layout of the materials gives it properties that allow a single-junction device to function like a transistor.
The layout of the FET is a bar of semiconductor material with a ring of oppositely doped semiconductor material around it. This transistor is called the junction field effect transistor, or JFET.
There are two types of JFET, called N-channel and P-channel. The name comes from the type of material that makes up the bar; for example, the N-channel is a bar of N material with a ring of P material around it.
The explanations here are given for P-channel JFETs; as with bipolar transistors, just reverse the polarities for N-channel JFETs.
Similar to bipolar transistors, FETs have three terminals, called Drain, Gate, and Source, which correspond in function to the Collector, Base, and Emitter of the BJT, respectively.
Capacitive Coupling: Isolating AC from DC
In order to connect an alternating signal to the transistor amplifier in such a way that the circuitry generating the signal doesn't interfere with the operation of the amplifier, and the biasing and operation of the amplifier doesn't change the way the source circuitry operates, we need a way to isolate them from each other.
Since the only component of interest that needs to be shared by both circuits is the alternating (AC) signal, we need a component that will let the AC component pass while blocking any DC from the bias circuitry or the signal generator.
As you learned in a previous lesson, a capacitor is a component that can store energy in the form of an electric field created by lumping charges close to each other but still isolated. Current cannot directly cross the insulating layer inside the capacitor, effectively blocking any direct current flow.
But something interesting happens when a capacitor is subjected to an alternating current. On the positive half of the AC wave, one side of the capacitor is filled with an inrush of electrons, while on the other side electrons are pushed out and replaced with holes, until the capacitor is fully charged and no more charge moves.
While the capacitor is charging, the number of electrons entering one plate is the same as the number being pushed out of the other, almost as if the electrons had crossed the insulating layer.
When the polarity is reversed the effect happens once again, the electrons are now drawn towards the voltage source, leaving holes in the plate of the capacitor. These holes draw the electrons that were previously pushed away, into the plate of the capacitor. The net effect is again as if the electrons crossed the insulating layer to get to the voltage source.
In practice, it is not the actual crossing of the electrons that is of use, but the movement of them on both sides of the capacitor that can be used as a current in the circuit.
Summarizing: the capacitor blocks any current that tries to directly cross the insulating layer, but it can't stop electrons from being drawn to or away from the plates, effectively letting alternating voltages through.
This effect is used to isolate the DC components on both sides while allowing the AC to flow, and is called capacitive coupling.
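The frequency dependence behind capacitive coupling can be sketched with the standard reactance formula Xc = 1/(2*pi*f*C); the capacitor value below is a hypothetical example:

```python
import math

# Capacitive reactance Xc = 1 / (2*pi*f*C): infinite at DC (f = 0)
# and falling as frequency rises -- AC passes while DC is blocked.
def capacitive_reactance(freq_hz, cap_farads):
    if freq_hz == 0:
        return math.inf  # DC: the capacitor behaves as an open circuit
    return 1 / (2 * math.pi * freq_hz * cap_farads)

c = 10e-6  # a hypothetical 10uF coupling capacitor
for f in (0, 50, 1_000, 20_000):
    print(f, capacitive_reactance(f, c))
```

At DC the capacitor looks like an open circuit, while at audio frequencies its reactance drops to a few ohms or less, which is the behavior the coupling capacitor exploits.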
Transistor Biasing
When you looked at how the common collector transistor amplifier works, you noticed that most of its behavior is controlled by the voltage applied at the base. Most of the time, the signal we want to amplify alternates between positive and negative.
Since the transistor needs at least enough voltage at the base to overcome the base-emitter junction voltage (0.7v typical for silicon transistors), any voltage below that will drive the transistor into cutoff, clipping and distorting the signal.
One way to overcome this and allow negative signals to be amplified is to set a constant voltage at the base that will be varied up and down by the alternating signal to be amplified.
The setting of that constant voltage at the base is called biasing of the transistor.
The easiest and most common way to bias a transistor is to use a two resistor voltage divider. As you saw in a previous lesson, the voltage divider has the weakness that anything connected to it will "load" the circuit, and change the voltage across the output resistor.
When used to bias the transistor, the voltage divider will barely be loaded, since the base draws very little current.
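A quick bias-point calculation following the steps above, ignoring the small base current as the text suggests (all component values are hypothetical):

```python
# Bias point of the two-resistor divider, ignoring base current.
# All component values are hypothetical.
def bias_point(vcc, r1, r2, re, vbe=0.7):
    vb = vcc * r2 / (r1 + r2)  # unloaded divider voltage at the base
    ve = vb - vbe              # emitter sits one junction drop below
    ie = ve / re               # Ohm's law across the emitter resistor
    return vb, ve, ie

vb, ve, ie = bias_point(vcc=12, r1=10_000, r2=2_200, re=1_000)
print(vb, ve, ie)  # roughly 2.16v at the base, 1.46v at the emitter, 1.46mA
```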
Common Emitter
Continuing from the emitter follower, you learned that the voltage at the emitter is roughly equal to the voltage at the base, independent of the resistor used at the emitter. What does depend on the resistor value used is the current that flows through the emitter resistor.
The current drawn by the resistor is defined by Ohm's law
Ie = Vre/Re
and since the voltage at the emitter resistor is practically the same as the base voltage, then
Ie = Vb/Re
where Vre is emitter resistor voltage, Re is emitter resistor's resistance and Vb is the base voltage.
You also know that the current comes mostly from the voltage source connected at the collector; since the base doesn't contribute much to the overall emitter current, the collector-emitter path can be considered a separate, series circuit.
As a series circuit, you know that the current flowing at any point in the circuit is the same as at any other point in the circuit. You already know the emitter current, so the current through the collector, and any resistor connected to it, will be the same as the emitter current.
The current through the collector resistor causes a voltage drop across it, defined by Ohm's law as Vrc = Ic Rc
where Vrc is the voltage across the collector resistor. Since the collector current is the same as the emitter current, you get
Vrc = Ie Rc
You also have that the emitter current is defined as
Ie = Vb/Re
All this gathering of equations is to arrive at an expression for the voltage across the collector resistor
Vrc = [Vb/Re] Rc
if you rewrite it you get
Vrc = Vb [Rc/Re].
This final equation gives us an easy definition for the voltage across the collector resistor that is independent of the beta (current gain) of the transistor, a characteristic that varies widely even among transistors from the same batch, and that also depends on the temperature of the transistor.
Now, the voltage across the collector resistor is not very useful by itself, but it can be used to obtain the voltage at the collector-resistor connection, in other words, the voltage at the collector node with respect to ground.
By Kirchhoff's voltage law, the supplied voltage is the sum of all the voltages across the components that form the closed loop. In our case, the loop is the collector resistor, the transistor itself, and the emitter resistor. You already know how to calculate the voltage across the resistors, and that the sum equals the supply voltage, so Vcc - Vrc - Vre - Vce = 0, where Vce is the voltage across the transistor's collector and emitter.
Since most of the time, the output of this circuit is connected from the transistor collector to ground, we need to know the voltage at the collector of the transistor with respect to ground, defined as
Vc = Vcc - Vrc
Or in other terms
Vc = Vre + Vce
Since it is easier to calculate Vrc than Vce, the first equation is the most widely used.
The design of a common emitter amplifier requires that you know all the major characteristics of the transistor, like the relationships between collector, emitter, and base currents, as well as other circuit properties like Kirchhoff's and Ohm's laws.
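The derivation above can be checked numerically. This sketch uses hypothetical component values and the section's simplification that the emitter voltage is practically the base voltage:

```python
# Following the derivation: Ie = Vb/Re, Vrc = Ie*Rc, Vc = Vcc - Vrc.
# Component values are hypothetical; the 0.7v junction drop is ignored
# here, as in the text's approximation that Vre ~ Vb.
def common_emitter_outputs(vcc, vb, rc, re):
    ie = vb / re    # emitter (and collector) current
    vrc = ie * rc   # drop across the collector resistor
    vc = vcc - vrc  # collector voltage with respect to ground
    return ie, vrc, vc

ie, vrc, vc = common_emitter_outputs(vcc=10, vb=1.0, rc=4_700, re=1_000)
print(ie, vrc, vc)  # roughly 1mA, 4.7v across Rc, 5.3v at the collector
```

Note that Vrc/Vb = Rc/Re = 4.7 here, the beta-independent ratio the derivation arrived at.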
Emitter follower or common collector
When you connect the emitter through a resistor to ground, connect the collector to the voltage source, and apply enough voltage at the base for conduction, the voltage across the emitter resistor will be roughly equal to the base voltage minus the base-emitter junction voltage (0.7v typical for silicon transistors).
The voltage across the emitter resistor is largely independent of the resistor value, so the circuit can be used to power loads that draw high current from a small input current, since the current through the emitter resistor, and anything connected to it, comes largely from the collector rather than the base.
This also isolates parts of a circuit from loading or otherwise interfering with each other's function. As such, these circuits are also called voltage buffers or impedance buffers.
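A minimal numeric sketch of the follower's behavior (values are hypothetical):

```python
# Emitter follower: the output tracks the base minus the ~0.7v junction
# drop, and the load current comes mostly from the collector supply.
# Values are hypothetical.
def emitter_follower(vb, re, vbe=0.7):
    ve = vb - vbe  # the emitter "follows" the base voltage
    ie = ve / re   # load current, supplied by the collector, not the base
    return ve, ie

print(emitter_follower(vb=5.0, re=100))  # about 4.3v out, 43mA of load current
```

Changing the 100 ohm load barely changes the output voltage, only the current drawn, which is exactly the buffering property described above.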
The Bipolar transistor
The bipolar transistor is a three-terminal component made from three layers of alternating semiconductor material. The layers form two PN junctions, and in some ways work like two diodes connected in series, both pointing away from the point where they connect, this point being the third terminal of the transistor.
The terminals in the transistor are called Collector, Base and Emitter.
Although similar in construction, the particular way in which the layers in a transistor are arranged give it some interesting properties.
The bipolar transistor functions as what is called a current controlled current regulator. When a small current flows through the forward biased baseemitter junction, a large current is also allowed to flow from collector to emitter.
This seems counterintuitive given the way you learned about diodes: a reverse-biased diode should not allow current through it. This emergent property of the transistor is what gives it most of its uses, since a little input current at the base produces a large output current through the transistor; in essence, it amplifies the current using an external power source.
There are two types of bipolar transistors, PNP and NPN, named after the combination of material types that make them up; they differ in the polarity of the voltages applied. The explanations given here are for NPN transistors; use the reverse polarities for PNP transistors.
Transistor Operation
With the collector connected to a more positive voltage than the emitter and no current flowing into the base, no current flows from collector to emitter, and the transistor is said to be in cutoff.
When the voltage applied to the base is slightly higher than the base-emitter junction voltage, some current starts to flow from base to emitter, as well as from collector to emitter. The current that flows through the collector is roughly the current going into the base times the current gain of the transistor (typically written as hFE or B [beta]).
Consider a transistor with its collector connected directly to the voltage source and its emitter connected to ground; by Kirchhoff's voltage law you can see that the voltage across the transistor will always be equal to the supplied voltage.
In cutoff, no current flows through the transistor, so the voltage source "sees" an infinite resistance, that is equivalent to an open switch.
With only a voltage a little over the base-emitter junction voltage, say enough for 1mA of base current to flow, and a beta of 100, we get a collector current of roughly 100mA. So in theory, if we supply 100mA of base current we should get a collector current of roughly 10A, right?
In theory, yes, that should be possible. In practice however, current flowing through any conductor generates heat, and with small transistors even a current of less than 500mA could be enough to create enough heat to burn and destroy the transistor. There's also the fact that any voltage source has a limit on the amount of current it can supply.
Let's now consider a similar circuit; instead of connecting the collector directly to the voltage source, we use a 90 ohm resistor as a load at the collector. Let's also use the base current from the previous example, 1mA, and a voltage source of 10v.
The theoretical collector current should be Ic = B x Ib = 100 x 1mA = 100mA.
With 100mA flowing through it, the resistor gets an induced voltage of 9v, close to our supply voltage; with 1v across the transistor, we account for all 10v of the supply. But what happens if we increase the base current to 2mA?
In theory, the collector current should be Ic = 100 x 2mA = 200mA.
With 200mA flowing through it, the resistor should get an induced voltage of 18v, which is clearly higher than our supply voltage. To compensate, the collector would have to sit 8v below ground potential, which it simply cannot do.
What happens in this situation is that the transistor keeps the voltage across itself as close to zero as it can, to accommodate as much as possible of the current that should be flowing through its collector. The base current beyond which the transistor cannot lower the voltage across itself any further, in other words where the transistor is fully on, is called the saturation current, and the state itself is called saturation.
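The section's example numbers can be checked with a small sketch of this active-versus-saturation behavior (the clamp at Vcc/Rc is the simplification described above, ignoring the small saturation voltage across the transistor):

```python
# Active region vs saturation: Ic = beta*Ib until the resistor drop would
# exceed the supply; past that point Ic is clamped near Vcc/Rc.
# (This ignores the small saturation voltage across the transistor.)
def collector_current(ib, beta, vcc, rc):
    ic_theoretical = beta * ib
    ic_max = vcc / rc  # all of the supply across Rc: transistor fully on
    return min(ic_theoretical, ic_max)

print(collector_current(ib=0.001, beta=100, vcc=10, rc=90))  # ~0.1A: active
print(collector_current(ib=0.002, beta=100, vcc=10, rc=90))  # clamped near 0.111A
```

The second call reproduces the text's 2mA example: beta times Ib predicts 200mA, but the 90 ohm load and the 10v supply cap the real collector current at roughly 111mA.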
The three terminals of the transistor are called the Collector, Base, and Emitter.
Although similar in construction to a pair of diodes, the particular way the layers in a transistor are arranged gives it some interesting properties.
The bipolar transistor functions as what is called a current-controlled current regulator: when a small current flows through the forward-biased base-emitter junction, a much larger current is allowed to flow from collector to emitter.
This may seem counterintuitive given what you learned about diodes: a reverse-biased junction should not allow current through it. Yet this emergent property is what gives the transistor most of its uses, since a small input current at the base produces a large output current through the transistor. In essence, it amplifies current using an external power source.
There are two types of bipolar transistors, PNP and NPN, named after the combination of material types that make them up; they differ only in the polarity of the voltages applied. The explanations given here are for NPN transistors; reverse the polarities for PNP transistors.
Transistor Operation
With the collector connected to a more positive voltage than the emitter and no current flowing into the base, no current flows from collector to emitter, and the transistor is said to be in cutoff.
When the voltage applied to the base rises slightly above the junction voltage of the base-emitter junction, some current starts to flow from base to emitter, and a larger current flows from collector to emitter. The collector current is roughly the base current times the current gain of the transistor (typically written as hfe or β, beta).
Consider a transistor with its collector connected directly to the voltage source and its emitter connected to ground. By Kirchhoff's voltage law, the voltage across the transistor will always equal the supply voltage.
In cutoff, no current flows through the transistor, so the voltage source "sees" an infinite resistance, the equivalent of an open switch.
With a base voltage just above the base-emitter junction voltage, say enough for 1mA to flow, and a beta of 100, we get a collector current of roughly 100mA. So in theory, if we supply 100mA of base current we should get a collector current of roughly 10A, right?
In theory, yes. In practice, however, current flowing through any conductor generates heat, and in a small transistor even less than 500mA can generate enough heat to destroy it. There is also the fact that any voltage source has a limit on the amount of current it can supply.
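The gain arithmetic above can be sketched in a few lines of Python. The β = 100 figure is just the one used in this example; real transistors vary widely from part to part:

```python
# Collector current from base current, using the gain relationship above:
# Ic = hfe * Ib. A gain of 100 is assumed, as in the example.
HFE = 100  # current gain (beta)

def collector_current(ib_amps):
    """Active-region collector current for a given base current."""
    return HFE * ib_amps

print(collector_current(0.001))  # 1 mA of base current -> 0.1 A (100 mA)
print(collector_current(0.100))  # 100 mA of base current -> 10 A, in theory only
```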
Let's now consider a similar circuit; instead of connecting the collector directly to the voltage source, we use a 90 ohm resistor as the collector load. Let's keep the base current from the previous example, 1mA, and a 10V supply.
The theoretical collector current should be Ic = β × Ib = 100 × 1mA = 100mA.
With 100mA flowing through it, the resistor drops 9V, close to our supply voltage; with 1V across the transistor, we account for all 10V of the supply. But what happens if we increase the base current to 2mA?
In theory, the collector current should be Ic = 100 × 2mA = 200mA.
With 200mA flowing through it, the resistor would drop 18V, clearly more than our supply voltage. To make up the difference, the collector would have to sit 8V below ground potential, which it simply cannot do.
What happens in this situation is that the transistor keeps the voltage across itself as close to zero as it can, to accommodate as much of that current as the load allows. The base current beyond which the transistor cannot lower the voltage across itself any further, in other words the point at which the transistor is fully on, is called the saturation current, and the state itself is called saturation.
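The saturation behavior can be sketched numerically. This is an idealized model, it assumes a 0 V collector-emitter drop in saturation (the worked example above keeps about 1 V across the transistor), but it shows exactly where the 200mA prediction breaks down:

```python
# Idealized model of the loaded circuit above: a 10 V supply, a 90 ohm
# collector resistor, and a gain of 100. Collector current follows
# Ic = beta * Ib until the resistor drop would exceed the supply, at
# which point the transistor saturates (modeled as an ideal 0 V drop).
BETA = 100   # current gain, as in the example
VCC = 10.0   # supply voltage, volts
RC = 90.0    # collector load resistor, ohms

def collector_current_with_load(ib_amps):
    """Collector current, clamped by what the load resistor allows."""
    ic_linear = BETA * ib_amps   # active-region prediction
    ic_max = VCC / RC            # most current the 90 ohm load permits
    return min(ic_linear, ic_max)

print(collector_current_with_load(0.001))  # 1 mA base: 0.1 A, active region
print(collector_current_with_load(0.002))  # 2 mA base: clamped near 0.111 A
```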
The Light Emitting Diode (LED)
The LED works under the same principles as the rectifier diode, but its N and P regions are built from special materials that emit a certain wavelength of light when current flows through them.
The main advantages of the LED are that it is smaller and uses less current than a traditional light bulb. Its main uses are as an indicator and as lighting for small areas.
LED circuits are very popular among electronics enthusiasts because they let you see how your circuit is working, and they can also be used to produce a number of very interesting light effects.
The Zener Diode
As you learned in the mechanics of the PN junction, when a negative voltage is applied from anode to cathode, the electrons do not have enough energy to cross the widened depletion zone and no current flows.
But when enough voltage is applied, the electrons gain enough energy to cross the barrier, knocking other electrons free along the way. This creates what is called an electron avalanche, where more and more electrons break free and conduct current. The voltage at which this phenomenon starts remains constant across the diode even if the external voltage is increased.
In most semiconductor diodes this effect is destructive, since they are not designed to handle the current produced by the avalanche.
The zener diode is designed to work in what is called the zener region, a voltage range where a small, controlled electron avalanche maintains a constant voltage across the diode.
This property gives the zener diode its main uses: as a voltage regulator, a voltage monitor, and in many other fixed-voltage applications.
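As a concrete illustration of the regulator use, here is a back-of-the-envelope sizing of the series resistor in a simple zener regulator. All the numbers (12 V input, 5.1 V zener, 20 mA load, 5 mA minimum zener current) are illustrative assumptions, not values from this article:

```python
# Sizing the series resistor for a basic zener voltage regulator.
# All values below are assumed for illustration only.
v_in = 12.0      # unregulated input voltage, volts
v_z = 5.1        # zener (regulated) voltage, volts
i_load = 0.020   # current drawn by the load, amps
i_z_min = 0.005  # keep at least this much flowing through the zener, amps

# The resistor must drop (v_in - v_z) while carrying load + zener current.
r_series = (v_in - v_z) / (i_load + i_z_min)
print(round(r_series, 1))  # 276.0 ohms (in practice, pick the nearest standard value)
```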
The Bridge Rectifier
This rectifier circuit is made from four diodes in a configuration called a bridge rectifier. On any given half of the input cycle, only two of the diodes conduct, and they are connected in such a way that the two conducting diodes route the incoming current in the same direction, rectifying it.
Let's take a closer look at what is going on in this circuit.
On the positive half of the wave, D1 is forward biased (positive applied to its anode) and conducts, while D4 is reverse biased (positive applied to its cathode) and blocks current. D2 is also reverse biased, since its cathode sits at the positive voltage while its anode is connected to the transformer's negative terminal, our 0V ground.
The return path of the current is wired to the anodes of D3 and D4. Since D4's cathode is at a higher voltage, it will not conduct. D3's cathode sits at the 0V potential of the transformer's negative terminal, lower than its anode on the return path, so D3 conducts.
On the negative half, the polarities reverse: now D2 is forward biased and D1 reverse biased (its cathode now at a more positive voltage than its anode), and current ends up flowing in the same direction as it did on the positive half of the wave.
On the return path, D3 is now blocked by the positive voltage at its cathode and does not conduct, but D4 does, completing the path to the transformer's ground.
As you can see, on both halves of the input signal the current ends up flowing in the same direction, and so it is said to be rectified.
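The walkthrough above can be summarized in a short sketch. The diode pairing follows this article's description (D1 and D3 conduct on the positive half-cycle, D2 and D4 on the negative half), and ideal diodes with no forward voltage drop are assumed to keep things simple:

```python
import math

def bridge_output(v_in):
    """Return (output voltage, conducting diode pair) for an ideal bridge."""
    if v_in >= 0:
        conducting = ("D1", "D3")  # positive half: D1 forward, D3 on the return
    else:
        conducting = ("D2", "D4")  # negative half: D2 forward, D4 on the return
    return abs(v_in), conducting

# Sample the peaks of both halves of a 10 V sine: the load always
# sees a positive voltage, only the conducting pair changes.
for t in (0.25, 0.75):
    v = 10 * math.sin(2 * math.pi * t)
    v_out, pair = bridge_output(v)
    print(f"{v:+.1f} V in -> {v_out:.1f} V out via {pair}")
```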
This bridge has applications beyond rectifying AC. It is also useful for protecting against wrong connections at a power supply's terminals, such as a battery connected in reverse, since no matter which way the input is connected, current will always flow through the load in the same direction.
Diode: Most basic semiconductor device
Enter semiconductors: The PN Junction
The basis of all semiconductor components is the PN junction, made from the union of two semiconductor materials with different electrical characteristics, most often a silicon substrate with impurities that give it an overall charge.
These two types of material are called N, for negative type with excess electrons, and P, for positive type with a lack of electrons (excess holes).
When these two materials first come into contact with each other, a portion of the extra electrons in the N-type material rush to meet the holes in the P-type material, creating a zone with neither extra electrons nor holes. This region is called the depletion zone, since the extra charges are depleted by combining with each other.
This depletion zone works as an insulator, separating the N and P layers of the material. Since one side has more electrons and the other has more holes, separated by an insulating layer, the PN junction resembles a little battery, creating a potential difference across its terminals.
The diode is just a basic two-terminal device made of a PN junction. Each terminal is given a specific name; now that the junction is part of a component, it is better to differentiate the terminals from the P and N materials themselves. The terminal connected to the P material is called the anode (A in schematics), and the terminal connected to the N material is called the cathode (K in schematics).
The PN junction shows some interesting behavior when an external voltage source is applied to it. When you connect a positive voltage to the anode with respect to the cathode, the electrons in the N material are pushed towards the depletion zone.
Forward Bias
When the external voltage is higher than the internal junction voltage, also called the forward voltage drop, the electrons gain enough energy to cross the depletion zone and meet the holes on the other side.
Reverse Bias
In the case of a negative voltage at the anode with respect to the cathode, the electrons in the cathode are drawn towards the positive terminal of the voltage source. The same happens with the holes in the anode, which are drawn towards the electrons in the negative terminal.
This has the overall effect of drawing the internal charges away from the depletion zone, effectively widening it, so the electrons do not have enough energy to cross it.
In either bias mode, once a diode starts to conduct, the voltage across it remains roughly constant even as the external applied voltage increases.
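That constant-drop behavior is often modeled very simply. Here is a minimal sketch assuming the commonly quoted 0.7 V forward drop for a silicon diode (the exact figure varies by device and current):

```python
# Idealized "constant voltage once conducting" diode model.
# A 0.7 V silicon forward drop is assumed for illustration.
V_F = 0.7  # typical silicon forward voltage, volts

def voltage_across_diode(v_applied):
    """Forward-biased diode: blocks below V_F, then clamps near V_F."""
    return v_applied if v_applied < V_F else V_F

print(voltage_across_diode(0.3))   # below the junction voltage: not conducting
print(voltage_across_diode(5.0))   # conducting: the drop stays at 0.7
print(voltage_across_diode(12.0))  # still 0.7, even as the source rises
```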
Diode varieties and common configurations
Diodes have a wide range of uses depending on their structure and on which of their characteristics is exploited.
The simple rectifier diode employs the basic properties of the PN junction, especially the fact that it only conducts when forward biased, to create useful circuits. Its most important use is, as its name implies, rectifying alternating current into direct current.
The most basic of these rectifier circuits is the half wave rectifier, which consists of a diode in series with the source. The diode blocks the negative half of the input wave from reaching the load, creating a pulsating but one-way flow of current. An improvement to this, and to any rectifier circuit, is an output capacitor that charges close to the highest peak of the signal and keeps the output DC from varying as much.
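How much that output capacitor smooths things can be estimated with a standard rule of thumb: in a half wave rectifier the capacitor is topped up once per cycle, so the ripple is roughly the load current divided by frequency times capacitance. The values below (60 Hz mains, 100 mA load, 1000 µF capacitor) are illustrative assumptions:

```python
# Rough peak-to-peak ripple estimate for a half wave rectifier's
# output capacitor: ripple ~= I_load / (f * C).
# All values below are assumed for illustration only.
f = 60.0        # line frequency, Hz
i_load = 0.100  # load current, amps
c = 1000e-6     # filter capacitor, farads

ripple = i_load / (f * c)
print(round(ripple, 2))  # 1.67 volts peak-to-peak, quite large: a bigger
                         # capacitor or a full wave rectifier would help
```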
Using only one half of the incoming wave is not very efficient, since the energy from the other half is not used. An improved rectifier circuit is the full wave rectifier.
In the simplest form of the full wave rectifier, a special transformer with a central tap is used. In this configuration, the full wave rectifier functions as two half wave rectifiers working each on a different half of the input wave.
Using a center tap transformer is not very practical for small applications, since tapped transformers tend to be bigger than two-terminal ones. But without the center tap we no longer have a common return path for the current, so we need a way to make sure that whatever the input polarity, the output "sees" current flowing in the same direction.