KI7TU's Reference Page -- Tips
On this page, I want to share some hard-won wisdom that I've gained
over the years.
Basically, it is stuff that I hope will save you from making the
same mistakes that I made, thus making your hobby a more
pleasant and rewarding experience.
It may seem a bit rambling and disjointed, and frankly, it is.
Maybe sometime I'll find the time to get it a bit more organized,
but for now, my main goal is just to get the info out there.
Since this is aimed at newcomers, and it was something I had a lot
of confusion about when I was a newbie, I'll mention it
right up front. The frequency specs in data sheets are typically
the MAXIMUM frequency that the device is "guaranteed" to
work at. They will (generally) quite happily work at lower frequencies.
Just as a "for instance", suppose you have a circuit where
you need to amplify an audio tone at around 400Hz to go to a speaker.
You dig through your "junk box" and find a transistor,
and look it up and the spec sheet says it's a "200 MHz"
transistor, but the other specs (power handling, voltage, gain,
and so on) will work in your circuit. Will it work? You betcha
(assuming that it hasn't been "blown"). If you were
ordering a new transistor from somewhere, you might want to look
at ones with a frequency rating closer to what you actually need,
just because they're likely to be less expensive, but in general
it's fine to use transistors and ICs several orders of magnitude
below their rated maximum.
The one exception to this is that there have been some microprocessor
designs that have a minimum frequency.
If this is true for a given part, the data sheet should clearly
state the "minimum operating frequency". (I should
mention that some will state only the minimum frequency that
the device is tested at, but that it should work at lower
frequencies.)
For "linear" devices, like transistors and op-amps,
on the high end of the frequency range performance can fall off
quite rapidly as you go beyond the stated maximum. For things
like microprocessors, it gets a bit chancy: oftentimes a
manufacturer, through extensive testing, will find that there
are one or two instructions in the microprocessor's instruction
set that won't work beyond the maximum stated frequency. (They
try to make the maximum stated frequency as high as possible,
because the higher it is the higher the selling price for the parts.)
Note, too, that the frequency rating is at the temperature extremes
listed in the data sheet. They will often work at somewhat higher
speeds if they are kept at "room temperature".
A voltage regulator basically allows you to "feed" a
circuit with a higher, and possibly varying, voltage, while providing
a nice, steady voltage for the remainder of the circuit to consume.
Or at least, that's the goal.
I can remember when the three-terminal regulators, such as the 7805,
were a new thing.
Before that, you had to use a bunch of circuitry to achieve a steady
voltage, which is important for digital electronics.
There are a few key parameters on these things (besides the
output voltage), and a couple of
things you need to know if you're going to put together your own
circuit involving one and have it work.
The key parameters are:
- Current capacity - how much current you can draw through it
- Dropout voltage - the minimum amount by which the input voltage
must exceed the output voltage for the regulator to hold its rated
output (and note that if you just rectify AC, you'll need some
capacitance to keep the input above that minimum when the AC is
near zero volts)
- Power dissipation - essentially the heat that is generated, which
is the difference between input and output voltage times the
current being drawn. It will determine whether or not you need
to add a heat sink.
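The dropout and dissipation parameters above are easy to sanity-check with a little arithmetic. Here's a hedged sketch in Python; the 2V dropout and 500mA load are made-up illustration values, not from any particular data sheet:

```python
# Rough linear-regulator sanity check (illustrative numbers):
# a 7805-style 5V regulator fed from 12V.

def regulator_check(v_in, v_out, dropout, i_load):
    """Return (input_ok, watts_dissipated) for a linear regulator."""
    input_ok = v_in >= v_out + dropout   # input must exceed Vout + dropout
    watts = (v_in - v_out) * i_load      # heat = voltage drop * current
    return input_ok, watts

ok, watts = regulator_check(v_in=12.0, v_out=5.0, dropout=2.0, i_load=0.5)
print(ok, round(watts, 2))   # 12V in, 5V out at 500mA -> 3.5W of heat
```

At 3.5W, a regulator in a bare TO-220 package is going to run very hot; that's heat-sink territory.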
Also, a very important consideration is bypass capacitors.
They can be easily overlooked, even though for most voltage regulators,
the data sheet does mention them.
A good rule of thumb is to provide a 0.01µF ceramic disc capacitor
(to ground)
on each side of the regulator, placed physically very close to the
regulator (like within a quarter inch), plus a larger, say 10µF
electrolytic on the input, also fairly close, and maybe another
10µF electrolytic on the output.
At first glance, those little ceramic ones may seem redundant. Believe me,
they are NOT redundant.
The reason is that electrolytic caps have a fairly high
internal inductance, and so can't react to high frequency spikes as well
as a much smaller valued ceramic cap can. The small ceramic, though,
doesn't store enough energy to be able to deal with the lower frequency
variations.
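You can put rough numbers on this. Every real capacitor has some internal series inductance (ESL), and above its self-resonant frequency it stops acting like a capacitor at all. The ESL figures below are illustrative guesses, not measurements of any particular part:

```python
import math

# Self-resonant frequency of a capacitor with some series inductance:
#   f = 1 / (2 * pi * sqrt(L * C))
# ESL values here are rough, illustrative guesses.

def self_resonance_hz(c_farads, esl_henries):
    return 1.0 / (2.0 * math.pi * math.sqrt(c_farads * esl_henries))

electrolytic = self_resonance_hz(10e-6, 30e-9)   # 10uF with ~30nH ESL
ceramic      = self_resonance_hz(0.01e-6, 5e-9)  # 0.01uF with ~5nH ESL

print(f"{electrolytic/1e3:.0f} kHz")  # a few hundred kHz
print(f"{ceramic/1e6:.1f} MHz")       # tens of MHz
```

With numbers like these, a 2 MHz spike is above the electrolytic's useful range, but still well within the ceramic's.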
I found all this out the hard way, back in the 1970s, when I designed
my first computer, and it was acting in a very bizarre manner. With
the help of a (partially working) oscilloscope, I tracked down the
problem to an AC signal, about 2 MHz and about 7 volts peak-to-peak
riding on the 5 volt lines. The addition of bypass capacitors
eliminated the AC "noise".
Note that there are a few voltage regulator chips around that in
their data sheet claim that they don't need the extra bypass, but
even so, it doesn't hurt, and the caps aren't particularly expensive.
(I tend to buy the 0.01µF caps 100 at a time, which makes them
even less expensive.)
It can be very handy to have a few LM317 "adjustable"
regulators around. They can be had from most Radio Shack stores,
though for the price of one there you can get several at one of the
mail order parts houses. The output voltage is set by the ratio of
a couple of resistors on the output side. See the data sheet for
the formulas.
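For reference, the standard LM317 data-sheet formula works out like this (R1 is the resistor from the output to the ADJ pin, R2 goes from ADJ to ground; the 240Ω/720Ω pair is just a common example):

```python
# LM317 output voltage from the standard data-sheet formula:
#   Vout = 1.25V * (1 + R2/R1) + Iadj * R2
# where Iadj (~50uA) is usually small enough to ignore.

def lm317_vout(r1_ohms, r2_ohms, v_ref=1.25, i_adj=50e-6):
    return v_ref * (1.0 + r2_ohms / r1_ohms) + i_adj * r2_ohms

# Example: R1 = 240 ohms, R2 = 720 ohms -> about 5V out
print(round(lm317_vout(240, 720), 2))   # 5.04
```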
I noted above (under regulators) that I put a 0.01µF ceramic
capacitor very close to both the input and output of a regulator.
When I'm designing a digital circuit, I generally place one very
close (again, within about a quarter inch, or for those who like
metric, within about 6mm) to every "power" pin on every
IC.
(Some ICs have two or more power pins, and so get two or more
bypass caps.)
Admittedly, for something like a Pentium® processor, with
dozens of power pins (and a hundred or more ground pins), this
would be overkill, but I have yet to design a circuit that
actually uses one of these behemoths.
The reason for all those bypass caps is that digital ICs actually
"switch" at very high speeds, and that can put a lot
of noise onto the power bus.
The bypass caps can do a lot to
smooth out these tiny power surges.
Virtually all of the so-called "passive" devices
(resistors, capacitors, and inductors)
have
a tolerance specification.
For instance, you may see a "5%" resistor, or a
"2%" resistor (sometimes listed as "±5%"
and "±2%", respectively).
But what does this mean?
What it means is that the manufacturer has tested the part to
be within that tolerance of the marked value.
For instance, a 5% 100Ω resistor can be as low as 95Ω,
or as high as 105Ω, or anything in between.
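In other words, the tolerance just brackets the marked value. A couple of lines of Python make the arithmetic explicit:

```python
# Tolerance brackets the marked value: a 5% 100-ohm resistor is
# guaranteed to measure somewhere between 95 and 105 ohms.

def tolerance_range(nominal_ohms, tolerance_pct):
    delta = nominal_ohms * tolerance_pct / 100.0
    return nominal_ohms - delta, nominal_ohms + delta

print(tolerance_range(100, 5))   # (95.0, 105.0)
print(tolerance_range(100, 2))   # (98.0, 102.0)
```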
One of the little "gotchas" that sometimes bites
people who haven't run into it before is thinking that
"OK, I could take a batch of 5% resistors, and sort them
out to find one that's within 2% of the specified value".
The problem is that for a lot of parts, the manufacturer
sorts the parts before marking them. For our example of
a 100 Ω resistor, ones that measure between 99 and 101 Ω
will be marked as 1%, ones between 98 and 102 Ω (but outside
the 1% band) will be marked 2%, ones between 95 and 105 Ω
(but outside the 2% band) will be marked 5%, and ones between
90 and 110 Ω (but outside the 5% band) will be marked as 10%.
(Today's manufacturing techniques have virtually eliminated
the 20% classification for resistors.) This isn't always true,
but it is done often enough that you need to be aware of it.
Most "car batteries" are conventional lead-acid batteries.
Although there's a lot of wisdom about them, the first, and probably
foremost, bit of wisdom is that the so-called 12 volt battery is
actually around 13.8 volts. The reason for this is actually
historical. These batteries have been around since the late
19th century.
Individual cells for a lead-acid battery are a bit over 2 volts each.
Back in those days, volt meters that could accurately measure to
a fraction of a volt were very expensive, so they just rounded
off to 2 volts.
Cars in the early 20th century had batteries with
3 cells, which were called 6 volt batteries.
In about the middle of the 20th century, manufacturers went
to batteries with 6 cells, which were called "12 volt"
batteries.
Near the end of the 20th century, volt meters that
are accurate to tenths, and even hundredths, of a volt became
common and affordable, but the name "12 volt battery"
has stuck around.
There are two major classes of the conventional wet-cell
lead-acid battery: "starting" batteries and "deep-cycle"
batteries. The starting batteries are designed to supply a huge amount
of power (often a couple hundred amps) for a few seconds to start
a gasoline or diesel engine, and then be quickly recharged. They're
also designed to be able to do this at fairly low temperatures.
Starting batteries are not designed to be deeply discharged, that is,
to have most of their stored energy drawn out at a slower rate. Doing
this can greatly shorten the useful life of the battery.
Deep cycle batteries, on the other hand, are designed to have a
fairly high percentage of the energy they contain drawn out of them,
albeit at a slower rate. They can be used for things like the
"house" battery on an RV, operating an electric golf cart,
or operating a trolling motor. They are not as affected by being
deeply discharged as are starting batteries, but still running them
"flat" can affect the life expectancy.
Another aspect of a car's electrical system to be aware of is that
there can be some substantial voltage spikes floating around at times,
especially when starting the engine. The starter motor is a huge
DC inductor, and when the current through it is interrupted, the
energy stored in its magnetic field can come out as a spike into
the electrical system of the car.
Any electronic device that is going to be connected to a car's
electrical system should be designed, at a minimum, to be able to
handle a spike of 80 or 90 volts, though that spike only lasts a
fraction of a second.
Heavy power consumers, such as ham radios, often have dedicated
wires going to the car's battery. It is customary to put fuses in
both the positive and negative wires at both the battery end and
at the radio end.
The fuses at the radio end protect the radio, and the ones at the
battery end will (hopefully) prevent a fire if there's a short in
the wires.
The reason for fusing the negative side is that there are things
that can go wrong with the car's electrical system and the power
from the starter can find a "return path" through the antenna and
radio to the battery -- better to blow a couple of fuses than to
fry the radio and/or antenna.
While we're on the topic, many, but not all, mobile radios have a
diode to protect them against being connected to the power supply
backwards. This diode will cause the fuse to blow, hopefully before
the radio gets fried.
While we're talking about wiring radios, I should also mention that
many mobile radios don't disconnect the power amplifier when the
front panel switch is "turned off". So it's best to wire
the radios to the battery through an appropriately sized relay.
Unfortunately, math scares a lot of folks.
(The big fancy word for this is "numerophobia".)
To be successful around electronics, especially if you want to
either design something from scratch, or even just modify an
existing design, you're probably going to have to do some math.
However, don't get too scared!
For the hobbyist, a grasp of basic algebra is sufficient.
A decent calculator is useful, though the calculator programs on
most computers will suffice.
Sometimes you'll need to know a few fractions (typically 1/2 and
1/4), and decimal numbers.
Addition, subtraction, multiplication, division, and
being able to understand sines and
cosines (such as "sin(x)" or "cos(x)") should be
enough. OK, knowing what is meant by "squared" and
"square root" can also be helpful.
Enough, that is, if you are willing to "take my word for it" when
someone says "the more advanced math shows". (Often the
more advanced math is some sort of calculus, which engineers need
to know, though to be honest, in my career I had to resort to actually
doing calculus only on rare occasions.)
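To make that concrete, here's the kind of algebra you'll actually be doing most of the time: Ohm's law and the power formula, rearranged as needed. The 12V/470Ω example values are just for illustration:

```python
# Ohm's law (V = I * R) and power (P = V * I), rearranged as needed.

def current(v_volts, r_ohms):       # I = V / R
    return v_volts / r_ohms

def power(v_volts, i_amps):         # P = V * I
    return v_volts * i_amps

i = current(12.0, 470.0)            # 12V across a 470-ohm resistor
print(round(i * 1000, 1), "mA")     # 25.5 mA
print(round(power(12.0, i), 2), "W")  # 0.31 W
```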
Resistors come in many different types and many different power ratings,
from teeny 1/100 watt parts that take a magnifying glass to see to big
multi-kilowatt units that take a fork lift to move.
As a hobbyist, though, the most common ones you'll use are 1/4 and 1/2
watt, and sometimes either 1/8 or 1/10 watt.
Common resistors in this range are typically color coded as to their
value and tolerance.
If you have a pile of resistors, and are going through them looking
for a particular one, it's a wise idea to have that digital volt meter
handy so that you can verify that the one you've found is actually the
one you want.
Even after about 45 years of looking at color bands, I
still make mistakes.
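If you'd rather let the computer do the decoding, the standard band-to-digit mapping is easy to capture. This little sketch handles ordinary 4-band parts (two digit bands plus a multiplier); as the text says, verify anything important with the meter:

```python
# Decoding a standard 4-band resistor color code: the first two bands
# are digits, the third is the power-of-ten multiplier.

COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]

def decode_4band(band1, band2, multiplier):
    value = COLORS.index(band1) * 10 + COLORS.index(band2)
    return value * 10 ** COLORS.index(multiplier)

print(decode_4band("brown", "black", "red"))      # 1000 (1 kilohm)
print(decode_4band("yellow", "violet", "orange")) # 47000 (47 kilohms)
```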
Speaking of "piles" of resistors, Radio Shack and a few
other suppliers sell assortments of resistors, usually 1/4 watt
resistors, in the $10 to $15 range. Having one of these
on hand can save a lot of trips to the store, and can allow
you to quickly substitute a different value when you find your circuit
doesn't quite work as predicted. Also remember that in most situations
you can combine a few resistors to get a needed value.
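The combining rules are simple: resistors in series add, and resistors in parallel combine as the reciprocal of the sum of reciprocals. A quick sketch:

```python
# Combining resistors to hit a value you don't have on hand.

def series(*ohms):
    """Resistors in series simply add."""
    return sum(ohms)

def parallel(*ohms):
    """Resistors in parallel: reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / r for r in ohms)

print(series(100, 47))            # 147 ohms
print(round(parallel(220, 220)))  # 110 ohms (two equal values halve)
```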
When you are ordering resistors from one of the mail-order places,
be sure to look at the quantity pricing. If you happen to need,
say, 15 of one value, it may be just a few pennies more (and sometimes
a few pennies less!) to order 100. That's a good way to get a
stash of them in your "junk box".
While we're on the subject of resistors, it is worth mentioning that
certain types of resistors do "drift" in value with age.
I recall reading one author's comments in one of the professional
magazines over 20 years ago that he'd gone through a batch of resistors
that he'd had around for maybe 20 years, and that the carbon
composition resistors had drifted so much that he referred to them
as carbon decomposition resistors. The moral of the story
is that if you have ones that are very old, or of unknown age,
it's probably worth taking a few seconds and checking them with
the meter.
Soldering skills can be an essential part of the electronics hobby.
You might find that my comments have some validity, since they're
based on roughly 45 years of experience.
Getting a good solder joint is something that does take some practice.
None of us got it right the first few times we tried, so don't get
discouraged. Try to find some scraps to practice on before getting
going on any sort of "major" project. Several vendors
sell "learning to solder" kits if you don't have access
to any "scraps" that you can practice on.
I have found that you're less likely to damage parts, and especially
damage printed circuit boards, if you use a higher wattage iron
(and thus a higher temperature) with a very small tip than if you use one
with a lower wattage rating (and thus lower temperature). This is
contrary to common opinion that you should use a lower temperature
iron on small parts. The basic reason is that with the higher wattage
iron, you can heat up the contacts (and solder) far more quickly, thus
transferring LESS total heat to the parts and board than if you were using
a cooler iron which takes much longer to heat up the work.
I keep a one-pound spool of very fine gauge solder on the bench close
to my soldering iron, and it gets used for everything from very fine
work to very heavy work.
One trick that I learned many years ago was to cut off a piece of
solder about six to eight inches long, and wrap it around the end of
my index finger on the hand that isn't going to be holding the iron
while I'm working. I leave the last couple of inches straight
(or slightly curved), then
I can use the other fingers (and my thumb) to hold tweezers or small
pliers to hold a part, and use my index finger to bring in the solder
once the work has been heated with the soldering iron. True, it
takes some practice, but it can allow you to work a little easier.
(I unwrap the solder between soldering joints to maintain the length of
solder between the work and my finger.)
By the way, it does mean that you end up wasting a half inch or so
at the end of piece of solder, but when you buy it by the pound,
it's not too expensive, and besides, you can save those pieces and
use tweezers to feed them into the joint when you're working on
something where you don't need the "extra hand".
Also be sure to check out my comments on soldering irons and
desoldering on the Tools page.
It does bear repeating that having the surfaces to be soldered bright
and shiny will make it a lot easier to get a good solder joint.
This is not an original idea, and I don't recall exactly where I
first encountered it, though it was likely in the directions for
one of the many kits I've built over the years.
One of the first steps that you should always do when building a
kit is to check that you have all of the parts.
For electronics kits, this means that you have to identify and
count a lot of small parts.
When they come loose in a bag, this can be a bit of an effort.
If you just plop them onto the workbench, you'll have to go
through them again when you're actually putting the thing together
and find the parts a second time. There's a better way, though.
Take a piece of corrugated cardboard, and mark on it spaces for
each of the values of the parts, or at least all of the small
parts that have leads on them.
Then, as you identify each of the parts, slip it into the corrugation
corresponding to your markings.
Here's an example from a kit I recently built:
I just used a pocket knife to cut off part of the flap on a box
that was headed for the recycle bin.
Here's another (closer) view showing how the parts slip into the
corrugations:
Notice that for these parts I've only used the "holes" on the side where
I've marked the values.
I recently unpacked a box that had been packed up several years ago
when I moved, and had never been unpacked.
One of the things that I found was a project that I'd built many years
ago, and although I'd done a neat and careful job of building it (both
inside and out), I'd neglected to put any labels on it.
There's a power connector on the back, with no indication as to
operating voltage and on the front are two binding posts (one red
and one black), two switches, and two pots (probably multiturn),
and 3 indicators (LED?), one red, one yellow, and one green,
all carefully mounted in an aluminum project box. Inside is a
relatively simple circuit, made with a combination of wire-wrapped
and point-to-point wiring.
If I'd even put on a piece of masking tape and penned what it was for,
it would probably jog my memory. When (and if) I ever remember what
it was for, I'll make some nice labels.
Hopefully, you won't duplicate my mistake!
If you go to the typical hardware store, you'll find a dizzying
array of different sorts of tape.
I want to discuss a few types that are often misused or at least
misunderstood, and one that few people seem to know about.
- Electrical tape: This is the plastic stuff, usually vinyl, sometimes
referred to as "electrician's tape".
The primary use for the stuff is to cover or wrap electrical
connections. It will stretch some, and so can be form fitting.
By far the most common color is black, but you can get it in
other colors. (Radio Shack carries a package that has a roll
in each of several different colors.)
Thus you can use it for color-coding things.
Unrelated tip: Wrapping a strip of colored electrical
tape a couple of times around
the handles of your luggage can make it a lot easier to spot on
the carousel at the airport.
Having strips of a couple of different colors can make your bags
even more identifiable, and if your bags have multiple handles,
mark every handle the same way.
- Friction tape: This is often mistaken for electrical tape. It is
made with a cloth backing, and does not have the stretch that
electrical tape does.
What it is intended for is where electrical wires can be subject
to abrasion (wear), such as where they have to cross a metal edge
within a car.
Although you might be able to get away with using friction tape
to insulate something at 12V, it's a lot better to use electrical
tape, and if needed, a separate layer or two of friction tape over
the electrical tape.
- Duck tape: Often erroneously referred to as "duct tape", this
fabric-backed tape was originally developed during the Second World
War for sealing things, like cans, against moisture (thus the name
"duck" tape).
It is good for some uses, but is often used inappropriately.
For actually sealing duct work, you should use the type of tape
that is metallic and has an adhesive that is designed to last for
many years.
- Gaffer's tape: At first glance, this stuff looks a lot like duck
tape. It has a cloth backing, and although it can be had in many
different colors, dark grey is the most common.
There are two important differences between gaffer's tape and duck tape.
The first is that gaffer's tape uses an adhesive that is designed to
not leave a residue (duck tape is notorious for this) and generally
does not remove paint when carefully pulled off.
The second major difference is the price, with gaffer's tape usually
being around $16 a roll while duck tape can often be had for $3 a
roll.
Gaffer's tape is often difficult to find, but it can be worth both
the price and the effort to do so.
If your city still has a photographic
supply store, check with them, as gaffer's tape is very popular with
professional photographers. If not, try one of the big mail-order
photo supply houses, such as Adorama (www.adorama.com), though be
aware that shipping can be expensive
One thing to be aware of if you're working with anything more
modern than vacuum tubes: there is the possibility that it can be
damaged by static electricity (in the lingo of engineers, this is
called "electro-static discharge", or ESD for short).
One of the basic rules of thumb is that if you can feel a static
electric discharge, even if you are trying to feel it, it is several
times what it takes to ruin a modern integrated circuit.
Back in the 1970s the usual wisdom (actually "wisDUMB")
was that only MOS (Metal Oxide Semiconductor) chips were sensitive to
static discharges. Then in the early 80s, research started coming
out indicating that static electric discharges that didn't instantly
destroy TTL devices (and even discrete transistors) would still cause
damage that would dramatically shorten their lifespan.
The good news is that on the "bench", at least, static
electricity is fairly easy to control.
Have something that you KNOW is grounded, and touch it EVERY time you
approach the bench. (On my bench, the oscilloscope has a three-prong
power connector, and it is left plugged in all of the time.
I've checked, and the outside of the BNC connectors for the probes is
connected to ground, so I simply touch one of those before touching
anything else.)
There are a variety of anti-static wrist straps available on the market.
Please see that topic on the "Tools" page for more details.
Also be sure to hold onto some of the anti-static bags that some things
(like computer boards and disk drives) come in.
I keep a large one
over any projects on the bench, in case kitty decides to investigate
an area where she shouldn't be.
I recently had occasion to incorporate a small fan into a project
to provide a mild airflow.
I wanted a small amount of airflow over a temperature sensor so
that it would be (close to) the ambient temperature outside of
the case, and not be affected much by the heat generated inside
the case.
It also needed to have very little acoustic noise, due to where
it was going to be located (in a bedroom).
My first experiment with the fan in question revealed that the
airflow was way too high, and it was also way too loud.
Many years ago I had some experience with slowing down DC motors
by effectively decreasing the voltage that they saw by including
a resistor in series with them.
These motors were all of the "brushed" type.
Today virtually all modern DC fans use brushless motors, meaning
that they incorporate some electronics to control the magnetic
fields so that the shaft rotates, rather than the old method
of using a commutator and brushes to control the electromagnetic
fields.
These fans have a much longer life expectancy, as well as being
more energy efficient.
Also, they eliminate the sparking between the brushes and
commutator, which greatly reduces both fire hazards and
the amount of electrical noise produced by the fan.
I was worried that the simple approach that I'd used years ago
wouldn't work with the electronics inside the motor.
After asking a few fellow engineers about it, and getting
a consistent "I don't know",
I did some research on-line, and found a number of references
that suggested that there are two ways of regulating the speed
of a brushless DC fan:
You can either adjust the voltage, or you can use pulse width
modulation (PWM).
The advantage of adjusting the voltage is that in my case, at
least, it meant just adding a series resistor.
However, you have to be careful to not drop the voltage below
a certain threshold where the fan won't start.
The consensus was that this is typically in the range of
25% to 50% of the nominal voltage.
The advantage of using PWM is that you can get a much lower speed
from the fan, but at the cost of added complexity in the control
of the fan. In my case, it would have meant using an additional
GPIO pin on my microcontroller plus an additional transistor,
both of which I'd prefer to avoid.
I decided to experiment with the actual fan that I had.
Using an adjustable bench power supply, and a cheap digital
multimeter, I determined that the fan would start at just
over 3V, and would run happily at that voltage, drawing
roughly 60mA of current.
At this speed, it was both quiet and giving about the amount
of air movement that I wanted.
Doing the math, it worked out that I should use a resistor of
about 130Ω to drop the 12V of the supply down
to 3V to run the fan.
It would be dissipating a bit over 0.6W.
I decided that a slightly better approach was to use three
resistors in parallel, each being rated 390Ω at 1W.
(Although they still dissipate the same amount of heat, they
won't get nearly as hot as they're running at about 1/5 their
rating, rather than running at 60% of their power rating.)
So far, it works just fine.
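Here's the arithmetic, sketched in Python. The fan draws a bit more current once the voltage comes up, so the ~70mA used here is an assumption (the 60mA was measured right at 3V); with it, the formula lands near the 130Ω value:

```python
# Series-resistor sizing for slowing down a fan. The 70mA figure is
# an assumed running current; the exact value depends on the fan.

def dropper(v_supply, v_fan, i_fan):
    r = (v_supply - v_fan) / i_fan   # Ohm's law across the dropped volts
    p = (v_supply - v_fan) * i_fan   # heat the resistor must shed
    return r, p

r, p = dropper(12.0, 3.0, 0.070)
print(round(r), "ohms,", round(p, 2), "W")   # about 129 ohms, 0.63 W

# Three 390-ohm 1W resistors in parallel give about the same value,
# with each resistor shedding only a third of the total heat.
print(round(1.0 / (3.0 / 390.0)), "ohms")    # 130 ohms
```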
In this section I want to share a few tips specific to amateur radio.
A lot of us start with a hand-held radio, referred to in
amateur radio parlance as a "handi-talkie" or just "HT",
largely because of the price.
There are a few of them on the market that are in the $100 (U.S.) range,
though they can run up to about five times that.
Generally, HTs work in the VHF and/or UHF part of the spectrum.
This means that they'll usually be used with repeaters, because in
these frequency ranges, communications is more-or-less line-of-sight.
They can penetrate a few walls or trees, but not many.
By making use of a repeater, which is typically installed in a high
location, such as a mountain top or atop a tall building, two people
can talk to each other if they both can "see" the repeater,
even if they can't "see" each other.
(To improve things even further, repeaters are not infrequently
linked together, so that anyone who can "see" one of the
repeaters in the link can talk to anyone else who can "see"
any of the repeaters in the link.
Also, some repeaters are linked via the Internet, and can talk
internationally.)
HTs typically come equipped with a "rubber ducky" antenna.
Folks who have been around ham radio for a while often refer to
these as "rubber dummy loads" as they are such poor antennas.
Fortunately, most hand held amateur radios have provision for connecting
a better antenna.
If you decide to purchase an HT, there are a couple of accessories
I highly recommend.
The first is a "dry cell" battery pack. This device allows
you to run your radio on disposable AA size alkaline batteries, which means
that it is easy to find replacements.
It turns any convenience store or corner drug store into an
"instant recharge station" for your radio.
If you get a dry cell pack that takes six cells, then it's also
possible to use ordinary AA size NiCd or NiMH rechargeable batteries.
(Because of the lower voltage, four cells, and sometimes five cells,
typically won't "light up" the radio.)
When I first got into ham radio, it was fairly common to see old
HTs at bargain prices because the original rechargeable battery packs
no longer worked and the owner hadn't bought a dry cell pack while
they were still available. Today, there are companies, such as
"Batteries Plus" and "The Battery Lady" who
will rebuild old battery packs, and some of them also have
after-market packs for a lot of radios.
The second item I strongly recommend buying is the original equipment
manufacturer's soft case, especially if you are planning on using that
nifty belt clip for its stated purpose.
If you're not used to wearing a bulky radio on your hip, you ARE
going to bump into things, and catch on corners, and that soft case
can absorb at least some of the abuse (and is less expensive than
the whole radio).
One of the things to be aware of is that most HTs will work on a
range of voltages. If you get the "cigar lighter plug"
for it, so that it is connected to the car battery, they typically
will put out about 5 watts, though on the rechargeable batteries
they will typically put out in the neighborhood of 2 watts.
The aforementioned nifty belt clip often serves another purpose,
namely it is also a heat sink. These radios tend to get hot
when run for very long, especially at the higher power.
One other comment about HTs: The style in recent years has been
to include a broad-band receiver that will supposedly work in
frequency bands that are far outside the ham bands.
The unfortunate thing about this is that it requires that the
"front end" be very broad band, and therefore VERY
susceptible to all kinds of interference.
Some years ago, surprisingly, Radio Shack sold two HTs
(the HTX-202 for 2 meters, and the HTX-404 for the 440MHz band)
that had very narrow "front ends" and were therefore
much less likely to have interference problems, especially in
urban areas.
The next "step up" from HTs today are the mobile radios.
These are designed for installation and operation in an automobile,
though it's perfectly reasonable to use one as a "base station"
(a radio in a fixed location, such as a home), and many hams do that.
One thing to be aware of is that most mobile radios don't have the
input voltage range that HTs do. A mobile radio will typically
"drop out" if the input voltage drops more than a couple
of volts.
Most mobile radios will put out much more power than an HT -- often
as much as 100 watts. So they need to have antennas that are capable
of handling that.
Many hams who use mobile radios in their home "shack" use
an AC power supply, though there are some hams who use a deep cycle
battery (and some have solar panels set up to keep the battery
charged).
If you do decide to use an AC power supply, get one that can supply
more power than the radio requires.
Some power supplies are "linear", while others are
"switching". The linear supplies are much less likely to
inject noise into the radios, though they can weigh a lot, and are
not tremendously efficient.
The switching supplies are typically much more efficient, and
a lot lighter weight, though they can produce a lot of noise that
will find its way into the radio. There are "low noise"
switching power supplies, but they can be very expensive, though
both the noise levels of inexpensive switching power supplies and
the price of low noise supplies are dropping.
Be sure to read the stuff above about
lead-acid batteries, which includes some comments about wiring
mobile radios.
There are "base-station" amateur radios available, though
these tend to be a lot pricier (think "year-old economy car" price
range).
They do tend to have a lot more features than the mobile radios,
and also tend to have a "knob for each feature" rather
than having the features buried several menu layers down.
In this section I want to point out a few tips about things that are
purely mechanical in nature.
They may seem like common sense, and it may even seem a bit silly
to have to state some of it,
but there may be folks that don't know it, so I'll put it in.
This is certainly one of those things that may sound silly to have
to mention to most of us, but if just one reader learns something
from it, it's been worth my effort.
Avoid Cross-threading
Cross-threading when assembling nuts, bolts, and/or screws can be a big
problem.
There is a simple trick to help avoid doing it, though it does take a
bit of practice:
Once you've aligned the parts the best you can, holding the part
(or the tool) lightly against the mating part, turn it some in the
"loosening" direction (usually to the left). You should
feel a slight "click" at one point, and if you were to
make one full rotation, you'd feel it again.
This "click" is the two threads "dropping
off" each other. If you stop just after the click and
start turning in the "tightening" direction (usually to
the right), you stand a very good chance of NOT cross-threading
the parts.
When you're using self-tapping screws (also called self-threading
or sheet-metal
screws), the first time one is put in its hole, there's nothing for
it but to cut a new thread. However, if you are re-installing a
screw that has gone into the hole before, it's important to use
the above trick to try to avoid cutting a new thread. The reason
for this is that every time a new thread is cut, it weakens the
metal around the screw. If a self-tapping screw is put into a
given hole several times and allowed to cut a new thread every
time, it will soon strip out all of the metal around the screw,
and there won't be anything left to hold the screw in the hole.
The only solutions are to use a machine screw and nut if you can
get to the "back side" to install the nut, to use a
larger screw (which may mean enlarging the hole on the
part being held), or using some form of "captive nut"
(which can be difficult to obtain, as well as being a hassle to
use).
When to tighten
When I was growing up, my father was an aircraft mechanic, and
he was always adamant that you should "start" all
the screws holding two parts together before tightening any of
them.
This works very well when the parts being assembled are
fairly rigid and have accurately placed holes.
However, I've seen a lot of electronics where the parts are
made out of bent pieces of sheet metal, and sometimes the
holes aren't located all that accurately. In these cases,
it's sometimes best to tighten the first couple of screws
so that you can get the other parts aligned.
Tighten the nut
When you have a choice, turn the nut rather than the bolt or
screw.
We don't often have the luxury of that choice in electronics,
but when we do, the nut should be turned to tighten the assembly
rather than turning the screw or bolt that it's on.
The reason for this is that the nut usually has less friction
than the screw or bolt, since the screw or bolt will probably
rub against the hole.
Know how tight to make it
I've seen some folks who don't have much experience really bear
down on screws or nuts, and then wonder why they have problems.
Knowing how tight to make a screw or nut does take some experience,
and even with more than 45 years at it I sometimes make mistakes
on this (though my mistakes tend towards "not quite tight
enough" and I find that something works loose).
One thought I have on this is if you have a couple of small pieces
of scrap steel (preferably totaling at least a quarter inch thick)
that have small holes, go to the hardware store and get some small
screws and nuts (say, #4 or #6) and "test them to
destruction" -- that is, tighten them until the screw breaks.
(If you have cheap tools, you may find out why cheap tools are
more expensive than good ones -- the tools may fail.) Also,
if you've got a scrap printed circuit board ("PC board"
or "PCB") try putting that up against the steel and
tightening the screw until you crush it. These two experiments
will give you something of a feel for "way too tight".
These days, including a small microcomputer in the design for an
electronic device is more the rule than the exception.
It seems like they're
everywhere in our lives.
Some of them are fairly easy to program, and can be used to do
some pretty amazing things.
Oftentimes an MCU (micro-controller
unit) with just 14 pins, plus a small handful of other parts,
can do things that just a few years ago would have required several
large boards and been far beyond the typical hobbyist, both in
price and complexity. The other neat thing is that you can
sometimes dramatically change the capabilities of a "gadget"
you've built without even lifting a soldering iron, just by
installing new "firmware".
However, including an MCU does imply programming, which is a world
of its own. (Since the software is loaded into non-volatile memory
to be executed directly, it's referred to as "firmware".)
I've been programming, at various levels, for about 40 years now,
including 23 years when I officially held the title
"software engineer".
I have some tips to pass on to hobbyists that will probably make
doing software easier and more enjoyable in the long run.
Virtually every computer language has some form of comments --
that is, a way to include text in the "source code"
which is ignored by the compiler, interpreter, or assembler.
Comments are supposed to make absolutely no difference in how
the code behaves. (I have seen a couple of languages where this
wasn't the case, but it was because of errors in the compiler itself.)
I had one professor who said that he included comments so that a
total idiot could understand what was going on, because he was
usually the "total idiot" trying to modify or correct
his code six months after he'd written it.
Comments should clarify why things are happening. You
should be able to see what is happening by reading the code
itself.
After many years of engineering, I've developed the habit of putting
a big long comment (or sometimes several comments) at the beginning
of each program.
The first thing is the name of the program (or module).
I've seen cases where a disk drive got messed up, but it was
still possible to recover a file using this key clue about what
the contents of the file are. The next thing is a simple explanation
about what the program is attempting to do.
This "header comment" should also include a copyright
notice, and a brief history of major changes to the program.
Since this is one of the places where an example is good, here's
an example from one of my recent programs (in the "C" language):
/* waverter.c - This program is intended to take a .wav file and convert
the actual samples to a format that will be acceptable to the PIC 18
assembler.
The description of the contents of a .wav file comes from
https://ccrma.stanford.edu/courses/422/projects/WaveFormat
Copyright 2010 by Clark Jones.
History:
28-Oct-2010 CJ Began development.
08-Nov-2010 CJ Switched from "hardwired" file names to
command line control.
10-Nov-2010 CJ Added -v option, calculation of average
divergence (from 0).
*/
By the way, in the original of the above code, I had used tab characters
to get the indentation in the "History" section.
It's very easy to neglect to keep the comments up to date as the program
evolves, but it is worth the effort.
It takes a while to develop a knack for coming up with meaningful,
but easy-to-type variable names.
When you first get into programming, you'll see a lot of texts that
use single letter character names, such as "i" or "x".
After many years of experience, I've learned that the only place
that a single letter variable name is appropriate is on a marker board
(or, if you're as old as I am, a chalk board).
Real world programs are a lot better off with longer variable names.
For example, if I need a variable for a loop index (in a "for"
loop in C or a "DO" loop in Fortran) I'll use idx (short
for "index"). Commonly I'll need two or three nested loops,
so I'll add jdx and kdx (rather than using j and
k as you commonly see in books). One of the big reasons for
this is that it's a lot easier to find these three letter strings in
an editor than to have to skip over dozens (or even hundreds) of
occurrences of a single letter.
One of the problems with doing software is that it is incredibly
easy to make mistakes.
It's very frustrating to write several hundred lines of code, and
try it for the first time and it crashes and you have no clue as
to why it crashed.
It is a lot easier to find the mistakes in just a few lines of code
than it is to find the mistakes in a long program.
If you start testing early, you can feel some confidence that you
have at least some code that is working.
That can help isolate the problems.
Also, if you re-run tests that were passing earlier and they break,
you can feel pretty confident that whatever you changed since the
last run is what broke something.
There are some wonderful "integrated development environments"
and "debugging programs" around, but many of these are
commercial programs and have very big price tags attached.
The hobbyist is hard pressed to be able to afford, say, a $1000 program
to debug her code for the $5 microcomputer that is going to control
a $50 project.
Not to fear, though, there are techniques you can use.
For programs that run entirely on a computer, learn to use the
language's ability to "print" things to a terminal (or a file).
For C, this is the "printf" function (or to send it to a
file, "fprintf"). Printing even
a silly message when the program gets to a key point will at
least let you know that it got there. And printing out
a key variable can let you look at what the computer thinks the
value is, so that you can know whether or not that is what you
think it should be.
Many languages have a "conditional compile" feature,
such as C's "#ifdef" feature, where you can put a
line or two around the actual print statements to
essentially turn them into comments, but yet be able to
"turn them on" again if you need to later.
For "embedded code" (a.k.a. firmware), turning an LED on or
off at key
points can be a useful trick to try to see if part of your
program is working. When I'm first starting with a processor
I haven't used before, or if I'm using a clock mode I haven't
used before on a familiar processor, my first program is just
one that flashes an LED. It may sound simple, but it at least
proves that I've done enough to get the thing to "look alive".
Also, look for areas of code that can be broken out into separate
routines and/or files.
True, this is a skill that takes a long while to develop, and
the task of breaking stuff out into separate routines and/or files
can seem like a waste of time to the beginner (I know it did when
I first started).
Breaking things out, though, gives you two big advantages.
First is that you can re-use code to do the same task in some other
part of the program (or even in another program later on),
and second is that you can often test this
code more thoroughly. Let's look at an example:
A while back I was working on a project where I needed a small
LCD that can display two rows of 16 characters each.
I realized that I needed to be able to take an 8 bit value and
display it as a decimal number.
Very early on, I wrote a subroutine that would display a single
ASCII character on the LCD.
I then wrote some code that would translate the 8 bit value to
three ASCII characters (since an 8 bit value is in the range
0 to 255), and then called the first subroutine to actually send
the characters to the LCD. By putting all of the code to drive
the LCD into a separate file, I was then able to build a
"test harness" program that tested out the decimal
display subroutine with some key values, such as 0, 9, 10, 99, 100,
and 255.
Later on I found that I needed to be able to display a 16 bit
number, so I modified the original subroutine to be able to
handle bigger numbers, and when I ran my original test harness,
I found a couple of things I'd broken and was quickly able to
fix, and then added some more tests, like displaying 65535.
I could then call this subroutine from my main program, and when
I got weird numbers, know that it wasn't because the LCD
interface code was messing up.
When I was wondering if some other hardware was working (a
temperature sensor, to be exact), I was able to use the
aforementioned subroutine to display the value that hardware
was producing by just (temporarily) adding a few lines of code,
rather than having to write a whole bunch of stuff to get the
value to the display.
One other advantage of breaking the LCD related code out
is that I've thought about another
project that uses the same processor but a different interface
to the LCD. It will be simple to change, as all of the
"traffic" to the LCD goes through just a couple of
subroutines (one being that subroutine that sends a single
ASCII character to the LCD).
Most of the subroutines won't have to change at all.
I'll be able to use the software test harness
to verify the new interface hardware and software and go from there.
This screen last updated: 07-Feb-2017
Copyright © 2010-2017 by Clark Jones