2000-Year-Old Ancient Technology for Metal Coatings Superior to Today’s Standards

Research has shown that artisans and craftsmen 2,000 years ago used a form of ancient technology to apply thin films of metal to statues and other items, and that their results were superior to today's standards for producing DVDs, solar cells, electronic devices and other products.

The incredible discovery, published in July 2013 in the journal Accounts of Chemical Research, confirmed "the high level of competence reached by the artists and craftsmen of these ancient periods who produced objects of an artistic quality that could not be bettered in ancient times and has not yet been reached in modern ones".

Fire gilding and silvering are age-old mercury-based processes used to coat the surfaces of items such as jewels, statues and amulets with thin layers of gold or silver. While the technique was mostly used for decoration, it was sometimes used fraudulently to simulate the appearance of gold or silver on a less precious metal.

From a technological point of view, what the ancient gilders achieved 2,000 years ago was to make the metal coatings incredibly thin, adherent and uniform, which saved expensive metal and improved durability, a standard that has never been matched since.

Apparently without any knowledge of the underlying chemical and physical processes, ancient craftsmen systematically manipulated metals to create spectacular results. They developed a variety of techniques, including using mercury like a glue to apply thin films of metals such as gold and silver to objects.

While the scientists concluded that their results were important because they could help preserve artistic and other treasures from the past, the findings could have an even greater significance, for they once again demonstrate that there was a far higher level of understanding of advanced concepts and techniques in our ancient past than it is usually given credit for. Other examples of ancient technology include the 2,000-year-old Antikythera mechanism, an ancient metallic device consisting of a complex combination of gears that is thought to have been used for calculating the positions of celestial bodies and predicting solar and lunar eclipses with remarkable precision, and the Baghdad Battery, a clay pot encapsulating a copper cylinder with an iron rod suspended in the centre, which appears to be the earliest form of an electric battery.

The level of sophistication present 2,000 years ago and even earlier is perplexing and raises many questions about where the knowledge came from and how it originated. One thing is for sure: our history books should be rewritten to include such significant accomplishments of our ancient past, not simply cast them aside in the ‘too hard to understand’ basket.


    The history of the scalpel: From flint to zirconium-coated steel

    Editor’s note: The following article is based on a poster presented at the History of Surgery Poster Session at the American College of Surgeons (ACS) Clinical Congress 2017 in San Diego, CA. The session is sponsored each year by the Surgical History Group. For more information, please visit the ACS website.

    The surgical knife, one of the earliest surgical instruments, has evolved over 10 millennia. While the word “scalpel” derives from the Latin word scalpellus, the physical instruments surgeons use today started out as flint and obsidian cutting implements during the Stone Age. As surgery developed into a profession, knives dedicated to specific uses also evolved. Barber-surgeons embellished their scalpels as part of the art of their craft. Later, surgeons prized speed and sharpness. Today’s advances in scalpel technology include additional safety measures and gemstone and polymer coatings. The quintessential instrument of surgeons, the scalpel is the longstanding symbol of the discipline. Tracing the history of this tool reflects the evolution of surgery as a culture and as a profession.


    A Look Back in Time at the Rise of the Roofing Industry

    This post is part of a monthly series that explores the historical applications of building materials and systems, using resources from the Building Technology Heritage Library (BTHL), an online collection of AEC catalogs, brochures, trade publications, and more. The BTHL is a project of the Association for Preservation Technology, an international building preservation organization. Read more about the archive here.

    The role of the roof cannot be overstated. It shelters a building’s interiors and its occupants from the forces of nature, protects vital utility systems, and helps to define the exterior’s aesthetic. The roof’s necessity has bred its ubiquity and, by extension, fostered a strong market for roofing materials ranging in performance and physical characteristics.

    Those materials have a long history, and their evolution has been largely driven by performance. Wood and slate shingles and clay tiles were the predominant roofing choice until the mid-19th century, when metal and bituminous roofing systems made low-slope applications possible. During the 20th century, several new materials were developed for low- and steep-sloped roofs. Among them was the asphalt shingle, which arrived on the scene around the turn of the 20th century and continues to be the top roofing material for houses. After a period of market experimentation with various shapes, patterns, and textures, the asphalt shingle evolved in form to the three-tab version popular today.

    Composites, such as asbestos and fiber-cement, rivaled asphalt for a time by promising better performance while attempting to replicate traditional materials such as slate or clay tile. Imitation subsequently became a theme in the roofing category, with early examples including metal shingles that replicate the look of clay tiles and asphalt shingles that simulate thatching. The 20th century also saw the development of roofing materials with various levels of durability and fire resistance as well as the introduction of roof-related components such as gutters, downspouts, and flashing.

    The following brochures, pamphlets, and journals from the digital Building Technology Heritage Library explore how roofing systems evolved throughout the 20th century.

    H.M. Reynolds Shingle Co., 1910: The H.M. Reynolds Company of Grand Rapids, Mich., claimed in the early 20th century to have invented the asphalt roof shingle. As with many now-ubiquitous products, this is difficult to prove. However, rolled asphalt roofing coated with slate granules was available by the late 19th century so it’s not a stretch to see how the material could have been used to make individual shingles soon thereafter—it also makes it even more difficult to nail down who, exactly, did it first. Asphalt shingles were widely available by 1910 and rapidly replaced wood shingles due to their economy and fire resistance. Throughout the 20th century, the asphalt shingle evolved to include a range of shapes and textures with the crushed slate coating replaced by ceramic granules.

    Penrhyn Stone: Slate Roofs of Quality, J. W. Williams Slate Co., c. 1930: Slate has long been a regionally prominent roofing material in the northeastern U.S. and nearby parts of Canada because of the abundance of slate quarries in the area. Slate also became popular in the rest of the U.S. in period styles of residential and commercial architecture. The material's extreme durability made it popular with institutional owners. It is also quite heavy, suiting it for steep, rather than shallow, roofs. Of the limited range of slate colors, red is the rarest and is therefore typically used for ornamental accents.

    Barrett’s Hand Book on Roofing and Waterproofing for Architects, Engineers, and Builders, Barrett Manufacturing Co., 1896: The development of built-up roofing—comprising alternating layers of asphalt-impregnated fabric and bituminous coatings—changed the shape of buildings, quite literally, in the temperate regions of the U.S. The steep-slope roof was no longer necessary for rain protection, and the resulting flat roofs would forever change the scale and appearance of the built environment. The Barrett Manufacturing Co., in New York, was a major producer of built-up roofing materials, and the BTHL features the company’s technical catalogs from the 1890s through the 1950s.

    Republic Steel Roofing Products, Republic Steel Co., c. 1939: Large steel roofing panels were particularly popular for agricultural and industrial buildings. Corrugations allowed panels to span longer distances, which reduced the material volume and framing weight, while galvanized coatings gave the panels a longer service life. The material, which originated in the 19th century, remains widely used today.

    Certigrade Handbook of Red Cedar Shingles, Red Cedar Shingle Bureau, 1957: Cedar shingles commonly topped residential structures through the 19th century but were supplanted in popularity in the 20th century by asphalt. The shingle typology has been revived in the 21st century for roofing and siding applications, typically in higher-end projects.

    The Book of Roofs, Johns Manville, 1923: The combination of asbestos and cement resulted in fiber cement, which, when applied as roofing shingles, made for an extremely durable product weighing significantly less than clay and slate tiles. Fiber-cement shingles that simulated the look of slate and clay were particularly common. One popular variation was a large-scale hexagonal form-factor that produced a distinctive pattern.

    About the Author

    Mike Jackson, FAIA, is a Springfield, Ill.–based architect and a visiting professor of architecture at the University of Illinois Urbana–Champaign. He led the architectural division of the Illinois Historic Preservation Agency for more than 30 years and now champions the development of the Association for Preservation Technology's Building Technology Heritage Library, an online archive of pre-1964 AEC documents.


    Identifying the Problem Before Repointing

    The decision to repoint is most often related to some obvious sign of deterioration, such as disintegrating mortar, cracks in mortar joints, loose bricks or stones, damp walls, or damaged plasterwork. It is, however, erroneous to assume that repointing alone will solve deficiencies that result from other problems. The root cause of the deterioration (leaking roofs or gutters, differential settlement of the building, capillary action causing rising damp, or extreme weather exposure) should always be dealt with prior to beginning work.

    Masons practice using lime putty mortar to repair historic marble. Photo: NPS files.

    Without appropriate repairs to eliminate the source of the problem, mortar deterioration will continue and any repointing will have been a waste of time and money.


    Nanotechnology Timeline

    This timeline features premodern examples of nanotechnology, as well as Modern Era discoveries and milestones in the field of nanotechnology.

    Premodern Examples of Nanotechnologies

    Early examples of nanostructured materials were based on craftsmen’s empirical understanding and manipulation of materials. Use of high heat was one common step in their processes to produce these materials with novel properties.

    The Lycurgus Cup at the British Museum, lit from the outside (left) and from the inside (right)

    4th Century: The Lycurgus Cup (Rome) is an example of dichroic glass; colloidal gold and silver in the glass allow it to look opaque green when lit from the outside but translucent red when light shines through the inside. (Images at left.)

    Polychrome lustreware bowl, 9th C, Iraq, British Museum (©Trinitat Pradell 2008)

    9th-17th Centuries: Glowing, glittering “luster” ceramic glazes used in the Islamic world, and later in Europe, contained silver or copper or other metallic nanoparticles. (Image at right.)

    The South rose window of Notre Dame Cathedral, ca 1250

    6th-15th Centuries: Vibrant stained glass windows in European cathedrals owed their rich colors to nanoparticles of gold chloride and other metal oxides and chlorides; gold nanoparticles also acted as photocatalytic air purifiers. (Image at left.)

    13th-18th Centuries: “Damascus” saber blades contained carbon nanotubes and cementite nanowires—an ultrahigh-carbon steel formulation that gave them strength, resilience, the ability to hold a keen edge, and a visible moiré pattern in the steel that gives the blades their name. (Images below.)

    (Left) A Damascus saber (photo by Tina Fineberg for The New York Times). (Right) High-resolution transmission electron microscopy image of carbon nanotubes in a genuine Damascus sabre after dissolution in hydrochloric acid, showing remnants of cementite nanowires encapsulated by carbon nanotubes (scale bar, 5 nm) (M. Reibold, P. Paufler, A. A. Levin, W. Kochmann, N. Pätzke & D. C. Meyer, Nature 444, 286, 2006).

    Examples of Discoveries and Developments Enabling Nanotechnology in the Modern Era

    These are based on increasingly sophisticated scientific understanding and instrumentation, as well as experimentation.

    "Ruby" gold colloid (Gold Bulletin 2007 40,4, p. 267)

    1857: Michael Faraday discovered colloidal “ruby” gold, demonstrating that nanostructured gold under certain lighting conditions produces different-colored solutions.

    1936: Erwin Müller, working at Siemens Research Laboratory, invented the field emission microscope, allowing near-atomic-resolution images of materials.

    1947: John Bardeen, William Shockley, and Walter Brattain at Bell Labs discovered the semiconductor transistor and greatly expanded scientific knowledge of semiconductor interfaces, laying the foundation for electronic devices and the Information Age.

    1947 transistor, Bell Labs

    1950: Victor La Mer and Robert Dinegar developed the theory and a process for growing monodisperse colloidal materials. Controlled ability to fabricate colloids enables myriad industrial uses such as specialized papers, paints, and thin films, even dialysis treatments.

    1951: Erwin Müller pioneered the field ion microscope, a means to image the arrangement of atoms at the surface of a sharp metal tip; he first imaged tungsten atoms.

    1956: Arthur von Hippel at MIT introduced many concepts of—and coined the term—“molecular engineering” as applied to dielectrics, ferroelectrics, and piezoelectrics.

    Jack Kilby, about 1960.

    1958: Jack Kilby of Texas Instruments originated the concept of, designed, and built the first integrated circuit, for which he received the Nobel Prize in 2000. (Image at left.)

    Richard Feynman (Caltech archives)

    1959: Richard Feynman of the California Institute of Technology gave what is considered to be the first lecture on technology and engineering at the atomic scale, "There's Plenty of Room at the Bottom," at an American Physical Society meeting at Caltech. (Image at right.)

    Moore's first public graph showing his vision of the semiconductor industry being able to "cram more components onto integrated circuits"

    1965: Intel co-founder Gordon Moore described in Electronics magazine several trends he foresaw in the field of electronics. One trend, now known as “Moore’s Law,” described the density of transistors on an integrated chip (IC) doubling every 12 months (later amended to every 2 years). Moore also saw chip sizes and costs shrinking with their growing functionality—with a transformational effect on the ways people live and work. That the basic trend Moore envisioned has continued for 50 years is to a large extent due to the semiconductor industry’s increasing reliance on nanotechnology as ICs and transistors have approached atomic dimensions. (See graph at left, and the rough numeric sketch after the 1974 entry below.)

    1974: Tokyo Science University Professor Norio Taniguchi coined the term nanotechnology to describe precision machining of materials to within atomic-scale dimensional tolerances.
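
    As a rough numeric illustration of the doubling trend in the 1965 entry above, here is a minimal sketch in Python (not from the source; the 64-component 1965 baseline and the fixed 2-year doubling period are arbitrary assumptions chosen purely for illustration):

    # Minimal sketch of Moore's Law as pure exponential doubling.
    # Illustrative assumptions: 64 components per chip in 1965, density
    # doubling every 2 years, and no physical or economic limits.
    def components_per_chip(year, base_year=1965, base_count=64, doubling_years=2):
        """Estimate components per chip under an idealized doubling model."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1965, 1975, 1995, 2015):
        print(f"{year}: ~{components_per_chip(year):,.0f} components per chip")

    Run as-is, this idealized model reaches roughly two billion components per chip by 2015, in the ballpark of real ICs of that era, though the actual baseline and doubling period varied over the decades.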

    1981: Gerd Binnig and Heinrich Rohrer at IBM’s Zurich lab invented the scanning tunneling microscope, allowing scientists to "see" (create direct spatial images of) individual atoms for the first time. Binnig and Rohrer won the Nobel Prize for this discovery in 1986.

    1981: Russia’s Alexei Ekimov discovered nanocrystalline, semiconducting quantum dots in a glass matrix and conducted pioneering studies of their electronic and optical properties.

    1985: Rice University researchers Harold Kroto, Sean O’Brien, Robert Curl, and Richard Smalley discovered the buckminsterfullerene (C60), more commonly known as the buckyball, which is a molecule resembling a soccer ball in shape and composed entirely of carbon, as are graphite and diamond. The team was awarded the 1996 Nobel Prize in Chemistry for their roles in this discovery and that of the fullerene class of molecules more generally. (Artist's rendering at right.)

    1985: Bell Labs’s Louis Brus discovered colloidal semiconductor nanocrystals (quantum dots), for which he shared the 2008 Kavli Prize in Nanotechnology.

    1986: Gerd Binnig, Calvin Quate, and Christoph Gerber invented the atomic force microscope, which has the capability to view, measure, and manipulate materials down to fractions of a nanometer in size, including measurement of various forces intrinsic to nanomaterials.

    1989: Don Eigler and Erhard Schweizer at IBM's Almaden Research Center manipulated 35 individual xenon atoms to spell out the IBM logo. This demonstration of the ability to precisely manipulate atoms ushered in the applied use of nanotechnology. (Image at left.)

    1990s: Early nanotechnology companies began to operate, e.g., Nanophase Technologies in 1989, Helix Energy Solutions Group in 1990, Zyvex in 1997, Nano-Tex in 1998….

    1991: Sumio Iijima of NEC is credited with discovering the carbon nanotube (CNT), although there were early observations of tubular carbon structures by others as well. Iijima shared the Kavli Prize in Nanoscience in 2008 for this advance and other advances in the field. CNTs, like buckyballs, are composed entirely of carbon, but in a tubular shape. They exhibit extraordinary strength and electrical and thermal conductivity, among other properties. (Image below.)

    Carbon nanotubes (courtesy, National Science Foundation). The properties of CNTs are being explored for applications in electronics, photonics, multifunctional fabrics, biology (e.g., as a scaffold to grow bone cells), and communications. See a 2009 Discovery Magazine article for other examples.

    SEM micrograph of purified nanotube "paper" in which the nanotubes are the fibers (scale bar, 0.001 mm) (courtesy, NASA).

    An array of aligned carbon nanotubes, which can act like a radio antenna for detecting light at visible wavelengths (scale bar, 0.001 mm) (courtesy, K. Kempa, Boston College).

    1992: C.T. Kresge and colleagues at Mobil Oil discovered the nanostructured catalytic materials MCM-41 and MCM-48, now used heavily in refining crude oil as well as for drug delivery, water treatment, and other varied applications.

    MCM-41 is a "mesoporous molecular sieve" silica nanomaterial with a hexagonal or "honeycomb" arrangement of its straight cylindrical pores, as shown in this TEM image (courtesy of Thomas Pauly, Michigan State University).

    This TEM image of MCM-41 looks at the straight cylindrical pores as they lie perpendicular to the viewing axis (courtesy of Thomas Pauly, Michigan State University).

    1993: Moungi Bawendi of MIT invented a method for controlled synthesis of nanocrystals (quantum dots), paving the way for applications ranging from computing to biology to high-efficiency photovoltaics and lighting. Within the next several years, work by other researchers such as Louis Brus and Chris Murray also contributed methods for synthesizing quantum dots.

    1998: The Interagency Working Group on Nanotechnology (IWGN) was formed under the National Science and Technology Council to investigate the state of the art in nanoscale science and technology and to forecast possible future developments. The IWGN’s study and report, Nanotechnology Research Directions: Vision for the Next Decade (1999) defined the vision for and led directly to formation of the U.S. National Nanotechnology Initiative in 2000.

    The progression of steps of using a scanning tunneling microscope tip to "assemble" an iron carbonyl molecule, beginning with Fe (iron) and CO (carbon monoxide) molecules (A), joining them to produce FeCO (B), then adding a second CO molecule (C), to achieve the Fe(CO)2 molecule (D). (H.J. Lee, W. Ho, Science 286, 1719 [1999].)

    1999: Cornell University researchers Wilson Ho and Hyojune Lee probed secrets of chemical bonding by assembling a molecule [iron carbonyl Fe(CO)2] from constituent components [iron (Fe) and carbon monoxide (CO)] with a scanning tunneling microscope. (Image at left.)

    1999: Chad Mirkin at Northwestern University invented dip-pen nanolithography® (DPN®), leading to manufacturable, reproducible “writing” of electronic circuits as well as patterning of biomaterials for cell biology research, nanoencryption, and other applications. (Image below right.)

    Use of DPN to deposit biomaterials ©2010 Nanoink

    1999–early 2000s: Consumer products making use of nanotechnology began appearing in the marketplace, including lightweight nanotechnology-enabled automobile bumpers that resist denting and scratching, golf balls that fly straighter, tennis rackets that are stiffer (therefore, the ball rebounds faster), baseball bats with better flex and "kick," nano-silver antibacterial socks, clear sunscreens, wrinkle- and stain-resistant clothing, deep-penetrating therapeutic cosmetics, scratch-resistant glass coatings, faster-recharging batteries for cordless electric tools, and improved displays for televisions, cell phones, and digital cameras.

    2000: President Clinton launched the National Nanotechnology Initiative (NNI) to coordinate Federal R&D efforts and promote U.S. competitiveness in nanotechnology. Congress funded the NNI for the first time in FY2001. The NSET Subcommittee of the NSTC was designated as the interagency group responsible for coordinating the NNI.

    2003: Congress enacted the 21st Century Nanotechnology Research and Development Act (P.L. 108-153). The act provided a statutory foundation for the NNI, established programs, assigned agency responsibilities, authorized funding levels, and promoted research to address key issues.

    Computer simulation of growth of gold nanoshell with silica core and over-layer of gold (courtesy N. Halas, Genome News Network, 2003)

    2003: Naomi Halas, Jennifer West, Rebekah Drezek, and Renata Pasqualin at Rice University developed gold nanoshells, which when “tuned” in size to absorb near-infrared light, serve as a platform for the integrated discovery, diagnosis, and treatment of breast cancer without invasive biopsies, surgery, or systemically destructive radiation or chemotherapy.

    2004: The European Commission adopted the Communication “Towards a European Strategy for Nanotechnology,” COM(2004) 338, which proposed institutionalizing European nanoscience and nanotechnology R&D efforts within an integrated and responsible strategy, and which spurred European action plans and ongoing funding for nanotechnology R&D. (Image at left.)

    2004: Britain’s Royal Society and the Royal Academy of Engineering published Nanoscience and Nanotechnologies: Opportunities and Uncertainties advocating the need to address potential health, environmental, social, ethical, and regulatory issues associated with nanotechnology.

    2004: SUNY Albany launched the first college-level education program in nanotechnology in the United States, the College of Nanoscale Science and Engineering.

    2005: Erik Winfree and Paul Rothemund from the California Institute of Technology developed theories for DNA-based computation and “algorithmic self-assembly” in which computations are embedded in the process of nanocrystal growth.

    Nanocar with turning buckyball wheels (credit: RSC, 29 March 2006).

    2006: James Tour and colleagues at Rice University built a nanoscale car made of oligo(phenylene ethynylene) with alkynyl axles and four spherical C60 fullerene (buckyball) wheels. In response to increases in temperature, the nanocar moved about on a gold surface as a result of the buckyball wheels turning, as in a conventional car. At temperatures above 300°C it moved around too fast for the chemists to keep track of it! (Image at left.)

    2007: Angela Belcher and colleagues at MIT built a lithium-ion battery with a common type of virus that is nonharmful to humans, using a low-cost and environmentally benign process. The batteries have the same energy capacity and power performance as state-of-the-art rechargeable batteries being considered to power plug-in hybrid cars, and they could also be used to power personal electronic devices. (Image at right.)

    (L to R) MIT professors Yet-Ming Chiang, Angela Belcher, and Paula Hammond display a virus-loaded film that can serve as the anode of a battery. (Photo: Donna Coveney, MIT News.)

    2008: The first official NNI Strategy for Nanotechnology-Related Environmental, Health, and Safety (EHS) Research was published, based on a two-year process of NNI-sponsored investigations and public dialogs. This strategy document was updated in 2011, following a series of workshops and public review.

    2009–2010: Nadrian Seeman and colleagues at New York University created several DNA-like robotic nanoscale assembly devices. One is a process for creating 3D DNA structures using synthetic sequences of DNA crystals that can be programmed to self-assemble using “sticky ends” and placement in a set order and orientation. Nanoelectronics could benefit: the flexibility and density that 3D nanoscale components allow could enable assembly of parts that are smaller, more complex, and more closely spaced. Another Seeman creation (with colleagues at China’s Nanjing University) is a “DNA assembly line.” For this work, Seeman shared the Kavli Prize in Nanoscience in 2010.

    2010: IBM used a silicon tip measuring only a few nanometers at its apex (similar to the tips used in atomic force microscopes) to chisel away material from a substrate to create a complete nanoscale 3D relief map of the world, one one-thousandth the size of a grain of salt, in 2 minutes and 23 seconds. This activity demonstrated a powerful patterning methodology for generating nanoscale patterns and structures as small as 15 nanometers at greatly reduced cost and complexity, opening up new prospects for fields such as electronics, optoelectronics, and medicine. (Image below.)

    A rendered image of a nanoscale silicon tip chiseling out the smallest relief map of the world from a substrate of organic molecular glass. Shown middle foreground is the Mediterranean Sea and Europe. (Image courtesy of Advanced Materials.)


    2011:
    The NSET Subcommittee updated both the NNI Strategic Plan and the NNI Environmental, Health, and Safety Research Strategy, drawing on extensive input from public workshops and online dialog with stakeholders from Government, academia, NGOs, the public, and others.

    2012: The NNI launched two more Nanotechnology Signature Initiatives (NSIs), Nanosensors and the Nanotechnology Knowledge Infrastructure (NKI), bringing the total to five NSIs.

    2013:
    -The NNI started the next round of Strategic Planning, beginning with the Stakeholder Workshop.
    -Stanford researchers developed the first carbon nanotube computer.


    History Timeline and Types of Automotive Paint

    What is the first thing you notice when you see an old car or truck for the first time? If you are like most folks, the answer is likely the paint: not just the color but the overall condition of the finish. Does it have a beautiful, high-gloss shine or a pleasing, soft “patina” that only the hands of time and exposure to sun and weather can produce? Of course, it is all subjective, as we would fully expect a recently completed high-standards restoration to have a flawless, mirror-like appearance. Conversely, a car or truck that is 40, 50, 60 years old or even older and still wearing its factory-applied finish is greatly admired and highly prized for its beauty, even though it may be worn through to the primer from many years of being lovingly polished, or may even proudly display some runs or imperfections acquired at the hands of a production-line painter so many years before.

    In fact, at many show events, an unrestored car or truck wearing its original paint will often command much more attention from admirers than a perfectly restored example. What is all the more remarkable when we admire these old, preserved finishes is the fact that those paints weren’t really all that great compared to what is available today. This is not to say that those paints were of inferior quality, as the manufacturers generally used the best materials available within the coatings technology of the period. It is also important to consider that, as the decades progressed into the 1950s and 1960s, the time required to apply paint became an increasingly critical factor in the assembly of a car, and with the exception of some of the more expensive luxury cars, a few flaws such as runs, texture and overspray were considered acceptable, and such flaws are actually looked for by some show-judging organizations today.

    In the early days of the automobile, master furniture and carriage craftsmen painstakingly applied primitive oil-based enamel or varnish primer and finish coatings by brush! These finishes had somewhat poor opacity, which required numerous coats for coverage, and took weeks to dry. They used mainly ink pigments, which all tended toward darker colors. These coatings did not withstand weather and sunlight very well and tended to become dry and brittle before long. Since those paint jobs didn’t last all that long, it was common in those days for an owner to get some paint at a hardware store or from a mail-order catalog like Montgomery Ward, along with a good horsehair or hog-bristle brush, and re-paint the car. With the idea of preserving the car, some folks even did it every year or so…by brush of course!

    A number of manufacturers, including Ford in the Model T line, used a combination of brushing, dipping and even pouring to fully cover and protect the various parts of a car or truck. The 1920s saw the introduction of spray equipment and nitrocellulose lacquers and primers, which were developed together to speed application and cut drying time to a week or less, dramatically reducing the time required to paint a car, although the finishes still required labor-intensive and time-consuming hand rubbing to achieve a shine. This was not generally true for early trucks, however; most 1920s to 1960s trucks were considered no-frills pieces of working equipment built to be used and abused, not fussed over and pampered. A great example is the 1930s Model AA Ford trucks that were built with dull, non-shiny, non-rubbed lacquer finishes. Rubbing-out was an extra-cost Ford AA truck option that, according to a Ford service letter of 06-05-31, cost $15.00 extra for the cab, cowl and hood, while a pickup bed cost $7.00. In addition to reduced dry times, nitrocellulose lacquers were more durable and allowed the use of brighter-colored, although more expensive, pigments. Interestingly, with constant improvements, organic-based nitrocellulose lacquer was used by some manufacturers well into the later 1950s, when it was replaced by the much more durable acrylic lacquers and primers, which were synthetics.

    Appearing shortly after nitrocellulose lacquers were enamels, or more specifically, alkyd enamels and primers. These were generally a thicker material that required fewer coats than lacquers and usually were baked onto a partially assembled vehicle body by passing it through a large oven. This baking hardens the enamel and “flows” it out for a great shine and greater durability. Many more brilliant colors became available with the enamels thanks to the use of organic pigments, which proved widely popular in some of the more flamboyant and attractive two- and tri-toned 1950s combinations. Eventually, the alkyd enamels too were replaced in the early 1960s by the new and superior acrylic enamels and primers favored by several manufacturers.

    Of course, as we all know, any paint finish has a limited lifespan, and given the harsh conditions it is exposed to, it is remarkable that it can last as long as it does with adequate care. With time and exposure, even the best lacquers will lose their luster, shrink and crack, while enamels will fade and become dull and chalky. These shortcomings, and a move toward greater environmental friendliness, led most car and truck manufacturers to change over to new base-clear, water-borne systems from the late 1970s to the early 1990s. This period was not without serious issues, however, as many of us will recall the peeling clear coats of vehicles from that era, which resulted in scores of cars and trucks being repainted through factory warranty claims. Fortunately, the major paint manufacturers quickly resolved those problems, and the newer finishes are the most durable in history, requiring virtually no care to survive.

    What does this all mean to the owner of a vintage car or truck today who is planning for a paint job in the near future? To begin with, lacquer, while still available, is very difficult to buy today and is actually illegal for sale in certain areas of the country, especially California. This is because of state and federally mandated VOC laws. VOCs, or volatile organic compounds, are chemicals found in paints and solvents that are considered harmful to the environment and living creatures. In addition, given the limited life of a lacquer or enamel paint job and the clear superiority of some of the higher-quality modern paints, unless you are striving for 100% authenticity on your restoration, it would probably be to your advantage to choose one of the modern alternatives to lacquer or enamel.

    With today’s modern paints, there are two major choices suitable for use on a vintage vehicle: single-stage urethanes, also known as single-stage urethane enamels, and two-stage urethanes. These urethanes are extremely durable, chip resistant and chemical resistant, and they retain their gloss without dulling or fading. The single-stage products are similar to the old air-dry lacquers and enamels only in that they are one coating with the color, gloss and UV protection all in one material and do not require a clear topcoat; that is, the color is all the way through. They are all 2K formulations, which means that an activator must be added per the manufacturer’s instructions to chemically cure and harden the paint. They can be color sanded and rubbed out to provide that hard-to-describe yet pleasing, softer “polished bowling ball” look of a genuine lacquer paint job that looks so right on the rounded contours of a restored older car or truck.

    The two-stage products, also known as “base-clear,” are also 2K formulations requiring an activator, but they consist of a thin, no-gloss, color-only “base” film that is sprayed on and then topcoated with multiple coats of urethane clear. The clear is then responsible for all the UV resistance, gloss and protection of the paint coating. While the two-stage base-clears do provide an attractive, deep, high-gloss finish on more modern vehicles, and the clear can also be color sanded and buffed to a glass-like surface, they often can be too glossy and look out of place on an older car.

    Another two-stage, base-clear system is the “water-based” coatings that are rapidly growing in popularity, especially in today’s VOC-sensitive world. It should be noted, however, that only the color base coat is water-based. At this time, there are no known successful water-based clear coats; clears are still solvent-based formulations, although the paint manufacturers are working hard to introduce a successful water-based clear product.


    Composition of Historic Stucco

    Before the mid-to-late nineteenth century, stucco consisted primarily of hydrated or slaked lime, water and sand, with straw or animal hair included as a binder. Natural cements were frequently used in stucco mixes after their discovery in the United States during the 1820s. Portland cement was first manufactured in the United States in 1871, and it gradually replaced natural cement. After about 1900, most stucco was composed primarily of portland cement, mixed with some lime. With the addition of portland cement, stucco became even more versatile and durable. No longer used just as a coating for a substantial material like masonry or log, stucco could now be applied over wood or metal lath attached to a light wood frame. With this increased strength, stucco ceased to be just a veneer and became a more integral part of the building structure.

    Caulking is not an appropriate method for repairing cracks in historic stucco. Photo: NPS files.

    Today, gypsum, which is hydrated calcium sulfate or sulfate of lime, has to a great extent replaced lime. Gypsum is preferred because it hardens faster and has less shrinkage than lime. Lime is generally used only in the finish coat in contemporary stucco work.

    The composition of stucco depended on local custom and available materials. Stucco often contained substantial amounts of mud or clay, marble or brick dust, or even sawdust, and an array of additives ranging from animal blood or urine, to eggs, keratin or gluesize (animal hooves and horns), varnish, wheat paste, sugar, salt, sodium silicate, alum, tallow, linseed oil, beeswax, and wine, beer, or rye whiskey. Waxes, fats and oils were included to introduce water-repellent properties, sugary materials reduced the amount of water needed and slowed down the setting time, and alcohol acted as an air entrainer. All of these additives contributed to the strength and durability of the stucco.

    The appearance of much stucco was determined by the color of the sand (or sometimes burnt clay) used in the mix, but often stucco was also tinted with natural pigments, or the surface was whitewashed or color-washed after stuccoing was completed. Brick dust could provide color, and other coloring materials that were not affected by lime, mostly mineral pigments, could be added to the mix for the final finish coat. Stucco was also marbled or marbleized, stained to look like stone by diluting oil of vitriol (sulfuric acid) with water and mixing this with a yellow ochre or another color. As the twentieth century progressed, manufactured or synthetic pigments were added at the factory to some prepared stucco mixes.


    Is America the New Rome? – United States vs. the Roman Empire

    The example of the first great republic in recorded history (509 B.C. to 29 B.C.) was omnipresent in the minds of America’s founders as they created a new republic centuries later. As a consequence of their deliberations and, perhaps, the “protection of divine Providence” as written in the Declaration of Independence, the United States of America, in the minds of many of the founders, was intended to be the modern equivalent of the Roman Republic. The Roman Republic’s end was set in motion by the infamous assassination of Julius Caesar in 44 B.C.

    After a protracted civil war, Octavian became the first “Imperator Caesar,” or Roman emperor. The subsequent post-republic period of Roman dominance is known in history as the “Roman Empire.” While Rome enjoyed an additional 500 years of world dominance and internal conflict under the Caesars, history records its disintegration in the fifth century A.D. (476 A.D.) following the successful invasions of Germanic tribes.


    How can 30-year-old receivers sound better than new ones?

    Since no one listens before they buy, selling today's receivers is a numbers game, and sound quality takes a back seat.

    A 31-year-old Pioneer SX-1980 receiver, still sounding great today. Brent Butterworth

    It's a strange turn of events, but mainstream manufacturers long ago gave up on the idea of selling receivers on the basis of superior sound quality. I'm not claiming today's receivers sound "bad," but since almost no one ever listens to a receiver before they buy one, selling sound quality is next to impossible.

    Back in the days when brick-and-mortar stores ruled the retail market, audio companies took pride in their engineering skills and designed entire receivers in-house. Right up through the 1980s, most of what was "under the hood" was designed and built by the company selling the receiver. That's no longer true; the majority of today's gotta-have features (auto-setup, GUI menus, AirPlay, iPod/iPhone/iPad compatibility, home networking, HD Radio, Bluetooth, HDMI switching, digital-to-analog converters, Dolby and DTS surround processors) are sourced and manufactured by other companies. Industry insiders refer to the practice of cramming as many features as possible into the box as "checklist design." Sure, there are rare glimpses of original thinking going on; Pioneer's proprietary MCACC (Multi Channel Acoustic Calibration) auto-setup system, for example, is excellent. It's just that there's precious little unique technology in most receivers.

    It doesn't matter if those features are useful to the majority of buyers, or if they're easy to use; no, the features are included to make the product more attractive to potential buyers. It's a numbers game, pure and simple. The receiver with the right combination of features is judged to be the best receiver.

    OK, so what's wrong with that? The receiver engineers have to devote the lion's share of their design skills and budget to making the features work. Every year receiver manufacturers pay out more and more money (in the form of royalties and licensing fees) to Apple, Audyssey, Bluetooth, HD Radio, XM-Sirius, Dolby, DTS and other companies, and those dollars consume an ever bigger chunk of the design budget. The engineers have to make do with whatever is left to make the receiver sound good. Retail prices of receivers, the ones that sell in big numbers, never go up. The $300 to $500 models are where most of the sales action is, just like 10, 20 or 30 years ago, but back then the $300 to $500 models weren't packed to the gills with the features I just listed. Something's got to go, and sound quality usually takes the hit.

    The Pioneer SX-1980 housed a more massive power supply than the best of today's receivers. Brent Butterworth

    I don't blame Denon, Harman Kardon, Marantz, Onkyo, Pioneer, Sony, or Yamaha for making "good-enough-sounding" receivers, but it would be nice if they could occasionally offer one or two models with a minimal features set, and devote the maximum resources to making the thing sound as good as possible. Oh right, that's what high-end audio companies do!

    As luck would have it, my friend Brent Butterworth just wrote an article where he compared the sound of a 2009 Yamaha RX-V1800 receiver with a 1980 Pioneer SX-1980 and a 1978 Sony STR-V6 receiver. In blind tests, where the listeners did not know which receiver was playing, most preferred the sound of the ancient Pioneer. Butterworth said, "Even with all the levels carefully matched, and even in conditions where none of the receivers were ever pushed past their limits, the Pioneer SX-1980 simply beat the hell out of the other receivers." Gee, what a shock: in three decades, the industry has gone backward!

    Right up through most of the 1990s, power ratings differentiated models within a given manufacturer's lineup, but that's barely true anymore. In those days the least expensive models had 20 or 30 watts a channel, but now most low- to midprice receivers have around 100 watts per channel. For example, Pioneer's least expensive receiver, the VSX-521 ($250), is rated at 80 watts a channel; its VSX-1021 ($550) only gets you to 90 watts; and by the time you reach the VSX-53 ($1,100) you're only up to 110 watts per channel! Doubling the budget to $2,200 gets you 140 watts per channel from their SC-37 receiver. Denon's brand-new $5,500 AVR-5308CI delivers 150 watts per channel! The 31-year-old Pioneer SX-1980 receiver Butterworth wrote about was rated at 270 watts per channel. He tested the Pioneer and confirmed the specifications: "It delivered 273.3 watts into 8 ohms and 338.0 watts into 4 ohms." It's a stereo receiver, but it totally blew away Denon's state-of-the-art flagship model in terms of power delivery!
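
    To put those bench numbers in perspective, here is a minimal sketch in Python (an idealized illustration, assuming a purely resistive load and steady sine-wave RMS values, using P = V^2/R; the wattages are Butterworth's measurements quoted above):

    import math

    # Voltage and current an amplifier must sustain for a given continuous
    # power into a resistive load, from P = V^2 / R (idealized: purely
    # resistive load, steady-state RMS values).
    def rms_drive(power_watts, load_ohms):
        volts = math.sqrt(power_watts * load_ohms)  # V = sqrt(P * R)
        amps = volts / load_ohms                    # I = V / R
        return volts, amps

    for power, load in ((273.3, 8.0), (338.0, 4.0)):
        volts, amps = rms_drive(power, load)
        print(f"{power} W into {load:g} ohms -> {volts:.1f} V RMS, {amps:.1f} A RMS")

    The jump in current demand into the 4-ohm load (roughly 5.8 to 9.2 amps RMS) hints at why the massive power supply mentioned in the caption above matters: sustaining that current without sagging is largely a power-supply problem.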

    So if you care more about sound quality than features, look around for a great old receiver! Hook up your Blu-ray player's HDMI output directly to your display for state-of-the-art image quality, connect the player's stereo analog outputs to the receiver, and you may get better sound than today's receivers can deliver.


    Watch the video: What did the ancient Greeks look like? Facial reconstructions of ancient Greeks based on their busts