Geeks Informed


Wednesday, 21 December 2011

Electronics Reliability Issues at the 45 Nanometer Node and Below

Posted on 16:12 by Unknown


Most tech-aware people have heard of Moore's Law. Gordon Moore was head of research at Fairchild Semiconductor in 1965 (he went on to co-found Intel three years later) when he famously observed that the number of transistors on a typical integrated circuit (IC) was doubling approximately every 18 months.

More recently, engineers have raised concerns that Moore's Law cannot continue indefinitely. There is significant pressure on the IC industry to keep progressing, because progress is what justifies future consumer purchases: if the computer or other electronic module available today merely equals the performance of the one you already own, why would you replace it?

Advanced fabs are now manufacturing at what the industry calls the "28 nanometer node" or below, a designation that refers to the dimensions of the smallest features in the circuit. Intel has four factories in production at this technology node and has been joined by Matsushita (Panasonic), AMD (GlobalFoundries), IBM, TSMC, UMC and Samsung. So far we remain on course to maintain Moore's Law, but there are some troubling issues.

The most significant problem is reliability. As the recent problems with the chipsets accompanying Intel's "Sandy Bridge" CPUs revealed, even the best operations are vulnerable. The "Cougar Point" chipsets were recalled because a circuit degraded over time; more than $700 million worth of Intel product (about 8 million units) was affected.

The thinnest layers in these devices are now measured in atoms. At 45 nanometers, the thinnest layers are only 3-5 atoms thick, which leaves very little margin for error.

One of the fundamental problems at the new nodes is heat. The general rule of thumb in electronics is that every 10 degree Celsius rise in operating temperature roughly halves circuit lifetime. The new devices, especially microprocessors, produce more heat and are much harder to cool.
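To make the rule of thumb concrete, here is a minimal back-of-envelope sketch in Python. It simply applies the 10-degrees-per-halving figure quoted above; the baseline lifetime is a hypothetical number, and real failure modelling is considerably more involved.

    def derated_lifetime(baseline_years, delta_t_celsius):
        # Rule of thumb from above: every +10 degrees C roughly halves lifetime.
        return baseline_years * 0.5 ** (delta_t_celsius / 10.0)

    # Hypothetical part rated for 10 years at its nominal temperature,
    # run 20 degrees C hotter: expect roughly 2.5 years.
    print(derated_lifetime(10, 20))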

There are other problems, many related to the dielectrics. Voltage stress tends to increase as feature size decreases, and over time this can lead to dielectric breakdown. Once the dielectric is thinned below 2 nanometers, leakage currents increase significantly, which raises power consumption and generates still more heat.

Reliability studies project a lifetime of roughly 3-5 years for this generation of devices, compared with 10-15 years for electronics produced at the 65 nm technology node.

Flash memory has demonstrated that the smaller the device features, the fewer read/write cycles a memory cell survives before failure. System designers compensate with wear-leveling algorithms that spread writes as evenly as possible across the available flash cells, so that no single block wears out prematurely; a toy sketch of the idea follows.
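As a rough illustration only (this is not how any particular flash controller actually works), the toy allocator below always writes to the least-worn block, so erase counts stay balanced:

    import heapq

    class ToyWearLeveler:
        # Toy wear-leveling allocator: always pick the block with the fewest
        # erases. Real controllers also relocate static data, remap bad blocks,
        # and so on; this only shows the load-spreading idea.
        def __init__(self, num_blocks):
            self.heap = [(0, b) for b in range(num_blocks)]  # (erases, block)
            heapq.heapify(self.heap)

        def allocate_block(self):
            erases, block = heapq.heappop(self.heap)
            heapq.heappush(self.heap, (erases + 1, block))
            return block

    leveler = ToyWearLeveler(num_blocks=8)
    print([leveler.allocate_block() for _ in range(16)])
    # Each of the 8 blocks gets used exactly twice -- wear is spread evenly.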

Reliability problems at the 45 nm node and below are commonly discussed in the automotive, medical and military (Milspec) communities, which have always been more sensitive to reliability issues: nobody wants a critical system to fail.

Discussion of these reliability problems has been far less common among general consumers. This is predictable, since the semiconductor industry does not wish to discourage adoption of the technology, and consumers are much less organized.

Many consumers apparently don't care. Even well-documented reliability problems, for example with the Sony PlayStation 3, did not significantly dampen sales. Certain devices - game consoles, cell phones, MP3 players and the like - are treated as disposable. With the newer technology nodes, we will need to revise our expectations of what constitutes a "normal" lifetime. It remains to be seen whether consumers will continue to tolerate increasingly short product lifetimes.



Sunday, 24 July 2011

State of CAD and Engineering Workstation Technologies

Posted on 09:53 by Unknown
By TN Chan

Hardware for CPU-Intensive Applications

Computer hardware is designed to support software applications, and it is a common but simplistic view that higher-spec hardware makes all software perform better. Until recently the CPU was indeed the only device that performed computation for software applications; the other processors embedded in a PC or workstation were dedicated to their parent devices, such as the graphics adapter card for display, a TCP-offload card for network interfacing, and a RAID chip for hard disk redundancy or capacity extension. However, the CPU is no longer the only processor available for software computation, as we explain in the next section.

Legacy software applications still depend on the CPU for computation; in other words, the common view remains valid for software that has not taken advantage of other types of processors. Our own benchmarking suggests that applications like Maya 03 are CPU intensive.

For CPU-intensive applications to perform faster, the general rule is to have the highest CPU frequency, more CPU cores, more main memory, and perhaps ECC memory (see below).

Legacy software was generally not designed for parallel processing, so check with the software vendor before expecting multiple CPU cores to deliver higher performance. Regardless, you can achieve higher total output by running multiple instances of the same application, but that is not the same as multi-threading within a single application; the sketch below illustrates the difference.
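As a hypothetical illustration (the workload below is a stand-in, not any particular CAD application), here is a small Python sketch that runs the same CPU-bound job four times in a row, and then runs four independent instances side by side in separate processes:

    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n=2_000_000):
        # Stand-in for a single-threaded, CPU-bound legacy application.
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        jobs = 4

        start = time.time()
        for _ in range(jobs):                 # one instance at a time
            busy_work()
        print(f"serial:   {time.time() - start:.2f} s")

        start = time.time()
        with ProcessPoolExecutor(max_workers=jobs) as pool:
            list(pool.map(busy_work, [2_000_000] * jobs))  # 4 instances at once
        print(f"parallel: {time.time() - start:.2f} s")

On a quad-core machine the second run finishes in roughly a quarter of the time, yet each individual instance is no faster; that is the distinction between extra throughput from multiple instances and true multi-threading within one application.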

ECC stands for Error-Correcting Code memory (error checking and correction). A memory module transmits data in 64-bit words. ECC modules incorporate extra circuitry that can detect and correct a single-bit error within a word, but they cannot correct two bit errors occurring in the same word. Non-ECC modules do not check at all; the system simply carries on unless a bit error happens to violate a pre-defined rule for processing. How often do single-bit errors occur nowadays, and how damaging can one be? Wikipedia (May 2011) put it this way: "Recent tests give widely varying error rates with over 7 orders of magnitude difference, ranging from 10^-10 to 10^-17 errors/bit-hour, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory."
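To put those quoted rates in perspective, here is a small back-of-envelope calculation for a hypothetical 16 GB workstation (the memory size is an arbitrary example, not a figure from the quotation):

    BITS_PER_GB = 8 * 1024**3

    def bit_errors_per_day(rate_per_bit_hour, gigabytes):
        return rate_per_bit_hour * gigabytes * BITS_PER_GB * 24

    for rate in (1e-10, 1e-17):   # the two extremes quoted above
        print(f"rate {rate:.0e}: ~{bit_errors_per_day(rate, 16):.2e} errors/day")
    # Pessimistic end: a few hundred bit flips per day.
    # Optimistic end: roughly one flip every 80 years or so.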

Hardware for GPU-Intensive Applications

The GPU has now earned the prefix GP, for General Purpose: GPGPU stands for General-Purpose computation on Graphics Processing Units. A GPU has many cores that can be used to accelerate a wide range of applications. According to GPGPU.org, a central resource for GPGPU news and information, developers who port their applications to the GPU often achieve speedups of orders of magnitude over optimized CPU implementations.

Many software applications have been updated to capitalize on this newfound potential of the GPU; CATIA 03, Ensight 04 and Solidworks 02 are examples. As a result, these applications are far more sensitive to GPU resources than to the CPU. That is, to run them optimally we should invest in the GPU rather than the CPU for a CEW. According to its own website, the new Abaqus product suite from SIMULIA - a Dassault Systemes brand - leverages the GPU to run CAE simulations twice as fast as a traditional CPU.

Nvidia had released six cards in the new Quadro Fermi family by April 2011, in ascending order of power and cost: 400, 600, 2000, 4000, 5000 and 6000. According to Nvidia, Fermi delivers up to six times the tessellation performance of the previous family, the Quadro FX. We equip our CEWs with Fermi to achieve the best price/performance combination.

The potential contribution of the GPU to performance depends on another issue: CUDA compliance.

State of CUDA

According to Wikipedia, CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia GPUs, and it is accessible to software developers through variants of industry-standard programming languages. For example, programmers use C for CUDA (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. (The latest stable version, 3.2, was released to software developers in September 2010.)
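The article describes Nvidia's C-for-CUDA toolchain; purely as an illustration of the same many-threads programming model, here is a minimal vector-add kernel written in Python using the Numba library's CUDA support (this assumes a CUDA-capable GPU and the numba package, neither of which is mentioned in the original article):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)   # launch on the GPU

    assert np.allclose(out, a + b)

The launch expression vector_add[blocks, threads_per_block](...) expresses the same grid-of-thread-blocks model that C for CUDA exposes with its <<<blocks, threads>>> syntax.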

The GPGPU website has a preview of an interview with John Humphrey of EM Photonics, a pioneer in GPU computing and developer of a CUDA-accelerated linear algebra library. Here is an extract of the preview: "CUDA allows for very direct expression of exactly how you want the GPU to perform a given unit of work. Ten years ago I was doing FPGA work, where the great promise was the automatic conversion of high level languages to hardware logic. Needless to say, the huge abstraction meant the result wasn't good."

The Quadro Fermi family implements CUDA compute capability 2.1, whereas the Quadro FX implemented 1.3. The newer version provides significantly richer features. For example, Quadro FX did not support "floating point atomic additions on 32-bit words in shared memory" whereas Fermi does. Other notable improvements are:

Up to 512 CUDA cores and 3.0 billion transistors
Nvidia Parallel DataCache technology
Nvidia GigaThread engine
ECC memory support
Native support for Visual Studio

State of Computer Hardware Developments

Abbreviations

HDD is Hard Disk Drive
SATA is Serial AT Attachment
SAS is Serial Attached SCSI
SSD is Solid State Disk
RAID is Redundant Array of Inexpensive Disks
NAND is flash memory built from "Not AND" logic gates

Bulk storage is an essential part of a CEW, both for real-time processing and for archiving data for later retrieval. Hard disks with SATA interfaces keep getting bigger in capacity and cheaper over time, but they are not getting faster or physically smaller. To get faster and smaller we have to select hard disks with SAS interfaces, with a major compromise on capacity and price.

RAID has been around for decades, providing redundancy, expanding volume size well beyond the confines of one physical hard disk, and speeding up sequential reads and writes and, in particular, random writes. We can deploy SAS RAID to address the storage-capacity issue, but the hardware price goes up further.

SSD has recently appeared as a bright star on the horizon. It has not replaced the HDD because of its high price, the longevity limitations of NAND memory, and the immaturity of controller technology. However, it has recently found a place as a RAID cache, with two benefits not achievable by other means: much faster random reads, and a low cost point when used in conjunction with SATA HDDs.

Intel has released Sandy Bridge CPUs and chipsets that have been stable and bug-free since March 2011. System computation performance is over 20% higher than the previous generation, called Westmere. The top CPU model comes in four editions that are officially capable of over-clocking beyond 4 GHz, as long as CPU power consumption stays within the designed thermal limit, the TDP (Thermal Design Power). A 6-core edition with official over-clocking support is expected in the June 2011 timeframe.

Current State & Foreseeable Future

Semiconductor manufacturing technology has improved to 22 × 10⁻⁹ metres (22 nanometres) this year, 2011, and is heading towards 18 nanometres in 2012. Smaller means more: we will get more cores and more power from a new CPU or GPU made on an advancing process. The current laboratory probe limit is 10⁻¹⁸, and this sets the headroom for semiconductor technologists.

While the GPU and CUDA are having a big impact on performance computing, the dominant CPU manufacturers are not resting on their laurels: they have started to integrate their own GPUs into the CPU. However, the level of integration is a far cry from the CUDA world, and integrated GPUs will not displace CUDA for design and engineering computing in the foreseeable future. Our current practice, as described above, will therefore remain the prevailing approach to accelerating CAD and CAE on a CEW.

General Abbreviations

CAD is Computer Aided Design
CAE is Computer Aided Engineering
CEW is Computer aided design and Engineering Workstation
CPU is Central Processing Unit
GPU is Graphics Processing Unit


About the Author
TN Chan is the system architect of Compucon New Zealand, http://www.compucon.co.nz. He is a Chartered Engineer and has 17 years of power station engineering experience and 18 years of computer system hardware experience. His current roles are business management, knowledge transfer, technology appraisal, quality assurance and project management. He can be reached at tn@compucon.co.nz.

Wednesday, 20 April 2011

Japan's Crisis and the Impact on the Technology Sector

Posted on 12:38 by Unknown
The crisis in Japan caused by the earthquake and tsunami, and the resulting problems at the Fukushima Daiichi nuclear plant, are challenging a country that until very recently appeared to be on the brink of recovering from a 20-year recession.

The latest tragedy is now estimated to be the most costly natural disaster in history. The earthquake and tsunami concentrated most of the damage in the northern part of the country. The Japanese North is, in general, much less industrialized than Japan's Southern and Southwestern regions.

The problems at the Fukushima nuclear plant, located roughly 150 miles northeast of Tokyo, have raised concerns about both radiation and electrical generating capacity, since Fukushima was one of the most important generating facilities in Japan.

The damage to the country is severe, but Tokyo, in the Southeastern part of the island, itself suffered minimal direct damage from either the earthquake or resulting tsunami.

If the nuclear contamination can be limited to the immediate area around the Fukushima plant, and Japan can stabilize the electrical grid, then the international economic impact should be minimal. However, apparent containment breaches at multiple reactors indicate that a regional event is in progress.

The most likely outcome of the current situation is long-term contamination of the groundwater. A regional event that affected the Tokyo metropolitan area, home to 36 million people and countless technology management and R&D operations, would be very significant.

A direct comparison with Chernobyl is not ideal, since a chemical explosion at the Chernobyl plant caused a huge airborne release of radioactivity (more than 400 times that of the Hiroshima or Nagasaki bombs) that affected large parts of Europe. The Fukushima crisis looks more like Chernobyl in slow motion, and the degree of regional contamination will not be accurately measured for weeks or months.

An important component in the calculation is the difference between the Tokyo citizenry and the population surrounding Chernobyl. Approximately 15% of the adult Ukrainian population has completed higher education; Tokyo has one of the most educated populations in the world.

Kiev lies about as far from Chernobyl as Tokyo does from the current event: Kiev is 110 miles (175 kilometers) south of Chernobyl, while central Tokyo is 147 miles (238 kilometers) from Fukushima. Kiev was never officially evacuated, though many women and children left voluntarily. Would Tokyo tolerate a similarly compromised environment?

Friday, 11 February 2011

Troubleshooting Wi-Fi Wireless Network Problems - 4 Diagnostic Strategies Toward Optimal Performance

Posted on 04:12 by Unknown



By Steve Leytus

Until recently there have been two primary techniques for troubleshooting Wi-Fi 802.11 wireless networks -- network discovery and RF spectrum analysis. Network discovery is also commonly referred to as a network site survey or Wi-Fi scanning. Lately, two new Wi-Fi diagnostic tools have been introduced that broaden the range of troubleshooting techniques -- 'Wi-Fi Channel Analysis' and 'Connection Analysis'. This article briefly describes the four strategies for troubleshooting Wi-Fi networks and mentions some of the 'pros' and 'cons' associated with each.

Expensive RF analysis tools can measure a variety of parameters whose meanings and implications only an RF engineer would grasp. But when it comes to networks -- wired or wireless -- what one should ultimately be most interested in is throughput performance. 802.11 (i.e. Wi-Fi) is a robust standard that includes a variety of protocols to help devices communicate wirelessly with one another. Unless one has intimate knowledge of the 802.11 standard and its inner workings, it is not possible to predict how an 802.11 network will behave when armed solely with RF measurements. This is why it is important to focus on performance metrics -- these more accurately predict how a wireless network will actually behave in a real-world environment.

What follows is a brief introduction to the four troubleshooting techniques (network discovery, RF spectrum analysis, Wi-Fi channel analysis, and Wi-Fi connection analysis). We begin with a summary of the pros and cons of each, which should give you a high-level view of the direction in which the field of Wi-Fi diagnostics is currently headed:

Summary:

1. Network Discovery:

Advantages:

Inexpensive

Disadvantages:

Of limited use, since it only detects beacon packets transmitted by 802.11 access points. It does not "see" or measure RF energy transmitted by non-802.11 devices (which dominate the RF environment) or even by actively transmitting 802.11 stations.

2. RF Spectrum Analysis:

Advantages:

Detects all RF transmissions within a frequency band.

Based on the transmission pattern you might be able to identify the source of the interference.

Disadvantages:

Expensive -- since it requires proprietary hardware.

When it detects RF interference in the 2.4 GHz or 5 GHz ISM bands, it cannot predict how this will affect 802.11 devices or Wi-Fi network performance, since it knows nothing about the 802.11 standard or how its underlying protocols work to mitigate potential sources of interference.

3. WiFi Channel Analysis:

Advantages:

Inexpensive -- uses off-the-shelf 802.11 devices.

Measures RF interference through the eyes of an 802.11 device -- hence, can better predict how an 802.11 Wi-Fi network will actually perform in the current environment.

Can quantify the expected performance for each Wi-Fi channel, thereby allowing you to choose the optimal channel.

Disadvantages:

Of limited use when attempting to identify the source of interference.

4. Connection Analysis:

Advantages:

Inexpensive -- uses off-the-shelf 802.11 devices or your built-in 802.11 adapters.

Measures throughput performance of your 802.11 devices when connected to a Wi-Fi network -- which is the ultimate metric when it comes to troubleshooting a network.

Disadvantages:

Of limited use when attempting to identify the source of interference.

-------------------------------------------------------------------------------------------



Descriptions:

1. Network Discovery

An 802.11 network discovery tool will report the Service Set Identifier (SSID) for each access point (AP) it detects, along with the channel used by the AP. Approximately every 100 mSec an AP transmits a small beacon packet and a discovery tool (running on your laptop and using its 802.11 wireless adapter) detects the beacon and adds the packet information (including the AP's SSID) to its list of known access points. In addition, the discovery utility may report signal strength (in dBm units) of the beacon as detected by the client adapter. The beacon's signal strength is a reflection of how close the AP is to your current location. Though this is useful information, it does not tell you anything about non-802.11 devices or even how busy the access points are. That is, your laptop could be sitting next to a microwave oven and the discovery tool would be clueless as to its existence. The discovery tool only knows about beacon packets transmitted by 802.11 devices and can not see non-802.11 transmissions.

Network discovery tools use the 802.11 adapter built into your laptop or an external USB 802.11 adapter. Since they do not require additional proprietary hardware, they are relatively inexpensive (some are even free).

AP Beacon Strength Is Not A Measure Of Performance

The signal strength reported by a network discovery tool is the signal strength of a beacon as measured by the 802.11 wireless adapter installed in your laptop or desktop machine. Each access point (AP) sends out a short beacon approximately every 100 mSec -- the equivalent of an 'I'm over here!' shout. It does not expect a response from the 802.11 client adapters that may hear it; it's just a one-way shout. The beacon's signal strength is therefore an indication of the AP's physical location relative to you, not of the performance or throughput you can expect by associating with that AP. If the AP with the strongest beacon has 24 client adapters associated with it that are actively transmitting and receiving, then connecting to that AP makes you client number 25 and your network connection will seem slow. If you instead associate with an AP whose beacon is weaker but which has no other clients, you will likely experience better performance. Furthermore, the AP with the strongest beacon may be using a channel that is subject to RF interference, again degrading its performance. When it comes to networking (both wired and wireless) what we care most about is performance, and the key to performance is 'throughput' (i.e. bytes-per-second). Though a beacon's signal strength can affect performance, what matters more is the number of client stations competing for the same AP and whether the channel currently used by the AP is subject to RF interference from other wireless devices in the vicinity. A rough worked example follows.
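This toy calculation is only an illustration of the point (the link rates, client counts, and interference losses are made-up numbers, and a real access point's airtime scheduling is far more complicated):

    def per_client_throughput(link_rate_mbps, active_clients, interference_loss=0.0):
        # Toy model: clients share the AP's airtime equally and interference
        # wastes some fraction of it.
        return link_rate_mbps * (1 - interference_loss) / active_clients

    # Strongest beacon, but 25 active clients on an interference-prone channel:
    print(per_client_throughput(54, 25, interference_loss=0.3))   # ~1.5 Mbps each
    # Weaker beacon, but you are the only client on a clean channel:
    print(per_client_throughput(24, 1))                           # 24.0 Mbps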

2. RF Spectrum Analysis

An RF spectrum analyzer is the instrument of choice for detecting and identifying sources of RF interference. Spectrum analyzers are a basic tool for observing radio frequency (RF) signals. Since they detect all RF transmissions (both 802.11 and non-802.11), they provide a much better picture of the RF environment, which helps you identify and, perhaps, locate devices that could be interfering with your Wi-Fi network. Typically an RF spectrum analyzer employs a 2-dimensional display where the vertical axis (Y-axis) represents the strength of a signal and the horizontal axis (X-axis) represents its frequency. If the spectral trace of the interfering transmission has previously been documented, it might be possible to determine which type of device is causing the disturbance. As for tracking down and locating an interferer, in practice this is more difficult than it might seem. Not only does it require a directional antenna, but in an indoor environment with waves bouncing off objects and walls, how do you discern from which direction a wave originated? When your directional antenna measures a signal, you don't know whether it is the original wave or one that has bounced off an object or wall in the room.

3. WiFi Channel Analysis

Today, one of the hottest topics among Wi-Fi infrastructure manufacturers is "using the infrastructure to troubleshoot the infrastructure" -- that is, using 802.11 devices to troubleshoot an 802.11 network. Channel analysis is a new technique we have championed and pioneered. This type of tool uses 802.11 hardware to perform the data acquisition, so the results truly reflect how RF interference in the local environment affects the throughput performance of each 802.11 channel; this is not possible with an RF spectrum analyzer. Because an 802.11 channel analyzer views the RF world through the eyes of an 802.11 device, the diagnostic information it provides more closely mirrors the performance you can expect from your own 802.11 client adapters. This makes it easier to troubleshoot and fix problems and allows you to make better-informed decisions about how to configure your wireless network for optimal throughput performance.

4. Connection Analysis

Ultimately, the bottom line for any network (wired or wireless) comes down to throughput performance -- that is, how many bytes per second can be transferred from one node on the network to another. The dBm and RSSI values often quoted in the context of wireless networks don't mean much unless you can relate them to a performance metric. Before we can really begin to troubleshoot a wireless network we need a way to benchmark its performance, so that as modifications are made we can determine whether they actually improve the network. A connection analysis tool allows you to directly compare the performance and reliability of different combinations of 802.11 adapters and access points.
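As a rough sketch of what such a benchmark measures, the following Python snippet pushes bytes over a TCP connection between two machines and reports bytes per second. The port, chunk size, and transfer size are arbitrary placeholders, it needs Python 3.8+ for socket.create_server, and real connection-analysis tools measure far more than this.

    import socket, time

    CHUNK = b"x" * 65536              # 64 KiB per send (arbitrary)
    TOTAL = 50 * 1024 * 1024          # transfer 50 MiB in total (arbitrary)

    def receiver(port=5001):
        # Run this on one node; it just drains whatever the sender pushes.
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):
                    pass

    def sender(host, port=5001):
        # Run this on the other node, pointing at the receiver's address.
        sent, start = 0, time.time()
        with socket.create_connection((host, port)) as s:
            while sent < TOTAL:
                s.sendall(CHUNK)
                sent += len(CHUNK)
        secs = time.time() - start
        print(f"{sent / secs / 1e6:.1f} MB/s over {secs:.1f} s")

    # Node A: receiver()            Node B: sender("192.168.1.10")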




Steve Leytus is a senior software engineer and develops applications for NutsAboutNets.com. For more information about low cost, PC-based, WiFi diagnostic tools for installing, optimizing and trouble-shooting 802.11 (Wi-Fi) wireless networks please visit http://www.NutsAboutNets.com.


Friday, 4 February 2011

Flat-Rate Prices for Technical Contracts

Posted on 19:10 by Unknown


The latest trend in IT consulting work is flat-rate pricing. Under this scheme, the customer pays a fixed fee for work. A fixed portion of this fee is passed on to the contractor.

Flat rates are easier for the coordinating company, especially if the service originates at a retailer, but they can have negative implications for the contractor. Flat rates cause fewer problems when conditions are consistent: if every customer needed a service requiring a similar amount of time and a similar amount of travel, a flat rate would be logical.

To exacerbate the problem, some contracts include up to two visits; the logic is that one visit may be needed to determine which part is required, and a second to install it. For a contractor working in a rural environment, travel expense can be significant. If the customer is a hundred-mile trip one way, there is the potential for 400 miles of travel. This effectively discourages a contractor from accepting work orders outside a very conservative radius, as the quick calculation below shows.
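A quick worked example makes the economics concrete (the flat fee and cost-per-mile below are hypothetical figures, not taken from any actual contract):

    def contractor_net(flat_fee, visits, one_way_miles, cost_per_mile=0.55):
        travel_miles = visits * 2 * one_way_miles    # round trip per visit
        return flat_fee - travel_miles * cost_per_mile

    print(contractor_net(flat_fee=150, visits=2, one_way_miles=100))  # -70.0
    print(contractor_net(flat_fee=150, visits=1, one_way_miles=10))   # 139.0

At two visits to a customer a hundred miles away, the 400 miles of travel alone can exceed the fee; the same job ten miles away is comfortably profitable. That asymmetry is what pushes rural contractors to shrink their service radius.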

"When it comes to acquisition, government makes the rules. And, in a tough economy, contractors are more willing to adjust to those rules."

"True, firm-fixed-price contracts will result in a lower bid, particularly in today’s economy; but does this really result in better performance or better quality?" Mike Sullivan, Acquistion Solutions Inc., FederalTimes.com, June 22, 2009.


In a performance-based contracting environment, the business model must encourage the development of an incentive relationship. Incentives define how the contractor will be rewarded for performance above the minimum quality levels; disincentives define how the contractor will be penalized for performance below them.

Incentive fees contingent on the contractor meeting the desired metrics can include sharing in savings that result from ingenuity or innovation, or a reward for finishing a project early. Incentives should be in place to reward the attributes desired of the contractor's role.

Quality assurance (QA) is critical to the successful management of incentive based contracts. When properly executed, QA can provide early warning of problems in the process.

Flat-rate contracts distort the work environment. A contractor will often have to interact with a remote company, for example to qualify communications. Under flat rates, the remote company has little motivation to run an efficient process: if the contractor must wait on the phone for extended periods, why should the company care?

Under these conditions, contractors are encouraged to compromise the scope and quality of the services delivered. A conscientious contractor normally wants not only to fix the primary problem but also to give the system's general maintenance a quick once-over. Flat rates discourage this, and the result is mediocre service. If a top-of-the-line TV sold for the same price as a low-quality TV, how long would companies keep making high-quality TVs?


A useful analogy is the gasoline price controls introduced during the Nixon administration. During the 1979 energy crisis, price controls resulted in a shortage of product; the artificially distorted market brought about lines at gas pumps and a shortage of gasoline. Hawaii tried a similar strategy in 2005-2006, with disastrous results. The same forces are at play with IT services.


Wednesday, 2 February 2011

Remote Computer Support - Must Know Facts

Posted on 11:18 by Unknown

By JD Chang

Remote computer support makes sense if you cannot get local computer help, you want someone to fix your laptop problem quickly, or you cannot afford on-site repair or a home technician.



In this article, I'll address:


-What is remote repair?
-How do I find the best remote support technicians?
-How can I leverage free remote help options?


What is remote computer support?
This basically happens via the internet - thus you must have an internet connection, preferably broadband (cable, ISDN, etc.).


Remote computer help involves a technician connecting over the internet to take control of your computer and fix problems as if they were sitting in front of your laptop or desktop.


At the same time, support personnel will talk with you on the phone or via VoIP so that you can describe the problems and help work toward the solution.


Remote laptop and desktop help works best for software issues - hardware problems are very hard to solve over the phone or internet.


How do I find the best remote laptop support technicians?
Start with Google Search. Look for the following keywords:


-remote laptop repair
-distance desktop inspection
-online PC installation
-best PC technicians
-remote PC technicians
-internet laptop repair


That's just a start. When evaluating remote diagnostic options, you need to consider price, availability, expertise, and references.


Don't be scammed - dishonest online technicians can take over your computer and steal personal information such as credit card numbers, insurance details, and passwords.


Whether you use a laptop or desktop, PC or Mac, the best anti-virus or none at all, they can still do this.


How can I leverage free remote support options?


#1 is to ask friends and family. They are more trustworthy, and the computer help is usually free.


#2 is to ask desktop forums specific questions. They will tell you how to solve your software problems, without needing to remotely take control of your PC.


#3 is to find free electronics forums - there are plenty of these through Google and Yahoo! search. Set up a Skype account and a video teleconferencing account, and you can get started on free repair support!


Anything else I need to know about distance desktop repair?


There are many options - from online assistance over phone, to support by videoconferencing, to support by technicians taking control of your desktop through a remote portal.


Consider variables mentioned earlier - price, reliability, and availability. With all computer problems, you want them fixed fast! It's frustrating dealing with very slow computers and not knowing how to troubleshoot computers when needed.


Finally, think about local repair centers. These are not much more expensive and can be much more reliable. Because these local shops and support stores are brick-and-mortar, you worry less about scams.


It's just another option if you're not 100% sure of remote computer support.


Hope that helps, best of luck in solving all of your hardware and software problems.



JD Chang is an expert computer technician specializing in PC repair. Click here to solve your PC support problems. Learn about remote computer support now and save $1000s in the next 10 minutes!
