Geeks Informed


Tuesday, 30 June 2009

Mileage Logger from Vulocity

Posted on 11:50 by Unknown
It is tax season again. Many self-employed people claim automobile expenses as a tax deduction. If you are like me, you keep your mileage log on paper, and it is a major pain. Now there is a new system that will save you time twice: up front, by eliminating manual logging, and later, when it is time to do your taxes. It's called "Mileage Logger", a product from Vulocity (www.vulocity.com).

The device automatically tracks your mileage using GPS and sends the data over the GSM cellular network to Vulocity's servers.

The device is designed to record using motion activation and to require no human interaction, though it also has manual start and stop buttons. It is small, about the size of a laptop power supply, and if necessary it can be moved independently of the vehicle, running on a rechargeable battery.

This gadget will also enhance your credibility with the IRS. Last year, vehicle expenses accounted for about 15% of all tax deductions.

To see your activity, you log in to Vulocity's servers. There, you can assign trips to business or personal use, and manually input mileage. Each record contains addresses and maps to help you remember the purpose of the trip. You can merge, delete or add records, and export them to an Excel-importable file.

There is an optional "locate on demand" feature that can be used to locate your vehicle if it is stolen, or to track your fleet of vehicles. This service costs $1 a month, in addition to the normal subscription fee.


IRS

The IRS standard business mileage rate for 2009 is 55¢ per mile (30,000 business miles equates to a $16,500 reduction in taxable income). The business mileage rate was 50.5 cents in the first half of 2008 and 58.5 cents in the second half. The number of IRS Schedule C audits has increased by more than 100% over the last decade, and with this year's increase in the mileage rate, a further increase in the audit rate is anticipated. The Mileage Logger exceeds the IRS-imposed mileage log book requirements.
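The deduction arithmetic is simple enough to check yourself; here is a minimal sketch using only the IRS figures quoted above:

    # Deduction = business miles x IRS standard mileage rate
    RATE_2009 = 0.55  # dollars per mile: the 2009 IRS standard business rate

    def mileage_deduction(miles, rate=RATE_2009):
        """Return the reduction in taxable income for logged business miles."""
        return miles * rate

    print(mileage_deduction(30000))  # 16500.0, matching the figure above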

"I spend most of my time in representation of clients before the IRS, and that includes audits", says Charles Bell, an "IRS Enrolled Agent" (certified specialization in IRS accounting) based in Richardson, Texas. "I see often that very few people are prepared for audits regarding substantiating their mileage. It should be an easy task, but it’s just one of those things that human nature says, let’s not bother with it now, we’ll deal with it later."


A recent Syracuse University study has revealed that an increasing number of small businesses are being targeted by the IRS for audits. The smallest businesses were audited 41 percent more often in 2007 than in 2005.



OBD-II Port



Mileage Logger connects to your vehicle via the OBD-II port. This port is designed for vehicle diagnostics, especially pollution control, and has been required equipment on all vehicles since 1996.

The OBD-II connector will be located within three feet of the driver, usually under the dash or behind an ash tray.


Mileage Logger currently sells for $200 plus a $19 per month subscription fee.


Friday, 5 June 2009

Who Is OnForce?

Posted on 20:20 by Unknown
OnForce (OnForce.com) provides access to over 12,000 service professionals in a number of technology categories, including computers, printers, networking, voice over IP (VoIP), point-of-sale technology, and consumer electronics. Services include repair, training, and installation throughout the USA and Canada. OnForce currently processes about 23,000 work orders per month, roughly 10% more than last year.

This service allows small companies to easily provide service and support nationwide. Approximately 60% of OnForce work orders are for troubleshooting and repair. The rest is primarily installation or deinstallation of equipment.

OnForce, formerly ComputerRepair.com, is often compared to eBay, but instead of merchandise, OnForce auctions technical services. In fact, some OnForce executives came from eBay. The OnForce system typically results in work being awarded to the contractor who will agree to work at the lowest rate. OnForce and similar services have their share of detractors.

Certifications are tracked, and buyers can filter service providers by these certifications. Certification tracking has historically been an important weakness of the OnForce system, but the number of certifications tracked has recently been increased: certifications from BICSI, Cisco, Dell, Microsoft, Nortel, Samsung, Sun, and SonicWall are now also tracked. With these changes, OnForce's certification tracking earns a "Fair" grade (upgraded from "Poor"). The lack of a comprehensive Microsoft certification list is the most glaring remaining oversight.

OnForce also includes a system for tracking criminal background checks and drug tests. The contractor must pay for these checks, but since buyers often filter by these attributes, the contractor is encouraged to participate.

OnForce Technology

OnForce technology is reminiscent of early eBay technology. Remember that eBay was "technology-challenged" in its early years.

OnForce interfaces with the contractor via a web portal. Since most contractors in this type of work are mobile, OnForce also sends an SMS (cellphone text) message to alert the contractor that they need to check their account. If the contractor has a web-enabled phone, he can log in and accept the work order. SMS can be a weak link in the system, since SMS relays are notoriously unreliable. Remember that the tech may only have a couple of minutes after the SMS message is received (if he is lucky!).
[Image: OnForce service vehicle]
The next time that you see a service vehicle cut across 3 lanes of traffic for a highway exit, you might consider that it could be an OnForce tech racing the clock to bid on a job.

Remember when Domino's Pizza was sued for pressuring their drivers to drive recklessly to deliver pizzas on time? Domino's should have reorganized their drivers into a contractor workforce ... Shazaam!!! Zero Responsibility!

Enhancements to the system by OnForce are promised. OnForce is undoubtedly an important development in the IT services chronology.

AT&T ConnecTech Now Partners with OnForce

AT&T now offers on-site tech support, called "ConnecTech", that includes services such as PC repair and audio-video work. Zip Express Installations (a spin-off of Best Buy) also uses OnForce, primarily for flat-screen installs.

Because these tech support services are hard to manage with a unionized labor force, companies like AT&T are contracting with third parties like OnForce for these services.

The on-site IT market does not have a clear leader at the moment. With the bankruptcies of Circuit City and CompUSA, it is clear that there is not a retailer that's well positioned to fill that role.

OnForce has earned the Geeks Informed Smell Test rating of Somewhat Stinky.


The Switch to Digital TV, 24 Hours and Counting: Are We Ready?

Posted on 09:22 by Unknown
6/11/09


On Friday, the U.S. broadcast television industry goes "digital". Make a mental note to check on your neighbors and family. One survey this week said that 2 million homes are still not prepared for the switch.

TV reception is not a luxury. If you are reading this article, you are likely tech savvy. This will be a day to be generous with your skills.

Not everyone is pleased with the transition's efficiency. "This is a $650 million mistake," said Rep. Joe L. Barton (R-Tex.), who was an opponent of transition postponement. If the transition program uses all of the money, "they've managed to spend $1,000 per household for a device that costs $50."

Digital TV is intended to give our TV broadcast system a needed update. Digital technology results in higher-quality reception and is more efficient in its use of RF spectrum. With increased efficiency, more channels can be broadcast, and RF spectrum can be freed for other services, such as wireless broadband and public safety communications.

TV stations, in conjunction with the Federal Communications Commission, have conducted several five-minute interruptions of the legacy analog signal. These tests let viewers gauge their readiness for the transition and publicize the upcoming event.

The last "soft test" was on May 21, and resulted in over 55,000 calls to the FCC's hotline. Just over half of the calls were requests for information for the agency's coupon subsidy program.

"It was a wake-up call for consumers who are unprepared, alerting them to the fact that they need to take the necessary steps before the June 12 DTV transition." said acting FCC Chairman Michael Copps.




  • The new deadline is now less than 20 days away, but will we be ready on June 12?


  • The answer depends on one's perspective. If we wait until 100% of consumers have completed their preparations, we might as well cancel the plan. It will never happen. However, if we set our perspective realistically, the country as a whole is well prepared.

    In January, according to research firm Nielsen, 6.5 million U.S. households were unprepared for the switch to digital television, still receiving only analog signals over antennas. Now the number of households said to be unprepared has been cut nearly in half, to about 3.5 million (approximately 3 percent of households).

    Approximately one-third of full-power TV broadcast stations are already completely transitioned, and more will do so soon. In total, about 45% of TV stations will have already switched to digital-only broadcasting before June 12.

The National Telecommunications and Information Administration (NTIA) is again sending out discount coupons for digital-to-analog converter set-top boxes. The NTIA will even exchange expired coupons. To date, 26 million coupons have been redeemed.

There is concern that demand for the needed converter boxes might exceed supply. But according to Gary Shapiro, president of the Consumer Electronics Association (CEA), "Our survey data suggest that manufacturers and retailers will likely meet consumer demand for converter boxes and antenna through the end of the transition".

    Most Americans are aware of the switch, said Anne Elliott, vice president of communications at Nielsen. "At this point, I think it would be hard to imagine that anybody who watches television has not heard of this transition." But "there are always folks who buy presents on Christmas Eve and people who line up at the post office on April 15" to file their taxes.



    Cyberwar in Estonia and the Middle East

    Posted on 08:58 by Unknown
    By Aviram Jenik

    Did a member of your family help launch a cyber attack that brought an entire nation to its knees? No, seriously, don't laugh. In April 2007, communications in the Baltic state of Estonia were crippled through a coordinated attack that relied on the computers of millions of innocent users around the world, just like you and your kin. The strike was notable in fully demonstrating how cyber war had moved from idea to reality. And it all started with the movements of a single soldier.


    The Bronze Soldier is a two-meter statue which formerly stood in a small square in Tallinn, the Estonian capital, above the burial site of Soviet soldiers lost in the Second World War. The memorial has long divided the population of the country, with native Estonians considering it a symbol of Soviet (and formerly Nazi) occupation and a large minority population (around 25% of the total) of ethnic Russian immigrants seeing it as an emblem of Soviet victory over the Nazis and Russian claims over Estonia. When the country's newly appointed Ansip government initiated plans to relocate the statue and the remains as part of a 2007 electoral mandate, the move sparked the worst riots the country had ever seen - and a startling cyber attack from Russia.


On April 27, as two days of rioting shook the country and the Estonian embassy in Moscow found itself under siege, a massive distributed denial-of-service (DDoS) attack overwhelmed most of Estonia's internet infrastructure, bringing online activity almost to a standstill. The targets were not military websites but civilian sites belonging to organizations such as banks, newspapers, internet service providers (ISPs), and even home users. Much of the onslaught came from hackers using ISP addresses in Russia, but the most devastating element of the attack was a botnet which co-opted millions of previously virus-infected computers around the globe to pummel the Estonian infrastructure.


    Anatomy of a Cyber Attack


    The botnet fooled Estonian network routers into continuously resending useless packets of information to one another, rapidly flooding the infrastructure used to conduct all online business in the country. The attack centered mainly on small websites which were easy to knock out, but nevertheless was devastatingly effective. Bank websites became unreachable, paralyzing most of Estonia's financial activity. Press sites also came under attack, in an attempt to disable news sources. And ISPs were overwhelmed, blacking out internet access for significant portions of the population.


    While the Estonian government was expecting there to be an online backlash to its decision to move the statue, it was completely unprepared for the scale of the cyber attack. Estonia's defense minister went on record to declare the attack "a national security situation", adding "it can effectively be compared to when your ports are shut to the sea."(1)


    Once it became clear that most of the country's online business infrastructure was being affected, the Computer Emergency Response Team for Estonia (CERT-EE) issued a plea for help from IT security specialists worldwide and an ad-hoc digital rescue team was assembled, which included people from my own firm, Beyond Security. It took us a few days to get to the bottom of the threat and begin setting up frontline defenses, which mainly involved implementing BCP 38 network ingress filtering techniques across affected routers to prevent source address spoofing of internet traffic. The attack waned quickly once we started taking defensive measures. But in the days it took to fight off the attack, it is likely that the country lost billions of Euros in reduced productivity and business downtime.
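For readers unfamiliar with BCP 38, here is a minimal sketch of the idea (my own illustration in Python, not the actual tooling we deployed): a router-side check that drops packets whose source address does not belong to a prefix expected on the interface they arrived on. The interface-to-prefix mapping below is invented for the example.

    from ipaddress import ip_address, ip_network

    # Hypothetical mapping: interface -> prefixes legitimately sourced behind it.
    EXPECTED_PREFIXES = {
        "eth0": [ip_network("192.0.2.0/24")],     # example customer segment
        "eth1": [ip_network("198.51.100.0/24")],  # second customer segment
    }

    def permit(interface, src_ip):
        """BCP 38 ingress check: allow a packet only if its source address
        falls inside a prefix expected on the receiving interface."""
        addr = ip_address(src_ip)
        return any(addr in net for net in EXPECTED_PREFIXES.get(interface, []))

    print(permit("eth0", "192.0.2.17"))   # True  - legitimate source
    print(permit("eth0", "203.0.113.5"))  # False - spoofed source, drop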


    Cyber War in the Middle East


    The Estonian incident will go down in history as the first major (and hopefully biggest ever) example of full-blown cyber warfare. However, there is one place on earth where cyber war has become part of the day-to-day online landscape - and it is still ongoing.


    In the Middle East, the Arab-Israeli conflict has a significant online element, with thousands of attacks and counter-attacks a year. This has been the situation since the collapse of peace talks in the region and was preceded by a spontaneous wide-scale cyber war between Arab and Israeli hackers in 1999 and 2000. Arab sympathizers from many nations are involved. A group of Moroccan hackers have been defacing Israeli web sites for the last six years or so, and recently Israel's military radio station was infiltrated by an Iraqi hacker.


    Unlike the blitzkrieg-like strike in Estonia, this protracted warfare is not intended to paralyze critical enemy functions but more to sap morale, drain resources and hamper the economy. The targets are typically low-hanging fruit in internet terms: small transactional, informational and even homespun web sites whose security can easily be compromised. Taking over and defacing these sites is a way of intimidating the opposition - creating a feeling of 'if they are here, where else might they be?' - and leads to significant loss of data, profits and trust for the site owners.


    Cyber War Spreads


    If the Estonia and Middle East examples were our only experiences of cyber warfare then it might be tempting to put them down to local factors and therefore not of concern to the wider security community. Sadly, however, these instances are simply part of a much larger trend towards causing disruption on digital communications platforms. In January this year, for example, two of Kyrgyzstan's four ISPs were knocked out by a major DDoS hit whose authors remain unknown.(2) Although details are sketchy, the attack is said to have disabled as much as 80% of all internet traffic between the former Soviet Union republic and the west.


    The strike appeared to have originated from Russian networks which are thought to have had links to criminal activity in the past, and probably the only thing preventing widespread disruption in this instance was the fact that Kyrgyzstan's online services, unlike those in Estonia, are poor at the best of times. It was apparently not the first such attack in the country, either.(3) It is claimed there was a politically-motivated DDoS in the country's 2005 presidential elections, allegedly attributed to a Kyrgyz journalist sympathizing with the opposition party.


    China has also engaged in cyber warfare in recent years, albeit on a smaller scale. Hackers from within the country are said to have penetrated the laptop of the US defense secretary, sensitive French networks, US and German government computers, New Zealand networks and Taiwan's police, defense, election and central bank computer systems.


In a similar fashion, in 2003 cyber pests hacked into the UK Labour Party's official website and posted a picture of US President George Bush carrying his dog - with the head of Tony Blair, the Prime Minister of the UK at the time, superimposed on it.(4) The incident drew attention to government sites' lax approach to security, although in this particular event it was reported that hackers had exploited the fact that monitoring equipment used by the site's hosting company had not been working properly. And as long ago as 2001, animal rights activists were resorting to hacking as a way of protesting against the fur trade, defacing luxury brand Chanel's website with images of slaughtered animals.(5)


    The Case for the Defense


    What do all these incidents mean for policy makers worldwide? Both the Estonian and Middle Eastern experiences show clearly that cyber war is a reality and the former, in particular, demonstrates its devastating potential. In fairness, Estonia was in some ways the perfect target for a cyber strike. Emerging from Russian sovereignty in the early 1990s with little legacy communications infrastructure, the nation was able to leapfrog the developments of western European countries and establish an economy firmly based on online services, such as banking, commerce and e-government. At the same time, the small size of the country - it is one of the least populous in the European Union - meant that most of its web sites were similarly minor and could be easily overwhelmed in the event of an attack. Last but not least, at the time of the Estonian incident, nothing on a similar scale had been experienced before.


    It is safe to say that other nations will now not be caught out so easily. In fact, if anything, what happened in Estonia will have demonstrated to the rest of the world that cyber weapons can be highly effective, and so should be considered a priority for military and defense planning.


    What might make cyber warfare the tactic of choice for a belligerent state? There are at least five good reasons. The first is that it is 'clean'. It can knock out a target nation's entire economy without damaging any of the underlying infrastructure.


    The second is that it is an almost completely painless form of engagement for the aggressor: an attack can be launched at the press of a button without the need to commit a single soldier.


    The third reason is cost-effectiveness. A 21,000-machine botnet can be acquired for 'just a few thousand dollars', a fraction of the cost of a conventional weapon, and yet can cause damage and disruption easily worth hundreds of times that.(6)


    The fourth is that it is particularly difficult for national administrations to police and protect their online borders. A DDoS attack may be prevented simply by installing better firewalls around a web site (for example), but no nation currently has the power to tell its ISPs, telecommunications companies and other online businesses that they should do this, which leaves the country wide open to cyber strikes.


    The last but by no means least reason is plausible deniability. In none of the cyber war attacks seen so far has it been possible to link the strike with a government authority, and in fact it would be almost impossible to do so. In the case of the Chinese hack attacks, for instance, the authorities have provided a defense which amounts to saying: 'There are probably a billion hackers on our soil and if it was us we would have to be stupid to do it from a Chinese IP address.'


    A similar logic potentially provides absolution to the Russian administration in the case of Estonia: if it is so cheap and easy to get a botnet to mount a DDoS attack, why would the Russians bother mounting hack attacks from their own ISPs? And in the Kyrgyz attack, although the source of the DDoS clearly points to a Russian hand, the motives for Russia's involvement remain hazy, leading to a suggestion that it may have been caused by Kyrgyzstan's own incumbent party, acting with hired cyber criminals from Russia.


    Tactics For Protection


    With all these advantages, it is unlikely that any military power worth its salt is by this stage still ignoring the potential of cyber warfare. In fact, since the Estonia incident it is even possible that the incidence of cyber warfare has increased, and we are simply not aware of the fact because the defensive capabilities of the sparring nations have increased. After all, another important lesson from Estonia is that it is possible to mount a defense against cyber attacks. There is no single solution, no silver bullet, but a range of measures can be taken to deal with the kinds of DDoS issues faced by Estonia and the kinds of hacker attacks still going on in the Middle East.


    For DDoS strike avoidance, there are four types of defense:
• Blocking SYN floods, in which an attacker (for example) spoofs the return address of a client machine so that a server receiving its connection message is left hanging when it attempts to acknowledge receipt (a minimal detection sketch follows this list).
    • Implementing BCP 38 network ingress filtering techniques to guard against forged information packets, as employed successfully in Estonia.
    • Zombie Zappers, which are free, open source tools that can tell a device (or 'zombie') which is flooding a system to stop doing so.
    • Low-bandwidth web sites, which prevent primitive DDoS attacks simply by not having enough capacity to help propagate the flood.
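By way of illustration for the first item, here is a minimal detection sketch (my own, with an invented threshold; production defenses rely on SYN cookies and firewall rate limiting rather than anything this naive): track half-open connections per source and flag sources that accumulate too many.

    from collections import defaultdict

    HALF_OPEN_LIMIT = 100  # invented threshold, purely for illustration

    half_open = defaultdict(int)  # source IP -> connections awaiting final ACK

    def on_syn(src_ip):
        """Register a SYN; return False once a source holds too many half-open connections."""
        half_open[src_ip] += 1
        return half_open[src_ip] <= HALF_OPEN_LIMIT

    def on_ack(src_ip):
        """Handshake completed: the connection is no longer half-open."""
        if half_open[src_ip] > 0:
            half_open[src_ip] -= 1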


For hacker attacks such as those seen in the Middle East, meanwhile, there are three main types of defense:
• Scanning for known vulnerabilities in the system (a toy port-scan sketch follows this list).
    • Checking for web application holes.
    • Testing the entire network to detect the weakest link and plug any potential entry points.
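To make the first of these concrete, here is a toy TCP port scanner in Python. Real vulnerability scanners do far more (service fingerprinting, vulnerability matching), but the connect-and-report loop below is the kernel of the idea; the host and port list are placeholders, and you should only scan systems you are authorized to test.

    import socket

    def scan(host, ports):
        """Return the subset of ports that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)  # don't hang on filtered ports
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    # Scan a host you are authorized to test:
    print(scan("127.0.0.1", [22, 80, 443, 8080]))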


    A Doomsday Scenario?
    All the above are useful defensive tactics, but what about strategic actions? First and foremost, the Estonian experience showed that it is important for the local CERT to have priority in the event of an attack, in order to ensure that things can return to normal as soon as possible.


Authorities can also, as far as possible, check national infrastructures for DoS and DDoS weaknesses, and finally, national CERTs can scan all the networks they are responsible for - something the Belgian CERT has already started doing. Given the openness of the internet and the differing challenges and interests of those operating on it, these measures will of course only provide partial protection. But it is hoped they would be enough to prevent another Estonia incident. Or would they?


There is, unfortunately, another type of cyber war strike which we have yet to see and which could be several times more devastating than what happened in Estonia. Rather than trying to hack into web sites just to deface them - a time-consuming effort with relatively little payback - this tactic would involve placing 'time bombs' in the web systems concerned. These could be set to lie dormant until triggered by a specific time and date or a particular event, such as a given headline in the national news feed. They would then activate and shut down their host web site, either using an internal DoS or some other mechanism.


The code bombs could lie dormant for long enough for a malicious agency to crack and infect most or all of the major web sites of a country. And in today's networked world, this is no longer about simply causing inconvenience. Think of the number of essential services, from telephone networks to healthcare systems, which now rely on internet platforms. Knocking all these out in one go could have a truly overwhelming impact on a nation's defensive capabilities, without the need for an aggressor to send a single soldier into combat.


    The means to create such an attack definitely exist. So do the means to defeat it. What has happened in Estonia and the Middle East shows we now need to consider cyber warfare as a very real threat. What could happen if we fail to guard against it really does not bear thinking about.


References
1. Mark Landler and John Markoff: 'Digital fears emerge after data siege in Estonia'. New York Times, 29 May 2007.
2. Danny Bradbury: 'The fog of cyberwar'. The Guardian, 5 February 2009.
3. Ibid.
4. 'Labour website hacked'. BBC News, 16 June 2003.
5. 'The fur flies'. Wired, 23 January 2001.
6. Spencer Kelly: 'Buying a botnet'. BBC World News, 12 March 2009.



    Aviram Jenik is the CEO of Beyond Security, which has developed tools that uncover security problems in servers and web sites, discover vulnerabilities in corporate networks, check computer systems for vulnerability to hostile external attack and audit vendor products for security risks.


    Aviram Jenik

    Beyond Security

    http://www.beyondsecurity.com

    1616 Anderson Road

    McLean, VA 22102

    1-800-801-2821

    brianp@beyondsecurity.com


    Thursday, 4 June 2009

    Archives

    Posted on 11:20 by Unknown


  • The Switch to Digital TV: Are We Ready?


  • E-Cycle





    Segmenting Inside (Linux)

    Posted on 08:40 by Unknown
    By Guy Smith

    Once in a great while you see a company doing what would be sane in other markets, but might be a Herculean improbability in their own.


    Yes, this has to do with the Linux market.


Specifically, this has to do with the embedded Linux market, a realm so fragmented that 'chaos' is too polite a description. It is also one of Linux's silent success stories. Odds are that you are within five feet of one or more devices with embedded Linux inside. Glancing about my office I count three (a printer, a router, and a cell phone), though I suspect the hub and print server are Linux-based as well.


The embedded Linux market is fragmented along several vectors. The primary vector of discord is the application: router makers, printer makers and cell phone makers have different interests and needs with embedded Linux. A while back my neighbors at Wind River were toying with the notion of creating an online community where users in the different markets could share innovations in a non-competitive environment, but that initiative seems to have fallen in the gutter.


    Now MontaVista wants to do the opposite.


Ignoring for a moment the unfortunate aspect of having the word 'vista' in their corporate name, the folks at MontaVista have decided that the proper approach to the market is to offer embedded Linux packages tailored to different market segments. They are not tackling the respective industries (routers, printers, cell phones, etc.). MontaVista is segmenting their embedded Linux offering by CPU/platform - Atom, PowerQUICC II Pro, PowerQUICC III, TI OMAP35x, etc.


Unlike the x86 server market, where use variations between box vendors are relatively limited, the chip market for embedded Linux is highly fractured. The differences are allegedly significant enough that loading a Linux distro down with cross-platform packages is a burden to buyers. MontaVista claims that many in the market buy an embedded Linux package and then customize it to their platform before using it in production.


    Which seems very odd given that the use of Linux Inside is typically for the more primitive functions.


MontaVista is segmenting their product to match the chip-based segments of the market. Now segmentation is a Good Thing™ for marketers to do. What I find curious is that the assembly of a Linux package by CPU is a significant segmentation vector, and yet it has taken this long for a vendor to segment accordingly.


    Which means it may not be a prime vector for segmenting.


Over at Wind River, they segment based on the category of final product in which their Linux will be embedded. There are Wind River Linux distros for automotive devices, networking gear, and consumer products, as well as several medical and military specs. Instinctively this seems the more rational segmentation model. Consumer devices need user interface packages (imagine a G-Phone without the G-UI). Networking gear doesn't need fancy UIs, but it does need routing and network security functions that a consumer device might not.


The method to MontaVista's madness may be in their new Integration Platform (sigh, another use for the acronym IP). Akin to the openSUSE Build Service, the goal is to provide customers with ways of safely and sanely customizing MontaVista's core distro. This saves buyers the pain of finding, including and removing parts of a Linux distro to make it work for the intended application.


    Here is a contrast in market approaches: Wind River has both a general purpose distro and a string of special builds for different industries. MontaVista has a general distro with some reconfiguration for different CPUs and with a tool to tailor the distro to your specific needs.


    Which approach is better?


I'll have to give the short-term nod to Wind River. Business in competitive markets moves fast. Wind River provides products pre-configured for various industries, which can still be tweaked by the customer (or by Wind River) if there is some exotic need. This helps customers get their products to market faster and possibly cheaper. If Wind River were to engineer an openSUSE/MontaVista-IP type system for customization, they would be hitting on all cylinders.


The marketing lesson herein is that segmentation is always driven by the customer base, not the convenience of the vendor. Segmenting by industry is a natural for many technology vendors, but it may not be viable for your products. There are two primary goals in segmenting, which we'll be happy to explain once we land you as a client. Your segmentation model must meet these goals. If it doesn't, you will embed your company into the ground.



    Guy Smith is the chief consultant for Silicon Strategies Marketing. Guy brings a combination of technical, managerial and marketing experience to Silicon Strategies projects.


Directly and as a consultant, Guy has worked with a variety of technology-producing organizations. A partial list of these technology firms includes DeviceAnywhere (mobile applications), ORBiT Group (high-availability backup software), Telamon (wireless middleware), Wink Communications (interactive television), LogMeIn (remote desktop), FundNET (SaaS), Open-Xchange (groupware), VA Software (enterprise software), Virtual Iron (server virtualization), SUSE (Linux distributions and applications), BrainWave (application prototyping) and Novell.


    http://www.SiliconStrat.com


    Isotopic Variants Of The Intel Atom Processor

    Posted on 05:52 by Unknown
    By Debasis Das

In one of my previous articles, I mentioned how the Intel Atom processor is fueling the growth of netbooks. Atom processors come in a range of variants, divided into a few families depending on the resources available on the processor, clock speed, whether they support multi-threading, and so on.

[Image: Atom block diagram]

Current Variants: The Atom "isotopes", the current variants of the Atom processor, are the 200 family, the 300 family, the N270 series and the Z5xx series. A large percentage of netbooks have Atom variants powering them; that's how pervasive it has become. What makes it so attractive? We shall take a quick look here at the features of these processors.

According to Intel, there is no particular significance, performance or otherwise, to the numbers assigned to these parts. For example, the Z5xx family members are not necessarily higher-performance, faster devices than the 200 family. There are different features to these families, though, and that is what we shall quickly review here. The ideal way of doing it would be to present a matrix of the isotopes vs. the features available; we shall highlight the differences in prose and sum them up in a rough matrix at the end.

The processor components available so far are the Z540, Z530, Z520, Z510, Z500, N270, 330 and 230. The latest member to join the team is the N280. The major features are as follows; the format used is the processor number followed by clock speed, L1 cache size, FSB speed and max TDP rating. Clock speed in GHz is a straightforward item: the parts are around the 1.6 GHz mark except for the Z540 and N280, which are faster at 1.86 and 1.66 GHz respectively. The Z500 has the slowest clock speed at 800 MHz.


[Image: Intel's Atom variants]

The cache size specification is that of the first level, the so-called L1. This is fairly uniform across the members except for the 330, which has a 1 MB cache and should show a marginal advantage over similarly clocked processors when highly sequential programs are involved. The clock speeds indicate a spread of raw processor speeds from 800 MHz to 1.86 GHz - quite a range. The FSB (front side bus) is the interface to the memory system off chip; the memory system here is the main memory, or the system memory, as it is variously called. This is a specification that largely determines system performance.


Even though the on-chip processor is really fast, the FSB speed largely determines how fast the complete system will work on average. The N280 can handle the fastest interface at 667 MHz, while the Z510 and Z500 can handle only 400 MHz transactions; the other parts work at 533 MHz. A unique thing about the N280 is that it can handle HD video: combined with the GN40 chipset, it can go right up to 1080p. What is really interesting about these processors is their extremely low-power operation. TDP (thermal design power) is a specification for the maximum power dissipated inside the chip that should be taken into consideration for thermal design. Except for the 330 and 230, the ratings are at the 2.5-watt level; the lowest dissipation is on the Z500. There are other features included in the processors that help manage power dissipation closely within those specifications; we shall discuss those at some future opportunity. The processors have all been designed with 45 nm lithography, and the physical chip measures just 13 mm × 14 mm. Truly miniature-sized wonders!
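As promised, here is that rough matrix, compiled strictly from the figures above (tilde values are the family norms described in the text; entries the text does not specify are marked as such):

    Processor | Clock     | FSB      | Max TDP
    Z540      | 1.86 GHz  | 533 MHz  | ~2.5 W
    Z530      | ~1.6 GHz  | 533 MHz  | ~2.5 W
    Z520      | ~1.6 GHz  | 533 MHz  | ~2.5 W
    Z510      | ~1.6 GHz  | 400 MHz  | ~2.5 W
    Z500      | 800 MHz   | 400 MHz  | lowest of the family
    N270      | ~1.6 GHz  | 533 MHz  | ~2.5 W
    N280      | 1.66 GHz  | 667 MHz  | ~2.5 W
    330       | ~1.6 GHz  | 533 MHz  | above 2.5 W (unstated)
    230       | ~1.6 GHz  | 533 MHz  | above 2.5 W (unstated)

    Cache: fairly uniform across the family, except the 330 with its 1 MB cache.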



Debasis Das has worked with technology companies for close to four decades, 25 years of which have been with IT consulting companies. He has managed software and geospatial industry outsourcing from India for international clientele, working with customers from the US, Europe, Japan and China. He is widely traveled and currently engaged in consultancy in software, embedded systems and technical content.

    Contact at ddas15847@yahoo.com
    Web site at http://www.consult-debasis.com


    Wednesday, 3 June 2009

    Servers - A Tale of Two Technologies

    Posted on 09:43 by Unknown

    By Arthur Cole

    "It was the best of times, it was the worst of times..."


    Dickens was describing London and Paris during the French Revolution. But in today's world, it is an apt description of the IT industry during the virtual revolution.

[Image: server room]

    For the worst of times, we need look no further than the server industry, which reported another disastrous quarter earlier this week. According to IDC, worldwide shipments dropped some 26.5 percent year-over-year in the first quarter of 2009, with all of the major vendors showing double-digit revenue drops. Overall, the industry shipped only 1.49 million units, the largest decline in five years, with revenues down nearly a quarter to $9.9 billion.


    The source of all this woe is the one-two punch of the recession and virtualization, which dampens the demand for new hardware through higher utilization of existing machines. While this may be good for capital budgets, as well as the environment, it's proving to be a real burden for the server industry, which had long counted on a steady refresh rate to keep its coffers full. The decline was most keenly felt in x86 devices.


    IDC is also reporting that the picture seems to be the same for the second quarter so far, although they are predicting a tepid rebound by the fourth.


    To their credit, many of the top server vendors are not trying to push back the tide but are actively embracing virtualization and other advanced technologies designed to produce more efficient hardware platforms. IBM, for instance, is gearing up for a new server line that takes advantage of Intel's forthcoming Nehalem-EX architecture that features up to 64 cores across eight processors. Although the system is likely to be expensive, it could do the job of multiple blade servers through its ability to handle up to 128 individual threads. The chip itself also provides 16 memory slots per socket and four QuickPath interconnect links for processing large amounts of data in tandem.


Now for the best of times. All of this virtual and multicore activity is clearly a boon to the networking side of the house, particularly wide-band solutions like 10 GbE. Dell'Oro Group reports that the 10 GbE market rebounded in the first quarter, following a decline in the fourth quarter of 2008. The company did not release any numbers from its Network Adapters Quarterly Report, although it did say that Intel is once again the leader in adapter card revenue and port shipments, while Broadcom retained its spot as the leader in silicon controllers.


    This all makes perfect sense, of course, because as more and more data starts to run through fewer and fewer hardware devices, the focus of data center performance shifts from raw processing power to network agility and speed. Going forward, as cloud technologies allow enterprises to shift resources on a global scale, the question will no longer be "Do I have enough power to handle all this data?", but rather "How can I get this data quickly to my various end-points?"


    And in this vein, there doesn't seem to be anyone interested in slowing things down. Mellanox, for example, just unveiled a 6-port, multiple-protocol 10 GbE physical layer that lays the groundwork for a new generation of high-density, low-power switches and pass-through devices. The PhyX supports all 10 Gigabit Ethernet physical layer functions and can be field-upgraded to FCoE with 2, 4, and 8 Gbps Fibre Channel gateway service without hardware modifications.


    With such precipitous changes in data center hardware buying patterns, many wonder if things will ever get back to normal. While sales and revenue figures have fluctuated over the years, the hard news this time is that these changes look permanent. Once the recession is over, server sales should pick up, but they will be nowhere near previous numbers because those low utilization rates are gone forever.


    The new normal will be relatively low server activity and increasingly fast networks as enterprises position themselves for the cloudy/virtual decade to come.



    Read Art's article, "The Three Factors Shaping the Future of the Data Center" - http://bit.ly/LQD6h


    Tuesday, 2 June 2009

    Enabling Diagrammatic Modelling of Engineering Problems

    Posted on 09:18 by Unknown
    By Peter Hale

    Introduction
C.S. Peirce (1906) stated in 'Prolegomena to an Apology for Pragmaticism': "Come on, my Reader, and let us construct a diagram to illustrate the general course of thought; I mean a system of diagrammatization by means of which any course of thought can be represented with exactitude". That is the purpose of this research, but to limit the scope and make application of this theory testable, the research is restricted mainly to engineers (because they often think in terms of diagrams) and to the domain of modelling (which often requires diagrams). The aim is to apply the research first where it can be of most use, and to encourage others to expand it for other domains and other users.

This research is intended to simplify computing for computer-literate non-programmers, a group that includes many engineers. The main research area is enabling users such as engineers to model the problems they encounter in manufacturing and design. However, the wider aim is to prototype research for enabling a larger range of software users to model their problems. The intention is to create collaborative tools that allow users to develop software in a way they will be familiar with from their use of spreadsheets. This research brings together approaches of object orientation, the Semantic Web, relational databases, and Model-Driven and Event-Driven programming. Frankel et al. (2004) explain the opportunities for, and importance of, this kind of research.


    Iterative development is used both in this research and in the implementation to ensure that changes can be made systematically as necessary and without disrupting a project.


Software engineering and modelling have much in common with engineering modelling, and so do the tools used for each. Software process modelling, engineering process modelling, and business/workflow modelling share a common approach and similar tools. Much of this commonality lies in the need to transform requirements into design, and design into code, semi-automatically. To achieve this, continuous consultation with potential users is required - engineers for engineering modelling problems and developers for software problems.


Organisations face many limitations resulting from the lack of facilities that allow users to program - for example, reliance on 'out of the box' modelling tools that are hard to customise or to extend with collaborative capabilities, adopted because a project deadline is so urgent that nothing else is practical.


    Methodology
A common factor in these various types of modelling is the need to transform from a high-level abstraction to a lower level, such as a computer model, and then to code. This is illustrated by examples of semi-automatically produced programs/models (Hale, 2008). The translation process starts from a tree/graph representation; each node is translated into a code representation of the equation that relates it to other nodes, and this code is then presented in the interface as a result tree/graph. This can be achieved for programs and/or web pages. Kraus et al. (2007) examine and implement this transformation problem and also produce code and/or web pages. Uschold (2003) defines the Semantic Web as being machine-usable and associated with more meaning, so it is a good way to convey the abstractions represented in a source and result tree to the end user.


The intention is to demonstrate a way to construct diagrammatic representations of cost, using the example of an aircraft wingbox (the structure, or skeleton, of the wing). These diagrammatic representations are achieved by visual representation of the items and equations that make up wingbox cost. The items can be represented in the standardised categories used in engineering - 'materials', 'processes', 'cost rates' etc. - and the equations that relate the items can be expressed in standard mathematical form. Therefore, using the same methodology and the same categories, it would be possible to represent other items and equations in the same way, so the methodology is reusable for costing other engineering components, including those outside aerospace. The costing method is also recursive, because components and sub-components can be costed separately or together, top-down or bottom-up. This methodology has the potential to be applied to any calculation-based modelling problem.
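To make the recursive costing idea concrete, here is a minimal sketch; the component names and figures are invented, and the real wingbox model is of course far richer. A component's cost is its own 'materials' and 'processes' costs plus the costs of its sub-components, rolled up from the bottom.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        material_cost: float = 0.0  # the 'materials' category
        process_cost: float = 0.0   # 'processes' costed at the relevant 'cost rates'
        children: list = field(default_factory=list)

        def cost(self):
            """Recursive bottom-up roll-up: own costs plus sub-component costs."""
            own = self.material_cost + self.process_cost
            return own + sum(child.cost() for child in self.children)

    # Invented example: a wingbox assembled from two spars and a set of ribs.
    wingbox = Component("wingbox", process_cost=500.0, children=[
        Component("front spar", material_cost=1200.0, process_cost=800.0),
        Component("rear spar", material_cost=1100.0, process_cost=750.0),
        Component("ribs", material_cost=900.0, process_cost=600.0),
    ])
    print(wingbox.cost())  # 5850.0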


Engineering modelling can be performed using a high-level diagrammatic view of the problem, conveyed to the computer via transformation. Solutions to this transformation problem can be found by adapting current tools and techniques using a systematic approach. Such tools and techniques involve modelling tools, spreadsheets, ontology management tools, and Semantic Web and Web 2.0 tools. These possible solutions are not mutually exclusive, and their combination could be the best way of providing usable collaborative modelling tools for computer-literate end users and domain experts. The link between these alternative ways of advancing current research is translation and User Driven Modelling/Programming.


    Enabling diagrammatic de-abstraction and modelling of engineering problems, Peter Hale, http://userdrivenmodelling.blogspot.com/2009/05/enabling-diagrammatic-de-abstraction.html

    It is possible to create an extra layer of visualised semantics to enable users to specify commands in structured language. This approach of adding extra layers is the way this visual programming works. Users provide the information the program needs at the visual interface layer, and program code is created automatically. The layers provide the bridge between abstract ideas and computer code. If this approach is taken to its logical conclusion, it would be possible to allow the user to specify what the computer should do. Then each layer would communicate this to the layer below until the computer performs the action required. A simple example of this approach is the use of spreadsheets. Users can specify a calculation in mathematical terms using a formula. The spreadsheet then calculates the result of the formula. Users can change the formula if it is incorrect without any need to write code or re-compile. This accounts for the popularity of spreadsheets. However, spreadsheets do not provide the centralised and structured data-store required for a distributed collaborative system. Therefore, the research concentrates on combining the wide applicability of generic spreadsheet modelling with structured and adaptable modelling and visualisation.
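As a toy illustration of that spreadsheet analogy (my own sketch, not the project's actual implementation), here is a minimal formula store in which users declare named cells and the layer below evaluates them on demand, recomputing automatically when an input changes:

    class Sheet(dict):
        """Evaluate constant cells directly and formula cells recursively."""
        def __getitem__(self, key):
            value = dict.__getitem__(self, key)
            return value(self) if callable(value) else value

    sheet = Sheet({
        "span": 30.0,                              # plain input value
        "chord": 4.0,                              # plain input value
        "area": lambda c: c["span"] * c["chord"],  # formula over other cells
        "cost": lambda c: 150.0 * c["area"],       # formula over a formula
    })

    print(sheet["cost"])  # 18000.0 = 150 * (30 * 4)
    sheet["span"] = 35.0  # the user edits an input; no code, no recompile
    print(sheet["cost"])  # 21000.0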


It is important to enable changes to the design of the information source and its structure as necessary, even when it already contains information. This makes continuous improvement of the information and its representation possible. Clear visualisation of the structure makes out-of-date and duplicate information obvious, so it can be changed by the end-users of the information. This provides for maintenance of information quality without requiring end-users to understand relational database design, though relational databases can still be accessed by software specialists for more in-depth and less frequent structural changes.


    Program transformation allows for writing in one representation or language, and translating to another. This is particularly useful for language independent programming, or for high level and end user programming that can then be translated to a language more easily interpreted by computer systems.


A taxonomy representation is translated into a computer model, and relationships can be conveyed to a software model that evaluates them. Information is translated from the taxonomy and visualised in tree form in a decision support tool, using the example of spar manufacture information. The visualisation of the information in a tree can be further translated into an interactive diagram, and the representation can be translated into different languages, to allow for language independence.


    Related Research
Crapo et al. (2002) assert the need for a methodology for creating systems that enable more collaborative approaches to modelling by domain-expert end-users, and argue that this, combined with visualisation, would allow engineers to model problems accurately. Huhns (2001) and Paternò (2005) both explain that alternatives to the current approach to software development are required. Modelling languages such as Alloy, explained by Wallace (2003), can be used as an interface to an End-User Programming environment. Transformation from a model-building environment to program code has been investigated by Gray et al. (2004).



    My Research - http://www.cems.uwe.ac.uk/~phale/


    Modelling - http://sites.google.com/site/userdrivenmodellingprogramming/


I am a researcher in the final year of my PhD, specialising in applying Semantic Web techniques. My current research is on a technique of 'User Driven Modelling/Programming'. My intention is to enable non-programmers to create software from a user interface that allows them to model a particular problem or scenario. This involves a user entering information visually, in the form of a tree diagram. I am attempting to develop ways of automatically translating this information into program code in a variety of computer languages, which would be very useful for the many employees who have insufficient time to learn programming languages. I am also researching visualisation techniques to create a human-computer interface that allows non-experts to create software.


    Mission Statement

    Posted on 09:00 by Unknown


Mission: one of the reasons that the IT services industry is so prone to corruption is that the industry is so fragmented. Most everyone on the worker-bee side is a freelancer.

    Over time, many of the coordinating companies have come to believe that screwing the contractor is within their rights. The contractor should feel lucky if they ever get paid. For many contractors who have worked in this environment for several years, this situation is "normal".

    This needs to change.

It was bad enough when small "fly-by-night" companies were prowling the industry. But much more disturbing are the Fortune 100 companies that are now indirectly participating in the activity. It is not OK to knowingly hire another company to commit a crime.

    The first step to address the fragmentation of the Geek Community is communication. If Geeks are aware of the actions of the coordinating companies, good and bad, then the Geek can act accordingly.

The next needed step is a measured amount of oversight by the federal government. The best situation would have been a healthy free market; in this case, that plan has resulted in a Mafia-esque technical services market. This is not a local problem. Companies in rural "friendly" municipalities are committing grand larceny nationally, and these locales have not been chosen at random. Without intervention, this situation will surely worsen.

    That is the purpose of this blog:


    keeping Geeks Informed.





    Monday, 1 June 2009

    All About Performance Testing - The Best Acceptance Criteria

    Posted on 08:29 by Unknown
    By Yogindernath Gupta

First of all, let us see what the term "Performance Testing" means:


For general engineering practice, "Performance Testing" refers to the evaluation & measurement of the functional characteristics of an individual, a system, a product or any material.


However, in software industry parlance, the term "Performance Testing" widely refers to the evaluation & measurement of the functional effectiveness of a software system or a component, as regards its reliability, scalability, efficiency, interoperability & stability under load.


These days a new discipline by the name of "Performance Engineering" is emerging in the IT industry, & Performance Testing / Acceptance Testing are being viewed as its subsets. Performance engineering lays prime emphasis on covering the performance aspects in the system design itself, i.e. right from the beginning &, more importantly, well before the start of actual coding.


Why the software industry lays so much emphasis on Performance Testing:


    The key reasons are:


1) Performance has become the key indicator of product quality and acceptance in today's highly dynamic & competitive market.
2) Customers are becoming extremely demanding on the quality front & have a clear vision of their performance objectives.
3) These days, every customer is looking for greater speed, scalability, reliability, efficiency & endurance of all applications - be they multi-tier applications, web-based applications or client-server applications.
4) There is a greater need for identifying & eliminating performance-inhibiting factors early in the development cycle. It is best to initiate the performance testing effort right at the beginning of the development project & keep it active until final deployment.


    What are the objectives of Performance Testing?


1) To carry out root-cause analysis of common & uncommon performance problems & devise plans to tackle them.
2) To reduce the response time of the application with minimal investment in hardware.
3) To identify the problems causing the malfunctioning of the system & fix them well before the production run. Problems remedied during the later stages of production have high cost tags attached to them.
4) To benchmark the applications, with a view to refining the company's strategy towards software acquisition next time.
5) To ensure that the new system conforms to the specified performance criteria.
6) To draw a comparison between the performance of two or more systems.


    Typical Structure of a Performance Testing Model:


Step-1: Collection of requirements - the most important step & the backbone of the performance test model.
Step-2: System study.
Step-3: Design of testing strategies - can include the following:


    a) Preparation of traversal documents.
    b) Scripting Work.
    c) Setting up of test environment.
    d) Deployment of monitors.


Step-4: Test runs - can cover the following:


    a) Baseline Test Run
    b) Enhancement Test Run
    c) Diagnostic Test Run


    Step-5: Analysis & preparation of an interim report.
    Step-6: Implementation of recommendations from step-5.
    Step-7: Preparation of a Finalized Report.


    Attributes of a Good Performance Testing setup:


1) Availability of a performance baseline document detailing the present performance of the system & acting as an effective baseline for regression testing. This document can also be used to compare against expectations when system conditions change.
2) Performance test beds & the test environment should be separate & must replicate the live production environment as far as possible.
3) The performance testing environment should not be coupled with the development environment.
    4) Resources leading to fulfillment of objectives like:


    # Deployment of personnel with sound knowledge
    # Systematic & deliberate planning
    # Study of existing infrastructure
    # Proper preparation
    # Systematic execution
    # Scientific analysis
    # Effective reporting


    However, these days many companies have started doing part of their testing in the live environment. This helps them establish the points of difference between the test and live systems.


    How to gear up for Performance Testing?


    1) Define the performance conditions: First of all, we need to define the performance conditions related to the functional requirements, such as speed, accuracy, and resource consumption. Resources include memory, storage space, and the bandwidth of the communication system. (A sketch expressing such conditions as a simple test configuration appears after this list.)
    2) Study the operational profile: The operational profile contains details of the usage patterns and environment of the live system. It includes a description of the period of operation, the operating environment, the load levels, and the expected transactions. When exact data is not available, it can be approximated from testing profiles, especially when testing is not being done in the live environment.
    3) Prepare good performance test cases: While designing performance test cases, our endeavor must be to:


    a) Understand the present performance levels and use this information for benchmarking at a later date.
    b) Evaluate the performance requirements of the system against the specified norms.
    c) Clearly specify the system inputs and the expected outputs when the system is subjected to the defined load conditions, such as the test profile, the test environment, and the test duration.
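
    To make items 1 and 2 concrete, the performance conditions and operational profile can be captured in a small machine-readable structure that later drives the test scripts. This is a sketch only; every field name and target value below is an illustrative assumption rather than a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceConditions:
    """Targets derived from the functional requirements (example values)."""
    max_response_time_s: float = 2.0       # speed
    max_memory_mb: int = 512               # resource consumption
    max_bandwidth_mbps: float = 10.0       # communication-system load

@dataclass
class OperationalProfile:
    """Usage pattern of the live system (example values)."""
    period_of_operation: str = "08:00-20:00"
    concurrent_users: int = 200
    transactions_per_hour: int = 5000
    transaction_mix: dict = field(default_factory=lambda: {
        "search": 0.6, "checkout": 0.3, "admin": 0.1})

conditions = PerformanceConditions()
profile = OperationalProfile()
print(conditions)
print(profile)
```

    Keeping these values in one structure makes it easy to review them with stakeholders and to feed them directly into the load-generation scripts.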


    Ways of doing Performance Testing:


    Conventionally, there are two methods of performance testing:


    1) Manual performance testing
    2) Automated performance testing


    1) Manual Performance Testing: To develop adequate confidence, response times, a good indicator of a transaction's performance, must be measured several times during the test. Using stopwatches monitored by several people is one of the oldest and simplest ways to measure test performance. Depending on the available infrastructure, other means can also be devised.
    2) Automated Performance Testing: Many approaches are practiced here. We can use automation software that simulates user actions while simultaneously recording response times and various system parameters, such as disk access, memory usage, and message queue lengths.


    We can also impose additional data load on the system through utility programs, message-replication programs, batch files, and protocol-analysis tools. A minimal load-generation sketch follows.
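
    As a minimal sketch of the automated approach, the snippet below simulates several concurrent users against a web endpoint and records individual response times. The URL, user count, and request count are illustrative assumptions; real tools add ramp-up periods, think time, and monitoring of system parameters such as disk and memory.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # assumed endpoint: point at the system under test

def one_request(_):
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
    except OSError:
        return None              # a real harness would also classify failures
    return time.perf_counter() - start

def load_test(users=10, requests_total=200):
    """Simulate `users` concurrent users issuing `requests_total` requests."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_request, range(requests_total)))
    ok = [t for t in timings if t is not None]
    if ok:
        print(f"{len(ok)}/{len(timings)} succeeded; mean {sum(ok)/len(ok):.3f} s")
    else:
        print("all requests failed; is the system under test running?")

if __name__ == "__main__":
    load_test()
```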


    Important Considerations for Designing Good Performance Test Cases:


    1) Stress: To exercise the ability of a system or component when pushed beyond the specified limits of its performance requirements.
    2) Capacity: To establish the maximum amounts that can be contained, produced, or processed before the entity is fully occupied.
    3) Efficiency: To verify the desired efficiency, measured as the ratio of the volume of data processed to the amount of resources consumed for that processing.
    4) Response time: To verify the specified response-time requirements, i.e., the total time elapsed between initiating a request and receiving its response (see the percentile sketch after this list).
    5) Reliability: The system must deliver the expected results with sufficient consistency.
    6) Bandwidth: The test must measure and evaluate the bandwidth requirements, i.e., the amount of data passing across the system.
    7) Security: The test must evaluate user confidentiality, access permissions, and data-integrity considerations in the system.
    8) Recovery: The test must subject the system to high loads and measure the time it takes to return to normal after the load is withdrawn.
    9) Scalability: The system must be able to handle more load through the addition of hardware components, without any code changes.
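
    For the response-time consideration in particular, results are often judged against percentiles rather than averages, because a healthy mean can hide a slow tail. A minimal sketch, assuming a list of measured timings and an illustrative 95th-percentile target of 2 seconds:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (in seconds)."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Illustrative measurements; in practice these come from the test run.
timings = [0.8, 1.1, 0.9, 1.4, 3.2, 1.0, 1.2, 0.7, 1.3, 1.1]

p95 = percentile(timings, 95)
print(f"p95 = {p95:.2f} s ->", "PASS" if p95 <= 2.0 else "FAIL")
```

    Here the mean is around 1.3 seconds, yet the 95th percentile fails the 2-second target, which is precisely the kind of problem an average would conceal.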


    Lessons learned:
    A performance-engineering approach encompassing load testing, stress testing, and endurance testing is an extremely important acceptance consideration in today's highly competitive market, with its demanding, quality-conscious customers.



    http://www.softwaretestinggenius.com


    Earth Week E-Cycle

    Posted on 07:28 by Unknown

    According to the EPA, discarded electronics account for 220 million tons of refuse every year, enough material to fill trucks stretching bumper-to-bumper for more than 2,000 miles. This electronic waste includes computer parts, monitors, printers, microwave ovens, cell phones, batteries, and audio-video equipment.

  • Every year, humans dispose of approximately 250 million computers.

  • In California alone, 6,000 computers become obsolete every day.

  • In the United States there are over 200 million mobile phones (more than 1.2 billion cell phones were sold worldwide in 2008).

  • Only 10% of electronic equipment is recycled. The rest winds up in landfills, where it will sit for the rest of eternity.




  • Since electronic equipment contains toxic substances, this enormous volume poses a tremendous health and environmental risk: in landfills, the toxins can leach into the soil and groundwater. For example, cathode ray tubes (CRTs) in computer monitors and televisions contain heavy metals, including lead, barium, and cadmium.

    According to data from EIAE.Org, lead accounts for approximately 10% of the weight of a CRT television or computer monitor. An estimated 70 percent of the heavy metals found in U.S. landfills come from discarded electronics. These metals can be very harmful to the health of people and wildlife if they enter the groundwater.

    More than half of recycled electronics end up in scrapyards in China, Ghana and other developing countries. There, the refuse is burned or dismantled by hand, exposing desperate workers to mercury, lead and other toxic materials.


    The five hundred million computers currently in use contain 6.32 billion pounds of plastics, 1.58 billion pounds of lead, and 632,000 pounds of mercury.

    The Basel Action Network

    In the United States, there are no federal regulations to address e-waste disposal, but a few states have enacted laws to address the problem. Arkansas, California, Maine, Massachusetts, Minnesota and Washington have passed regulations governing the disposal of electronic waste.

    Consumer Reports has an online tool called "Fix It or Nix It" to help consumers decide whether their electronics are worth repairing or upgrading.

    Donating a Computer to Charity

    [Image: African electronics dump in a landfill]

                                              Or

    [Image: E-Cycle Day. Photo: Dru Bloomfield]

    There are several charitable organizations, like Goodwill (reconnectpartnership.com) and the National Cristina Foundation (cristina.org), that accept electronics for recycling. You can also contact eiae.org to find a local e-cycle center.

    Giving away old electronics can qualify as a charitable donation (a tax deduction). If you intend to donate a computer, you should not completely erase the hard drive, since charitable organizations usually cannot afford to purchase new operating systems. If possible, provide the charity with the original installation media and documentation from when the computer was purchased.

    Technical Aspects of Computer Recycling

    The computer's owner has the most detailed knowledge of which data on the machine needs to be purged and which needs to be retained.

    Properly deleting files requires more than just dragging them into the Recycle Bin and emptying it. Erased documents can still be recovered unless they are purged more thoroughly. Similarly, reformatting the hard disk may not prevent the recovery of old data, since it is possible for disks to be "unformatted".

    The best options are commercially available programs like Norton SystemWorks. These programs not only delete a file, but remove its traces from the hard drive by writing noise over the area that once held the sensitive data. A rough sketch of this overwrite-before-delete approach follows.
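
    As a rough illustration of that overwrite-before-delete idea (a sketch, not a replacement for dedicated wiping software), the snippet below overwrites a file with random bytes before removing it. One caveat worth knowing: on SSDs and journaling or copy-on-write filesystems, overwriting in place does not guarantee the old blocks are destroyed, so whole-drive tools remain the safer choice before donating a machine.

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes, then delete it.

    Caveat: on SSDs and journaling/copy-on-write filesystems the old
    blocks may survive, so dedicated whole-drive tools are more thorough.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())    # force the overwrite down to the device
    os.remove(path)

# Example with a hypothetical file name:
# overwrite_and_delete("old_tax_records.xls")
```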



    Even though the process takes a little effort, recycling is worth the trouble. By donating the computer to charity, you can probably save enough money on your taxes to make it worth your time. And you can sleep better, knowing that you did your part as a good citizen of planet Earth.