Opscode Brings Chef to Windows

Posted by eXactBot Hosting | News | Monday 31 October 2011 6:43 pm

Managing server and application configuration on Linux and Unix boxes during the past several years has gotten easier thanks to open source tools like Chef and Puppet. Now Opscode, the lead commercial sponsor behind the Chef project, is bringing the benefits of Chef to Windows users.

The Chef system enables administrators to leverage ‘recipes’ that define configuration for automated deployments.

“Our early community and our early adopters were much like us, so we developed our system all around deployment and monitoring on Linux and Unix,” Christopher Brown, chief technology officer at Opscode, told InternetNews.com.

When it comes to getting Chef up and running on Windows, Opscode is bundling the open source Ruby language in with the installer.

“We have an all-in-one installer, since we recognize that Windows folks expect a higher level of polish for an installer,” Brown said. “Getting Ruby and some of the other required dependencies is not always straightforward, that’s why the all-in-one installer is necessary.”

The addition of Windows support isn’t changing the development approach for Chef. Brown said that there is a single version of Chef for both Windows and Linux. He explained that the backend of Chef is basically a publishing platform that doesn’t really care what system the bits are running on.

“The bits that are specific happen in the recipes,” Brown said. “People will write recipes that will only work on Linux or Windows, but that’s not a difference in the Chef backend.”

 

Read the full story at ServerWatch:
Opscode Brings Chef to Windows

Going Parallel with the Task Parallel Library and PLINQ

Posted by eXactBot Hosting | News | Monday 31 October 2011 5:57 pm

When developing applications, most developers tend to think linearly through the logical steps needed to complete a task. While sequential thinking leads to working applications that are relatively easy to understand, such single-threaded applications are not able to benefit from the multiple cores in today’s processors.

Before the multi-core era began, Intel and AMD launched faster processors each year, with ever-increasing clock speeds. Effectively, this meant that the same application code simply ran faster on each processor generation: a real-world case of a free lunch.

However, current processor technology limits the fastest clock speeds to around 3 GHz, yet manufacturers still need to come up with faster and faster processors to match demand. Because raising clock speed is (currently) out of the question, the only way to increase performance significantly is to increase the number of cores or execution units in the chips. These multiple-core processors are then able to execute instructions in parallel, thus providing more speed. Today’s two- and four-core processors are only the beginning; in the future, 16-, 32-, and 64-core systems will be commonly available.

But unlike with increasing clock speed, as vendors add multiple cores, your application will not automatically run faster if you just rest on your laurels. The free lunch is over. Because most .NET applications are single-threaded by default (although they may use additional threads for such things as database connection pools), your application code will still run on a single core. For example, if you run a single-threaded .NET application on a PC with a quad-core processor, it will run on one core while the other three cores sit idle.

You might assume that a quad-core processor can still run multiple applications faster than a traditional single-core processor. To some degree, that’s true, because the Windows task scheduler can assign different processes to run on different cores. (The same thing would happen if you had multiple processors with a single core each.)

However, to take full advantage of the multiple cores in even mainstream PCs these days, you need to make your application use more than one thread. That way, the operating system can schedule your application’s threads onto multiple cores for simultaneous execution. You need two separate skills to do this: the ability to identify the places where threading can improve performance, and the ability to implement that behavior.

Speaking of implementation, introducing multiple threads into an application is often easier said than done. In fact, using threads properly has been one of programming’s most difficult tasks, until now. Although .NET has provided threading support since version 1.0, using the Thread class and the low-level locking mechanisms correctly requires skill that not all developers have.
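To appreciate the difference, here is a minimal sketch, with invented names, of the classic approach just described: work partitioned by hand across Thread objects, threads started and joined manually, and every write to shared state guarded by a lock.

using System;
using System.Threading;

class ManualThreadingSketch
{
    static long total = 0;
    static readonly object totalLock = new object();

    static void SumRange(int start, int endExclusive)
    {
        long localSum = 0;
        for (int i = start; i < endExclusive; i++)
            localSum += i;

        // Forgetting this lock still compiles, but silently introduces a data race.
        lock (totalLock)
            total += localSum;
    }

    static void Main()
    {
        // Partition the work by hand across two threads.
        Thread t1 = new Thread(() => SumRange(0, 500000));
        Thread t2 = new Thread(() => SumRange(500000, 1000000));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine(total); // 499999500000
    }
}

None of the partitioning, lifetime management, or locking above is the interesting part of the program, yet each piece is a chance to get it wrong.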

    To help more developers gain from the current processors, Microsoft is planning to include support for easier threading in the forthcoming version 4.0 of the .NET framework. For example, the new libraries support running for and foreach loop iterations in parallel with only small alterations to your code. Similarly, you can use a parallel version of LINQ to help boost the performance of your queries.
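For example, a data-parallel loop can be written with Parallel.For or Parallel.ForEach. The following is a minimal sketch rather than code from the article; the array and the per-item work are invented, and it assumes iterations are independent of one another:

using System;
using System.Threading.Tasks;

class ParallelLoopSketch
{
    static void Main()
    {
        double[] data = new double[1000000];

        // Parallel.For partitions the index range across the available cores.
        // Each iteration writes only its own element, so no locking is needed.
        Parallel.For(0, data.Length, i =>
        {
            data[i] = Math.Sqrt(i);
        });

        // Parallel.ForEach does the same for any IEnumerable<T> source;
        // again, each iteration must not depend on the others.
        Parallel.ForEach(data, value =>
        {
            // per-item work goes here
        });
    }
}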

    This article discusses the new parallel programming features available in the future releases of Visual Studio 2010 and .NET 4.0.

    Understanding the New Features in .NET 4.0

Figure 1. New Parallel Architecture: In .NET 4.0, the Task Parallel Library and Parallel LINQ sit above the new Concurrency Runtime.

    When planning the next version of the .NET Framework, one key design consideration was to let developers harness the power of the current processors more easily (see Figure 1). The results of this planning and development work have culminated in a new concurrency runtime with supporting APIs. Both will be available to developers when Visual Studio 2010 and .NET 4.0 are released to manufacturing.

For .NET developers, the new API classes are probably the most interesting new features. The parallel API can further be divided into two parts: the Task Parallel Library (TPL) and Parallel LINQ (PLINQ). Both features help developers use processors more fully. You can think of the Task Parallel Library as a generic set of parallel capabilities, whereas PLINQ focuses on data (or object) manipulation.

Having additional parallelism support in the .NET framework is great in itself, but the story gets better once you bring Visual Studio’s IDE into the mix. Although Visual Studio has had windows to help debug threaded applications for a long time, the new features in Visual Studio 2010 are aimed squarely at developers using the new parallel APIs.

    For instance, Visual Studio 2010 has a new window called Parallel Tasks, which can show all tasks running at a given point in time (see Figure 2).

    Figure 2. Parallel Tasks: The new Parallel Tasks window shows which tasks are running.

    Another new IDE window shows stacks in a new way, referred to as the “cactus view” (see Figure 3), which can help when debugging applications that perform parallelization through the Task Parallel Library. You will also get access to new performance measurement tools that can help you spot bottlenecks in your code.

    Figure 3. Parallel Stacks: Visual Studio 2010’s Parallel Stacks window provides a new way to peek at stacks.

When designing the Task Parallel Library and PLINQ, Microsoft focused on making the features intuitive to use. For example, to run three simple tasks in parallel, you can use the Task Parallel Library as follows:

Parallel.Invoke(
  () => MyMethod1(),
  () => MyMethod2(),
  () => MyMethod3());
    

    Looks easy! Next, assume you had a traditional LINQ query like this:

int[] numbers = new int[50];
...
var over100 = from n in numbers
              where n > 100
              select n;
    

    To convert this query to a parallelized PLINQ version, simply add the AsParallel construct to the query:

var over100 = (from n in numbers
               where n > 100
               select n).AsParallel();
    

Again, that’s quite simple. After the change, PLINQ will attempt to parallelize the query, taking into account the number of processors (or processor cores) available. The preceding query is for illustration only; it wouldn’t actually benefit much from parallelization, because the per-element work is trivial. For more complex queries that do benefit, you’d make the AsParallel method call in exactly the same way, as in the sketch below.
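As a purely illustrative sketch of a query that does benefit, consider an expensive per-element predicate; IsPrime here is an invented helper, not code from the article:

using System;
using System.Collections.Generic;
using System.Linq;

class PlinqSketch
{
    // Invented helper: deliberately expensive enough per element for PLINQ to pay off.
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    static void Main()
    {
        // AsParallel on the source lets PLINQ fan the filter out across cores.
        // Note that results arrive unordered unless AsOrdered() is added.
        List<int> primes = (from n in Enumerable.Range(2, 1000000).AsParallel()
                            where IsPrime(n)
                            select n).ToList();

        Console.WriteLine(primes.Count);
    }
}

But before going into PLINQ specifics, it’s worth exploring the TPL.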

    CRM rivals invest in social and cloud

    Posted by eXactBot Hosting | News | Monday 31 October 2011 12:42 pm

    CRM and Social

The big guns of customer relationship management, Oracle and Salesforce.com, continue to vie for dominance through acquisitions.

Back in March, Salesforce.com enhanced its social strategy and Service Cloud 3 offering with a $326 million purchase of Radian6, the popular social-media monitoring platform. Last week, Oracle made competitive moves to boost its cloud business by announcing a $1.5 billion purchase of RightNow, the customer service monitoring company. For that price, Oracle will get a nice set of social cloud applications and some 2,000 reported customers.

Oracle’s stockpile of cash and savvy acquisitions have heated up its rivalry with Salesforce in the customer relationship management (CRM) space. Oracle’s CRM assets include Siebel On Demand, Inquira, InstantService, PeopleSoft, and ATG. Meanwhile, Salesforce has made some purchases of its own, including the recent acquisition of Assistly to augment its existing Service Cloud offering.

    All this spending shows some genuine momentum for customer service management vendors. While Radian6 and RightNow are off the table, there’s no shortage of companies pushing strong social and cloud offers.

    I recently spoke to Sam Keninger, head of product marketing at Medallia, about the company’s launch of Medallia Social Feedback–a platform that brings social media analytics and geo-location to market as a part of a full customer experience management suite, according to Keninger.

Medallia takes social media management out of a siloed product and puts it into a platform clients already use to analyze and respond to other customer feedback channels, including e-mail-, Web-, and receipt-based surveys; call center feedback; and custom feedback channels for verticals and business-to-business transactions.

    Though Medallia touts customers like Four Seasons, Hilton, Sephora, American Express, and Gold’s Gym, it doesn’t yet have the client volume of RightNow or Radian6. Still, its unified platform (which integrates unstructured social feedback data with structured survey data) could make it an attractive acquisition target. Another attractive aspect is the fact that the platform already works with Salesforce, Siebel, and other enterprise relationship management tools.

Most would agree that there is value in customer relationship tools, but it remains to be seen whether these types of tools do better on their own or as part of a bigger package. Certainly the bigger vendors have a broader channel to sell through, but I wonder how much customers care.

    Why OpenMAMA is the Future of Open Source

    Posted by eXactBot Hosting | News | Monday 31 October 2011 11:43 am

    OpenMAMA

    From the ‘Open Source for Wall St.’ files:

A group of financial firms has come together under the auspices of the Linux Foundation in a new open source effort known as OpenMAMA (Middleware Agnostic Messaging API).

     

OpenMAMA is an effort to standardize and simplify the MAMA APIs that have been in use since at least 2002. The basic idea behind having an open source implementation of MAMA is to have a level set: a baseline implementation used to promote interoperability. The financial industry, especially stock exchanges like the NYSE, is no stranger to Linux. The Big Board itself has been running on Red Hat since at least 2008. There has also been collaboration among financial services vendors as part of the AMQP messaging standard.

OpenMAMA is a bit different, though. The way I see it, this is a case where the financial firms, and in particular the NYSE, see a way to make money by open sourcing their own technology.

Make no mistake about it: OpenMAMA isn’t about any kind of altruistic Free Software zeal; it will help the financial services companies make money. Instead of going to a standards body, these vendors have decided that open source is what makes the standard.

This bodes incredibly well for Open Source as the de facto approach to building standard technology now and in the future. The only way that technology can be a standard is for it to be open, and the way to be open is open source.

“I suspect that many people may view our effort to open source MAMA with skepticism and suspicion,” the OpenMAMA site states. “NYSE Technologies’ motivations for giving MAMA to the community are a topic worthy of a post of its own, but it is important to emphasize that OpenMAMA is truly FOSS (free and open source software). We chose the Linux Foundation to host the project because we feel that they bring both credibility in the open source community and a neutral home for the OpenMAMA project. Also, we selected the LGPL 2.1 license for OpenMAMA because it places the fewest restrictions on MAMA users while its hereditary nature ensures that the project thrives and remains open.”

    Siri now flirting with older iPhones–for real

    Posted by eXactBot Hosting | News | Monday 31 October 2011 5:42 am


Siri answering a tough question. (Credit: CNET)

Siri’s exclusivity on the iPhone 4S may not be long for this world. At least unofficially.

Efforts to get the new software feature working on older Apple devices, including the iPhone 4 and iPod Touch, seem to have pushed past the biggest hurdle: slipping by Apple’s security.

    Over the weekend, Apple tracking blog 9to5mac posted a video of the software feature working smoothly on an iPhone 4, courtesy of Irish iPhone hacker Steve Troughton-Smith. That follows a demonstration from earlier this month where Troughton-Smith showed the software installed, but unable to run queries on an iPhone 4.

In an interview over the weekend, Troughton-Smith told 9to5mac that the working version of the hack runs on multiple devices, including Apple’s fourth-generation iPod Touch. The feat was accomplished using “files from an iPhone 4S” that he said “aren’t ours to distribute,” alongside “a validation token from the iPhone 4S that has to be pulled live from a jailbroken iPhone 4S.”

In other words, there are some things going on behind the scenes that Apple won’t like and could very well move to block if a working hack takes off. The key takeaway, though, is that there are seemingly no hardware hurdles standing in the way.

    Here’s a video of it working on an iPhone 4 with Troughton-Smith’s workaround in place:

As for when you could get your hands on the hack, Troughton-Smith said he’s not going to package it up for people with jailbroken iPhones to grab and install; he’s leaving it to others to do that.

    Siri made its debut during the unveiling of the iPhone 4S earlier this month. The feature lets iPhone 4S users talk to their phone to issue commands, which are then piped to Apple’s servers over a 3G or Wi-Fi connection, then sent back as commands to the phone. The entire turnaround takes just a few seconds, but depends entirely on a handshake between the phone and Apple’s servers, which has kept the feature from being jury-rigged onto older devices, as well as causing problems when Siri can’t connect.

Older iPhones are not the only target for porting Siri. Last week, developer Jackoplane posted screenshots of the voice assistant software installed on an iPad, though like Troughton-Smith’s earlier effort, it couldn’t connect when it came time to talk to Apple’s servers. With the newer workaround, that could change.

Apple has a long history of leaving new software features off older hardware, though in Siri’s case, the expectation was that it depended, in part, on the newer dual-core A5 processor. A similar hardware requirement came with the introduction of Voice Control on the speedier iPhone 3GS, which processed voice commands on the device itself instead of through Apple’s servers.

Where Siri goes from here continues to be of intense interest given the expected future of Apple’s product line. All eyes are on the company to introduce a television set in the next year or two, and Siri is assumed to play a part in that vision. While Siri is more likely to arrive on something like the iPad or iPod first, there’s also the possibility of it jumping to Apple’s Mac OS X.

    Penetration Testing Shows Unlikely Vulnerabilities

    Posted by eXactBot Hosting | News | Monday 31 October 2011 12:33 am

The SpiderLabs division of security firm Trustwave conducts over 2,000 penetration tests a year looking for IT security risks. While some audits find routine flaws, others lead to the discovery of extraordinary types of enterprise security risks.

Speaking at the SecTor security conference in Toronto last week, Nicholas Percoco, senior vice president and head of SpiderLabs, explained that penetration tests need to look beyond the surface to find business logic and other deeply ingrained flaws.

One of the more interesting hacks SpiderLabs has done is called “Do You Want Fries with that Hack?” The penetration testing team was conducting a test for a large restaurant chain that takes take-out orders over the Internet. The initial penetration testing sweep revealed that the Web application used Java and Flash and was not at risk from any common exploits or SQL injection issues.

Ryan Linn, senior security consultant with SpiderLabs, noted, however, that the credit card processing was handled by a third party via JavaScript, and the testers were able to manipulate payment info as it passed to the third-party processing firm.

    “What was missing was JavaScript validation,” Linn said. “So we adjusted the price of the food and we were able to get a meal delivered for $.50 cents.”
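The article doesn’t show the restaurant chain’s code, but the general fix for this class of bug is well understood: treat every price or total submitted by the browser as untrusted and recompute it on the server. A hypothetical sketch, with menu items and prices invented for illustration:

using System;
using System.Collections.Generic;

class OrderValidationSketch
{
    // Authoritative server-side price list; the client never sets prices.
    static readonly Dictionary<string, decimal> Menu = new Dictionary<string, decimal>
    {
        { "burger", 4.99m },
        { "fries", 1.99m }
    };

    static decimal PriceOrder(IEnumerable<string> items)
    {
        decimal total = 0m;
        foreach (string item in items)
        {
            decimal price;
            if (!Menu.TryGetValue(item, out price))
                throw new ArgumentException("Unknown item: " + item);
            total += price;
        }
        // The recomputed total is what goes to the payment processor,
        // regardless of any price fields the browser submitted.
        return total;
    }
}

Client-side JavaScript checks still help the user experience, but only the server-side recomputation counts as a security control.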

     

    Read the full story at eSecurityPlanet:
    Penetration Testing Shows Unlikely Vulnerabilities


    Jobs’ sister eulogizes her brother as ‘idealistic’

    Posted by eXactBot Hosting | News | Sunday 30 October 2011 6:32 pm

    The sister of Steve Jobs says that growing up as an only child raised by a single mother, she would imagine her father as “an idealistic revolutionary” who resembled actor Omar Sharif.

    “For decades, I’d thought that man would be my father,” writes Mona Simpson, a noted writer. “When I was 25, I met that man and he was my brother.”

    In a moving New York Times op-ed piece today, Simpson offers an intimate portrait of the late Apple co-founder, a man she met for the first time in 1985 when she was 25. She describes meeting Jobs and getting to know him, his struggles with his health, as well as his personality quirks.

    Simpson recounts how hurt Jobs felt about leaving the company he co-founded after a boardroom struggle for control of Apple in 1985.

“When he got kicked out of Apple, things were painful,” she writes. “He told me about a dinner at which 500 Silicon Valley leaders met the then-sitting president. Steve hadn’t been invited. He was hurt but he still went to work at NeXT. Every single day.”

    She even explains Jobs’ famous fondness for black turtlenecks.

    “For an innovator, Steve was remarkably loyal,” Simpson writes. “If he loved a shirt, he’d order 10 or 100 of them. In the Palo Alto house, there are probably enough black cotton turtlenecks for everyone in this church.”

    Simpson also talks in detail about how Jobs’ life changed as his illness began to take its toll on his body.

    “Then, Steve became ill and we watched his life compress into a smaller circle,” she says, detailing the everyday pleasures that no longer appealed to Jobs. “Yet, what amazed me, and what I learned from his illness, was how much was still left after so much had been taken away.”

    She says his death was unexpected and describes the last afternoon she spent with her brother in some detail. “Death didn’t happen to Steve, he achieved it,” she writes.

    Jobs’ final words that afternoon: “OH WOW. OH WOW. OH WOW.”

    Jobs died October 5 after a long battle with pancreatic cancer and was buried a few days later during a private, non-denominational funeral in Santa Clara County.

    Neo Launches Open Source NoSQL Graph Database for Spring

    Posted by eXactBot Hosting | News | Sunday 30 October 2011 5:33 pm

NoSQL databases have become increasingly popular over the last several years as a way to deliver better scalability and performance. There are a number of different types of NoSQL databases, including graph databases, which are the focus of open source startup Neo Technology.

    Neo Technology is the lead commercial sponsor behind the open source Neo4j NoSQL database. This week the company is launching its Spring Data Neo4j 2.0 release, bringing the database to the popular Spring Java framework. The company has also just completed raising $10.6 million in Series A funding.

    “There is so much noise in the NoSQL space now, it’s a term that isn’t strictly defined,” Emil Eifrem, CEO of Neo, told InternetNews.com.

In Eifrem’s view, there are only four types of NoSQL databases: key-value stores, BigTable types like Apache Cassandra, document databases like CouchDB and MongoDB, and graph databases like Neo4j.

“In the graph data model, there are nodes with typed relationships across nodes,” Eifrem said.

Eifrem said a graph database can then attach key-value pairs to nodes and their relationships. He noted that the way nodes are connected is a first-class citizen in the graph data model.

In contrast with the traditional join found in relational SQL databases, Eifrem said there are some key underlying differences.

    “What the relational guys did is work with a data model that is all tables that is optimized for access and that goes along with a table,” Eifrem said.

For example, in the relational model, if you’re looking for all the people in a table with an age greater than 20, that’s an optimized query. Eifrem said that if you want to hop from one entity to another, that requires a join: a CPU-bound operation in which you merge the entities that match your criteria from the first table with the second table.

“In contrast, with a graph database the only thing you do when you hop from one node to another is just have a direct pointer for access,” Eifrem said. “You don’t have to traverse an index or do a merge, which leads to some amazing performance improvements.”
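To make the contrast concrete, here is a toy model, emphatically not Neo4j’s actual API, of what “a direct pointer for access” means: each node holds references to its neighbors (plus the key-value properties Eifrem mentions), so a hop is a pointer dereference rather than an index traversal or a table merge.

using System.Collections.Generic;
using System.Linq;

// Toy graph model for illustration only; this is not Neo4j's API.
class Node
{
    public string Name;

    // Key-value properties attached to the node, as in the graph data model.
    public Dictionary<string, object> Properties = new Dictionary<string, object>();

    // Typed relationships: each entry points directly at neighboring nodes.
    public Dictionary<string, List<Node>> Relationships = new Dictionary<string, List<Node>>();

    public IEnumerable<Node> Hop(string relationshipType)
    {
        // A hop just follows stored references; no join, no index scan.
        // Example: alice.Hop("FRIEND") enumerates direct neighbors in O(degree).
        List<Node> neighbors;
        return Relationships.TryGetValue(relationshipType, out neighbors)
            ? neighbors
            : Enumerable.Empty<Node>();
    }
}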

    Read the full story at Database Journal:
    Neo Launches NoSQL Graph Database

    Zuckerberg: Silicon Valley isn’t necessary for startups

    Posted by eXactBot Hosting | News | Sunday 30 October 2011 11:32 am

    Facebook founder Mark Zuckerberg at Y Combinator’s Startup School.

    Facebook founder Mark Zuckerberg says if he had it all to do over, he would have stayed in Boston.

    The Facebook chief executive said during an interview yesterday at Y Combinator’s Startup School that Silicon Valley suffers from a bit of shortsightedness.

    “If I were starting now I would do things very differently,” he said. “You get this feeling when you are out here in the Silicon Valley that you have to be out here.”

While he said there were a lot of great resources for beginners in Silicon Valley, such as engineers, universities, and VCs, he said “it’s not the only place to be, I think. If I were starting now, I would have stayed in Boston. There are aspects of the culture out here where I think it still is a little short-term focused in a way that bothers me.”

    “There’s a culture out here where people don’t commit to doing things…I feel like a lot of companies that have built outside of Silicon Valley I just think seem to be on a longer-term cadence than the ones in Silicon Valley, for some reason.”

    “You don’t have to move out here to do this,” Zuckerberg said. However, he admits that “Facebook would not have worked if I had stayed in Boston.”

    During the 40-minute interview, which begins about 40 minutes into the video below, Zuckerberg also describes some of the challenges his company faced early on.

    “We didn’t expect it to be a company initially,” he said. “It was not like in the movie, there was no drinking,” he said, referring to the 2010 movie “The Social Network.” “We all just lived in a house.”


    Watch live video from Startup School on Justin.tv
