It’s Joke Time!!

From the Heart

The following letter was forwarded by someone who teaches at a junior high school in Memphis, Tennessee; the letter was sent to the principal’s office after the school had sponsored a luncheon for the elderly. This story is a credit to all humankind. Read it, soak it in, and bask in the warm feeling that it leaves you with.

Dear Reyer School:

God bless you for the beautiful radio I won at your recent senior citizen’s luncheon. I am 84 years old and live at the county home for the aged. All my people are gone. It’s nice to know that someone thinks of me. God bless you for your kindness to an old forgotten lady.

My roommate is 95 and always had her own radio, but would never let me listen to it, no matter how often or politely I asked. The other day her radio fell and broke into a lot of pieces. It was awful. She was very upset. She then asked if she could listen to mine, and I told her to buzz off.

Sincerely,
Edna Johnston

source: www.arcamax.com

 

More genes linked with genetics of height

EXETER, England (UPI) — British scientists who last year identified the first common version of a gene influencing height have now identified 20 genome regions that do the same.

The researchers said the findings in independent studies from Peninsula Medical School in Exeter and the University of Oxford mean scientists now know of dozens of genes and genetic regions that influence humans’ height, providing insights into how the body grows and develops normally and might shed light on diseases such as osteoarthritis and cancer.

Unlike a number of other body size characteristics such as obesity, which is caused by a mix of genetic and environmental factors, 90 percent of normal variation in human height is due to genetic factors rather than, for example, diet, said Dr. Tim Frayling, a professor at Peninsula, and Oxford Professor Mark McCarthy.

“The number and variety of genetic regions that we have found show height is not just caused by a few genes operating in the long bones,” Frayling said. “Instead, our research implicates genes that could shed light on a whole range of important biological processes … not just height disorders, but also tumor growth, for example.”

The research appears online in the journal Nature Genetics.

Copyright 2008 by United Press International
www.arcamax.com

 

Rare gene variants linked to lower blood pressure risk

NEW HAVEN, Conn. (UPI) — Yale University scientists in Connecticut said rare genetic variants can be associated with a dramatically lower risk of high blood pressure.

The researchers say their finding that rare mutations might collectively play a large part in the development of common, yet complex, diseases such as hypertension also has implications for the diagnosis and treatment of such diseases as diabetes and schizophrenia.

“Collectively, common variants have explained a small fraction of the risk of most diseases in the population, as we would expect from the effects of natural selection,” said Yale Professor Richard Lifton, who led the study with Daniel Levy, director of the National Heart, Lung and Blood Institute’s Framingham Heart Study. “The question this leaves open is whether many rare variations in genes will collectively account for a large influence on common disease.”

Lifton said the new study underscores the importance of sequencing the genome of many individuals in order to discover disease-causing mutations.

The research is reported in the journal Nature Genetics.

Copyright 2008 by United Press International
www.arcamax.com

 

Wordless Wednesday


These photos were taken during my vacation to the Philippines in October 2006.
I just miss home… hopefully I will get to visit again next time!!
One of the falls in Kawasan, Badian, Cebu..

along the way to Kawasan Falls in Badian, Cebu, Phils..
real beauty of nature…I love it!!
 

Green Spring Cleaning!!!

Natural Household Cleaners Make It Safe and Cheap

After a long winter, a round or two of spring cleaning is a great way to bring a sense of renewal, in addition to fresh cleanliness, to your household. It can clear away the cobwebs of the mind, as well as the rafters.

Unfortunately, over past decades the ever-expanding arsenal of home cleaning products has included a number of dangerous weapons, loaded with strong, artificial colors and fragrances and harsh cleansing agents like bleach, ammonia, alcohol and more. These chemicals are a major threat to indoor air quality, off-gassing toxic fumes that can irritate eyes and respiratory systems. Children and pets are most at risk, being smaller and closer to the floor. Many cleaners also contain unnecessary antibacterial compounds, which may lead to antibiotic resistance.

Instead, harken back to a simpler time, and rediscover the natural cleaners of your grandparents. Even the biggest messes and toughest stains can be attacked effectively with baking soda, borax, lemon juice and more. You’ll also spend less money and reduce packaging.

 

Where Global Warming Begins

Watch Video Animations of U.S. Carbon Dioxide Pollution

Just where is all that global warming pollution coming from?

The Northeast pumps out an awful lot of carbon dioxide, but the Southeast, Midwest and Southern California are also responsible for voluminous pollution that billows out each day.

The precise sources of carbon dioxide have now been mapped, with 100 times more detail than was previously available, by Vulcan project researchers at Purdue University.

The high-resolution, interactive maps combine emissions data from power plants, factories and vehicles. The maps and movies compare the relative contribution of pollution from various parts of the country on an hourly basis. One of the most striking things one sees when watching the animations is the day-night “breathing” cycle of our pollution, with a long exhale of pollution all day, followed by a sharp decline each night. Seasonal spikes – such as those when hot days prompt millions of Americans to turn up their air conditioners – are also evident.

The maps also highlight an important political reality: While states in the Northeast, upper Midwest and West have agreed to state-level compacts to reduce greenhouse gas emissions, the nation’s pollution won’t be significantly cut until the South joins in. Depending on the estimate, the U.S. is the world’s biggest, or second-biggest (next to China) producer of greenhouse gas emissions; it produces 25% of the world’s carbon dioxide pollution, the key ingredient in atmospheric change fueling global warming.

“Before now the only thing policy-makers could do was take a big blunt tool and bang the U.S. economy with it,” said Kevin Gurney, an assistant professor of earth and atmospheric science at Purdue University and leader of the project. “Now we have more quantifiable information about what is happening in neighborhoods, on roads and in industrial areas, and track the CO2 by the hour. This offers policy-makers something akin to a scalpel instead.”

Watch the video here.

What can you say about this, guys? I hope everyone will continue to take care of Mother Earth!!

 

PageRank

History

PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[3]) and later Sergey Brin as part of a research project about a new kind of search engine. The project started in 1995 and led to a functional prototype, named Google, in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors which determine the ranking of Google search results, PageRank continues to provide the basis for all of Google’s web search tools.[1]

PageRank is based on citation analysis that was developed in the 1950s by Eugene Garfield at the University of Pennsylvania. Google’s founders cite Garfield’s work in their original paper. In this way virtual communities of webpages are found. Teoma’s search technology uses a communities approach in its ranking algorithm. NEC Research Institute has worked on similar technology. Web link analysis was first developed by Jon Kleinberg and his team while working on the CLEVER project at IBM’s Almaden Research Center.

Algorithm

PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for any-size collection of documents. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called “iterations”, through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a “50% chance” of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.

Simplified algorithm

[Figure: How PageRank Works]

Assume a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.

In the original form of PageRank, initial values were simply 1, which meant that the sum of PageRank over all pages equaled the total number of pages on the web. Later versions of PageRank (see the formulas below) assume a probability distribution between 0 and 1. Here we will simply use a probability distribution, hence the initial value of 0.25.

If pages B, C, and D each only link to A, they would each confer 0.25 PageRank to A. All PageRank PR( ) in this simplistic system would thus gather to A because all links would be pointing to A.

PR(A) = PR(B) + PR(C) + PR(D)

But then suppose page B also has a link to page C, and page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D‘s PageRank is counted for A’s PageRank (approximately 0.083).

PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}

In other words, the PageRank conferred by an outbound link is equal to the document’s own PageRank score divided by the number of outbound links L( ) (it is assumed that links to specific URLs only count once per document).

PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}

In the general case, the PageRank value for any page u can be expressed as:

PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)}

i.e., the PageRank value for a page u is dependent on the PageRank values of each page v in the set B_u (the set of all pages linking to page u), divided by the number L(v) of links from page v.
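To make this concrete, here is a minimal Python sketch of the simplified (undamped) update, using the four-page example above; the link structure and the helper name simplified_pagerank are illustrative assumptions, not code from the original paper.

```python
# Sketch of the simplified PageRank update: each page splits its current
# PageRank evenly over its outbound links. Link structure from the example:
# B -> A, C; C -> A; D -> A, B, C; A has no outbound links here.
links = {
    "A": [],
    "B": ["A", "C"],
    "C": ["A"],
    "D": ["A", "B", "C"],
}

def simplified_pagerank(links, iterations=1):
    n = len(links)
    pr = {page: 1.0 / n for page in links}          # initial value 0.25 each
    for _ in range(iterations):
        new_pr = {page: 0.0 for page in links}
        for page, outlinks in links.items():
            for target in outlinks:
                new_pr[target] += pr[page] / len(outlinks)   # PR(page) / L(page)
        pr = new_pr
    return pr

print(simplified_pagerank(links))
# After one pass: PR(A) = PR(B)/2 + PR(C)/1 + PR(D)/3 = 0.125 + 0.25 + 0.083 ≈ 0.458
```

Note that A has no outbound links in this toy example, so without the sink handling described further below, PageRank gradually leaks out of the system over repeated iterations.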

Damping factor

The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[4]

The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores.

That is,

PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)

or (N = the number of documents in collection)

PR(A) = \frac{1 - d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)

So any page’s PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The second formula above supports the original statement in Page and Brin’s paper that “the sum of all PageRanks is one”.[2] Unfortunately, however, Page and Brin gave the first formula, which has led to some confusion.
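As a quick worked illustration (the numbers are assumed from the four-page example above, with d = 0.85 and N = 4, and are not part of the source text), the normalised formula gives for page A after the first pass:

PR(A) = \frac{1 - 0.85}{4} + 0.85 \left( \frac{0.25}{2} + \frac{0.25}{1} + \frac{0.25}{3} \right) \approx 0.43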

Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.

The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are all equally probable and are the links between pages.

If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. However, the solution is quite simple. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.

When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web, with a residual probability usually set to d = 0.85, estimated from the frequency with which an average surfer uses his or her browser’s bookmark feature.

So, the equation is as follows:

PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}

where p_1, p_2, \ldots, p_N are the pages under consideration, M(p_i) is the set of pages that link to p_i, L(p_j) is the number of outbound links on page p_j, and N is the total number of pages.
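Putting the damping factor and the sink handling together, the equation above can be evaluated iteratively. The following is a minimal Python sketch under those assumptions; the function name pagerank, the tolerance, and the iteration cap are illustrative choices, not Google’s actual implementation.

```python
def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
    # Iterate PR(p_i) = (1-d)/N + d * sum over p_j in M(p_i) of PR(p_j)/L(p_j)
    n = len(links)
    pr = {p: 1.0 / n for p in links}                  # even initial distribution
    for _ in range(max_iter):
        # PageRank of sink pages (no outbound links) is shared evenly over all pages
        sink_mass = sum(pr[p] for p, out in links.items() if not out)
        new_pr = {}
        for page in links:
            incoming = sum(pr[src] / len(out)
                           for src, out in links.items() if page in out)
            new_pr[page] = (1 - d) / n + d * (incoming + sink_mass / n)
        if max(abs(new_pr[p] - pr[p]) for p in links) < tol:
            return new_pr                             # converged
        pr = new_pr
    return pr

links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}
print(pagerank(links))
```

Because the sink mass is redistributed and the (1 - d)/N term is shared over all pages, the returned values sum to 1, matching the normalised form of the formula.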

The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is

\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}

where R is the solution of the equation

\mathbf{R} = \begin{bmatrix} (1-d)/N \\ (1-d)/N \\ \vdots \\ (1-d)/N \end{bmatrix} + d \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots & & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix} \mathbf{R}

where the adjacency function \ell(p_i, p_j) is 0 if page p_j does not link to p_i, and normalised such that, for each j,

\sum_{i = 1}^{N} \ell(p_i, p_j) = 1,

i.e. the elements of each column sum up to 1.

This is a variant of the eigenvector centrality measure used commonly in network analysis.

The values of the PageRank eigenvector are fast to approximate (only a few iterations are needed), and in practice this gives good results.
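As a rough sketch of that matrix formulation, the power iteration below builds the column-normalised matrix \ell for the earlier four-page example and repeatedly applies R = (1-d)/N + d·M·R until it stabilises. The use of NumPy and the handling of the sink column are assumptions consistent with the description above, not code from the source.

```python
import numpy as np

d, N = 0.85, 4
pages = ["A", "B", "C", "D"]
links = {"A": [], "B": ["A", "C"], "C": ["A"], "D": ["A", "B", "C"]}

# Column j holds l(p_i, p_j): 1/L(p_j) where p_j links to p_i, else 0.
# A sink column (page with no outbound links) is filled with 1/N.
M = np.zeros((N, N))
for j, src in enumerate(pages):
    out = links[src]
    if out:
        for dst in out:
            M[pages.index(dst), j] = 1.0 / len(out)
    else:
        M[:, j] = 1.0 / N

assert np.allclose(M.sum(axis=0), 1.0)   # each column sums to 1, as required

# Power iteration towards the dominant eigenvector of the modified matrix
R = np.full(N, 1.0 / N)
for _ in range(100):
    R = (1 - d) / N + d * (M @ R)

print(dict(zip(pages, R.round(3))), round(R.sum(), 3))   # values sum to 1
```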

As a result of Markov theory, it can be shown that the PageRank of a page is the probability of being at that page after many clicks. This happens to equal t^{-1}, where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.

The main disadvantage is that it favors older pages, because a new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). The Google Directory (itself a derivative of the Open Directory Project) allows users to see results sorted by PageRank within categories. The Google Directory is the only service offered by Google where PageRank directly determines display order. In Google’s other search services (such as its primary Web search) PageRank is used to weight the relevance scores of pages shown in search results.

Several strategies have been proposed to accelerate the computation of PageRank.[5]

Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which seeks to determine which documents are actually highly valued by the Web community.

Google is known to actively penalize link farms and other schemes designed to artificially inflate PageRank. In December 2007 Google started actively penalizing sites selling paid text links. How Google identifies link farms and other PageRank manipulation tools is among Google’s trade secrets.

source: Wikipedia, http://en.wikipedia.org/wiki/PageRank

 

Jules Verne cleared for ISS docking

TOULOUSE, France (UPI) — The European Space Agency said it has cleared the Jules Verne spacecraft to proceed with the first automated docking with the International Space Station.

The event is scheduled for 10:41 a.m. EDT Thursday.

Wednesday’s official go-ahead followed two flawless Monday tests during which the automated transfer vehicle proved its operational capabilities.

“We have proven that Jules Verne’s systems are safe, reliable and ready to dock to the station,” said John Ellwood, the ESA’s ATV project manager. “Everyone has worked very hard to get to this point, and we have also proven that the team on the ground is fully ready for (Thursday’s) first attempt.”

If the docking does not occur for any reason, the next possible window occurs 48 hours later on Saturday, the ESA said.

The entire procedure will be monitored from ESA’s ATV Control Center in Toulouse, France, in cooperation with the Russian control center in Moscow and the U.S. space agency’s control center in Houston.

The event, to be telecast by ESA TV, will be available at http://television.esa.int/. It will also be streamed live via the ESA Web site at http://www.esa.int/atv.

Copyright 2008 by United Press International
www.arcamax.com

 

Study shows hormone directs brain links

BERKELEY, Calif. (UPI) — U.S. and Chinese scientists say they’ve found a hormone called insulin-like growth factor directs nerve connection formation in an area of the brain.

IGF had formerly been known only to stimulate the growth of cells throughout the body but the new study showed it also plays a critical role in establishing connections in the brain’s olfactory bulbs — a pair of small structures that analyze signals from about 1,000 different types of odor receptors in the nose.

University of California-Berkeley Professor John Ngai, the study’s principal author, said he and his colleagues believe IGF could become important when clinicians implant stem cells into organs to cure neurodegenerative diseases.

“Even if you figure out a way to grow new cells to replace dying cells, those cells still need to make proper connections,” Ngai said. “So, anything you know about what drives normal connectivity in the brain will help you figure out how to get those new cells to wire up correctly.”

Ngai and colleagues at the Shanghai Institutes of Biological Sciences and Columbia University reported their findings in the March 27 issue of the journal Neuron.

www.arcamax.com
Copyright 2008 by United Press International

 

Scientists ID Lou Gehrig’s disease gene

MONTREAL (UPI) — A team of Canadian and French researchers said it has identified a gene responsible for a significant fraction of amyotrophic lateral sclerosis cases.

ALS — commonly referred to as Lou Gehrig’s disease — is an incurable neuromuscular disorder.

Researchers at the University of Montreal and Waterloo and Laval Universities in Canada, along with the Institute of Biology and the Federation of Nervous System Diseases in France, identified several genetic mutations in the TDP-43 gene. They established TDP-43 as the gene responsible for up to 5 percent of the 200 ALS patients in the study.

Two years ago, a team from the University of Pennsylvania discovered TDP-43 in abnormal protein clumps in ALS patients. However, it wasn’t certain whether TDP-43 caused motor neuron disease or was just a pathological marker.

“The identification of additional mutations in TDP-43 in other ALS patients will confirm that this gene is a prominent cause of this type of disorder,” said Dr. Guy Rouleau of the University of Montreal, who said the findings “will provide crucial insight into how TDP-43 aggregate and ultimately kill motor neurons.”

The research appears in the online edition of the journal Nature Genetics.

Copyright 2008 by United Press International
www.arcamax.com

 
 
