From October 10-12, the City by the Bay will host this year’s PuppetConf, and it promises to be a fun and educational event you won’t want to miss. Over 1,300 attendees will hear top industry speakers cover topics like DevOps, automation and infrastructure modernization, and get the opportunity to network, improve their skills, and learn how to align IT with their organization’s business strategy.

As a headline sponsor, Dell EMC will absolutely be bringing our A-game. We’ll be showcasing how, when you bring together Puppet Enterprise automation with VMware vRealize in a turnkey hybrid cloud platform like Dell EMC Enterprise Hybrid Cloud, you can increase productivity and accelerate time to value by standing up a hybrid cloud architecture in days rather than months, with built-in automation and self-service access to IT resources.

In the Dell EMC booth, we’ll feature a demo of all of this in action, but if you’re not among the lucky ones who will be on-site, click below to preview the demo.

https://www.youtube.com/watch?v=Z8mgo2Z8Etk

And if, after viewing that, you want to learn more, you can register for a full webcast featuring subject matter experts from Puppet, VMware and Dell EMC who’ll walk you through how the latest release of Enterprise Hybrid Cloud, which features VMware’s vRA 7.3 with native Puppet Enterprise integration, simplifies your journey to the hybrid cloud.

As businesses continue their IT Transformation journeys, being able to quickly take advantage of a hybrid cloud architecture with built-in automation is key to increased productivity and agility. Join us at this year’s PuppetConf and learn how Dell EMC, in collaboration with VMware and Puppet, can simplify your transformation efforts (register here to get a 35% discount on your attendee pass). Hope to see you there!
More high-performance machine learning possibilities

Dell EMC is adding support for NVIDIA’s T4 GPU to its already powerful DSS 8440 machine learning server. This introduces a new high-performance, high-capacity, reduced-cost inference choice for data centers and machine learning service providers. It is the purpose-designed, open PCIe architecture of the DSS 8440 that enables us to readily expand accelerator options for our customers as the market demands. This latest addition to our powerhouse machine learning server is further proof of Dell EMC’s commitment to supporting our customers as they compete in the rapidly emerging AI arena.

The DSS 8440: a highly flexible machine learning server

The DSS 8440 is a 4U, 2-socket, accelerator-optimized server designed to deliver exceptionally high compute performance for both training and inference. Its open architecture, based on a high-performance switched PCIe fabric, maximizes customer choice for machine learning infrastructure while also delivering best-of-breed technology. It lets you tailor your machine learning infrastructure to your specific needs – with open, PCIe-based components.

Choose between 4, 8 or 10 NVIDIA® V100 GPUs for the highest-performance training of machine learning models, or select 8, 12 or 16 NVIDIA T4 GPUs to optimize for the inference phase. (Note: the lower-cost, lower-energy T4 card is also an excellent option for training environments that do not require the absolute fastest performance.)
Combined with 2 second-generation Intel Xeon CPUs for system functions, a PCIe fabric for rapid IO, and up to 10 local NVMe and SAS drives for optimized access to data, this server has both the performance and flexibility to be an ideal solution for the widest range of machine learning workloads – as well as other compute-intensive workloads like simulation, modeling and predictive analysis in engineering and scientific environments.

The DSS 8440 and machine learning

Machine learning encompasses two distinctly different workloads: training and inference. While each benefits from accelerators, they do so in different ways and rely on different accelerator characteristics. The initial release of the DSS 8440 was specifically targeted at complex training workloads. By implementing up to 10 V100 GPUs, it provided more of the raw compute horsepower needed to quickly process the increasingly complicated models being developed for complex workloads like image recognition, facial recognition and natural language translation.

Machine learning training flow

At the simplest level, machine learning training involves “training” a model by iteratively running massive amounts of data through a weighted, multi-layered algorithm (thousands of times!), comparing the output to a specifically targeted outcome, and iteratively adjusting the model’s weights to ultimately produce a “trained” model that enables fast and accurate future predictions. Inference is the production, or real-time, use of that trained model to make relevant predictions based on new data.

Training workloads demand extremely high-performance compute capability. Training a model for a typical image recognition workload requires accelerators that can rapidly process multiple layers of matrices in a highly iterative way – accelerators that can scale to match the need. NVIDIA® Tesla® V100 Tensor Core GPUs are such accelerators.
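The run-compare-adjust loop described above can be sketched in a few lines of Python. This is a deliberately tiny, framework-free illustration of the idea – a one-weight model with invented data and learning rate – not the multi-GPU training code that would actually run on a DSS 8440:

```python
# Minimal sketch of the training loop described above: run data through a
# weighted model, compare the prediction to the target, adjust the weight.
# Toy example only -- real training uses deep networks and frameworks
# like TensorFlow or PyTorch spread across many GPUs.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target); true weight is 2.0
w = 0.0      # the model's single "weight", initially untrained
lr = 0.05    # learning rate (invented for this sketch)

for epoch in range(1000):            # "thousands of times!"
    for x, target in data:
        pred = w * x                 # forward pass through the one-layer model
        error = pred - target        # compare to the targeted outcome
        w -= lr * error * x          # adjust the weight (gradient step)

print(round(w, 3))  # prints 2.0 -- the loop has recovered the true weight
```

Real workloads follow the same loop, but the "model" is a network with millions of weights and the per-step arithmetic is what the GPUs accelerate.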
The DSS 8440 with NVIDIA GPUs and a PCIe fabric interconnect has demonstrated scaling to near-equivalent performance with the industry-leading DGX-1 server (within 5%) when using the most common machine learning frameworks (e.g., TensorFlow) and popular convolutional neural network (CNN) models (e.g., for image recognition).

Note that Dell EMC is also partnering with the accelerator start-up Graphcore, which is developing machine-learning-specific, graph-based technology to enable even higher performance for ML workloads. The DSS 8440 with Graphcore accelerators will be available to a limited number of early-adopter customers in December. See the Graphcore sidebar for more details.

Inference workloads, while still requiring acceleration, do not demand as high a level of performance, because they need only one pass through the trained model to determine the result. However, inference workloads demand the fastest possible response time, so they require accelerators that provide lower overall latency. Dell EMC is now supporting the use of up to 16 NVIDIA T4 GPUs in the DSS 8440.

While the T4 GPU provides less overall performance than the V100 (320 Tensor Cores vs. the V100’s 640), it supplies more than enough to deliver superb inference performance – and it does so while using less than 30% of the energy, at only 70 watts per GPU.

DSS 8440 topology – up to 10 V100 GPUs

V100 TRAINING: Exceptional throughput performance

With the ability to scale up to 10 accelerators, the DSS 8440 can deliver higher performance for today’s increasingly complex computing challenges. Its low-latency, switched PCIe fabric for GPU-to-GPU communication enables it to deliver near-equivalent performance to competitive systems based on the more expensive SXM2 interconnect. In fact, for the most common types of training workloads, not only is the DSS 8440’s throughput performance exceptional, it also provides better power efficiency (performance/watt).
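Performance per watt, the efficiency metric invoked above, is simply sustained throughput divided by power draw. The sketch below uses invented example numbers – not measured DSS 8440 or competitor data – purely to show how such a comparison is computed:

```python
# Performance-per-watt is throughput divided by sustained system power.
# The figures below are invented for illustration, not measurements.

def perf_per_watt(images_per_sec, system_watts):
    """Images per second delivered for each watt consumed."""
    return images_per_sec / system_watts

# Two hypothetical 8-GPU training servers with the same power budget:
server_a = perf_per_watt(images_per_sec=3000, system_watts=3200)
server_b = perf_per_watt(images_per_sec=2750, system_watts=3200)

advantage_pct = (server_a / server_b - 1) * 100
print(round(advantage_pct, 1))  # prints 9.1 -- a ~9% efficiency edge
```

At equal power draw the comparison collapses to a throughput ratio, which is why "processes more images while using the same amount of energy" is the practical way efficiency claims like these get measured.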
Most of the competitive accelerator-optimized systems in the marketplace today are 8-way systems. An obvious advantage of the DSS 8440’s 10-GPU scaling capability is that it can provide more raw horsepower for compute-hungry workloads – horsepower that can be concentrated on increasingly complex machine learning tasks or, conversely, distributed across a wider range of workloads, whether machine learning or other compute-intensive tasks. This type of distributed, departmental sharing of accelerated resources is common practice in scientific and academic environments, where those resources are at a premium and typically need to be re-assigned among dynamic projects.

Better performance per watt

One of the challenges of increasing accelerator capacity is the additional energy required to drive the larger number of accelerators. Large-scale data centers understand the importance of energy savings at scale. The DSS 8440 configured with 8 V100 GPUs has proven to be more efficient on a performance-per-watt basis than a similarly configured competitive SXM2-based server – up to 13.5% more efficient. That is, when performing convolutional neural network (CNN) training for image recognition, it processes more images than the competitive system while using the same amount of energy. This testing was done using the most common machine learning frameworks – TensorFlow, PyTorch and MXNet – and in all three cases the DSS 8440 bested the competition. Over time, and at data center scale, this advantage can result in significant operational savings.

T4 INFERENCE

The DSS 8440 with NVIDIA® T4 GPUs offers high-capacity, high-performance machine learning inference with exceptional energy and cost savings.
Customers can choose to implement 8, 12 or 16 T4 GPUs for compute resources, and because inference is typically a single-accelerator operation (no need to scale across GPU cards), the DSS 8440’s high accelerator capacity enables an extremely flexible multi-tenancy environment. It allows data centers to share those inference resources among multiple users and departments – easily and with no loss of performance.

T4 GPUs in the DSS 8440 have demonstrated average throughput of nearly 3,900 images per second at 2.05 milliseconds of latency (at batch size 8) using the ResNet50 model. As with training, inference performance can be significantly affected by batch size: latency and throughput fluctuate depending on the amount of data being processed simultaneously. This can be seen in the chart below, where a batch size of 32 exhibits 3 times higher latency than a batch size of 8 while delivering relatively similar throughput. So, for results that deliver both high throughput and low latency, a batch size of 8 is the optimum choice.

Optimized inference performance with NVIDIA TensorRT™

NVIDIA TensorRT is a platform that optimizes inference performance by maximizing utilization of GPUs and integrating seamlessly with deep learning frameworks. It leverages libraries, development tools and technologies in CUDA-X AI for artificial intelligence, autonomous machines, high-performance computing, and graphics. It also provides INT8 and FP16 precision optimizations for inference applications such as video streaming, speech recognition, recommendation and natural language processing. Reduced-precision inference significantly reduces application latency, which is a requirement for many real-time services and for auto and embedded applications.

Accelerated development with NVIDIA GPU Cloud (NGC)

When the DSS 8440 is configured with NVIDIA GPUs you get the best of both worlds – working with the world’s #1 server provider (Dell EMC) and the industry’s #1 provider of GPU accelerators (NVIDIA).
In addition, you can take advantage of the work NVIDIA has done with NVIDIA GPU Cloud (NGC), a program that offers a registry of pre-validated, pre-optimized containers for a wide range of machine learning frameworks, including TensorFlow, PyTorch, and MXNet. Along with the performance-tuned NVIDIA AI stack, these pre-integrated containers include the NVIDIA® CUDA® Toolkit, NVIDIA deep learning libraries, and the top AI software. They help data scientists and researchers rapidly build, train, and deploy AI models to meet continually evolving demands. The DSS 8440 is certified to work with NGC.

Multi-tenancy for higher productivity and greater flexibility

As mentioned above, the high accelerator capacity of the DSS 8440 makes it an ideal multi-tenancy solution. It can provide training or inference resources across multiple workloads, multiple users and departments, or multiple systems. It gives users the flexibility to run different stacks of machine learning software (i.e., models, frameworks, OS) simultaneously on the same server, using different numbers of accelerators as needed. Multi-tenancy also lets data centers simplify the management of machine learning services.

In a multi-tenant environment, you can use NVIDIA NGC and NVIDIA-Docker for ease of use and performance. As mentioned above, NGC includes a container runtime library and utilities that automatically configure containers to leverage NVIDIA GPUs, and the number of GPUs needed can be indicated at runtime. Additionally, a distributed cluster can be managed by an administrator using Kubernetes, and multiple users can make resource requests through that interface.

Balanced design for better performance

Industry-leading training GPU

NVIDIA V100 Tensor Core GPUs in the DSS 8440

The NVIDIA V100 Tensor Core is the most advanced data center GPU ever built to accelerate machine learning, high performance computing (HPC), and graphics.
It supports a PCIe interconnect for GPU-to-GPU communication – enabling scalable performance on extremely large machine learning models – comes in 16GB and 32GB configurations, and offers the equivalent performance of up to 100 CPUs in a single GPU. The PCIe-based GPU runs at 250W – 50W lower than the SXM2-based GPU – allowing for better power efficiency at maximum capacity than an 8-way SXM2-based system.

Powerful, energy-efficient inference GPU

The NVIDIA T4 Tensor Core GPU delivers responsive, state-of-the-art performance in real time and allows customers to reduce inference costs by providing high performance in a lower-power accelerator. For small batch sizes, multiple T4 GPUs can outperform a single V100 at nearly equivalent power. For example, four T4s can provide more than 3 times the performance of a single V100 at a similar cost, and two T4s can deliver almost twice the performance of a single V100 using roughly half the energy and at half the cost.

First-time accelerator customers who choose T4 GPUs can see up to a 40X improvement in speed over CPU-only systems for inference workloads when the T4 is used with NVIDIA’s TensorRT runtime platform.

The T4 GPU is also an excellent option for training environments that don’t require top-end performance and want to save on GPU and energy costs. You can save 20% on cost-optimized training workloads and get better performance per dollar (a single T4 GPU gives you 80% of V100 performance at 25% of the cost).

Wide range of IO options

Access to data is crucial to machine learning training. To that end, the DSS 8440 has 8 full-height and 1 low-profile x16 PCIe slots available in the rear of the server. (A tenth slot is reserved for a RAID storage controller.)

Extensive, high-speed local storage

The DSS 8440 provides flexible local storage options for faster access to training data, with up to 10 drives: 2 fixed as SATA, 2 fixed as NVMe and 6 that can be either SATA or NVMe.
For maximum performance, it can be configured with up to 8 NVMe drives. (NVMe drives are up to 7 times faster than SATA SSDs.)

More power, more efficiency, more flexibility – the DSS 8440

Solve tougher challenges faster. Reduce the time it takes to train machine learning models with the scalable acceleration of the DSS 8440 with V100 GPUs, and get inference results faster with the low latencies available from the DSS 8440 with T4 GPUs. Whether detecting patterns in online retail, diagnosing symptoms in the medical arena, or analyzing deep-space data, more computing horsepower lets you get better results sooner – improving service to customers, creating healthier patients, and advancing the progress of research.

Now you can meet those challenges while simultaneously gaining greater energy efficiency for your data center. The DSS 8440 is the ideal machine learning solution for data centers that are scaling to meet the demands of today’s applications and want to contain the cost and inefficiencies that typically come with scale.

Contact the Dell EMC Extreme Scale Infrastructure team (ESI@dell.com) for more information about the DSS 8440 accelerator-optimized server.
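Taking the T4-vs-V100 figures quoted above at face value (a single T4 at 80% of V100 performance and 25% of its cost, drawing 70 W against the PCIe V100’s 250 W), the implied advantages reduce to simple ratios. This is a back-of-the-envelope sketch of the article’s own numbers, not an independent benchmark:

```python
# Sanity check of the T4 vs V100 claims quoted above, using relative
# performance and cost (V100 = 1.0) plus the stated per-GPU power draw.
# All inputs are the article's own figures; treat results as illustrative.

v100_perf, v100_power_w = 1.00, 250   # V100 baseline, 250 W (PCIe version)
t4_perf = 0.80                        # "80% of V100 performance"
t4_cost = 0.25                        # "25% of the cost" (relative)
t4_power_w = 70                       # "only 70 watts per GPU"

perf_per_dollar = t4_perf / t4_cost   # vs the V100's baseline of 1.0
perf_per_watt = (t4_perf / t4_power_w) / (v100_perf / v100_power_w)

print(perf_per_dollar)          # prints 3.2  -- 3.2x performance per dollar
print(round(perf_per_watt, 2))  # prints 2.86 -- ~2.9x performance per watt
```

These ratios are why the article positions the T4 for cost- and energy-sensitive inference (and budget training) while reserving the V100 for peak training throughput.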
WUHAN, China (AP) — A World Health Organization team looking into the origins of the coronavirus pandemic has visited the food market in the Chinese city of Wuhan that was linked to many early infections. The team members visited the Huanan Seafood Market for about an hour Sunday afternoon. The market was the site of a December 2019 outbreak of the virus. Scientists initially suspected the virus came from wild animals sold in the market. The market has since been largely ruled out as the source, but it could provide hints as to how the virus spread so widely. The WHO mission has become politically charged, as China seeks to avoid blame for alleged missteps in its early response to the outbreak.
WASHINGTON (AP) — Vice President Kamala Harris has spent her first two weeks in office working with the president on coronavirus relief, consulting with the head of the World Health Organization and talking with the prime minister of Canada. But it’s her interview with a local news station in West Virginia that’s getting more attention — and not in a good way. West Virginia Democratic Sen. Joe Manchin didn’t take kindly to the vice president’s effort to put pressure on him in his home state by urging passage of a giant COVID-19 relief package, especially when he had no warning it was coming.
BILLINGS, Mont. (AP) — The Biden administration is delaying a rule finalized in former President Donald Trump’s last days in office that would have drastically weakened the government’s power to enforce a century-old law protecting wild birds. The rule had been set to take effect next week. But Interior Department officials said they were putting it off at Biden’s direction and will open a public comment period. Government studies say the rule could mean more birds die, including those that land in oil pits or collide with power lines. Under Trump, the government sided with industry groups seeking to end prosecutions of accidental but preventable bird deaths.
LOS ANGELES (AP) — Fox Business Network’s “Lou Dobbs Tonight” has been canceled. In a statement Friday, Fox News Media said the move was part of routine programming changes that it had foreshadowed last fall. The company said plans were in place to launch new formats post-election, including on Fox Business. Fox News Media said the Dobbs cancellation was among the changes. The statement appeared to distance the show’s end from a multibillion-dollar defamation suit filed against Fox and three of its hosts, including Dobbs, by an election technology company. Whether the cancellation ends Dobbs’ career with Fox News wasn’t addressed, and the company had no further comment.
Today, the first day couples can register for weddings at the Sacred Heart Basilica for the 2011 year, is perhaps one of the reasons the “ring by spring” mentality pervades for many a Notre Dame senior.

According to Amy Huber, Wedding and Baptism Coordinator of the Basilica, current students, alumni, University administrators and Sacred Heart parishioners are all eligible to sign up for weddings at the Basilica beginning today by calling in. The process is competitive, as desirable spots fill quickly as the day progresses.

“You just have to be patient and keep redialing until you get through,” Huber said. “I probably take about 70-80 reservations [on call day].”

Huber said the Basilica accommodates only a certain number of wedding reservations each year, and that number is limited by certain days on which wedding ceremonies are disallowed.

“There are a little over 100 dates for 2011 to give out and the summer afternoon dates always go first as expected,” she said. “[Weddings are not held] on holiday weekends, JPW, Alumni Weekend, final vows weekend, ordination weekend, Freshman Orientation weekend and Commencement weekend.”

The fee for use of the Basilica is $750, and Huber said that figure covers more than just the ceremony itself.

“It also provides a wedding coordinator who will attend the rehearsal and wedding and will assist in all the details of the wedding liturgy,” she said.

Couples who choose to marry in the Basilica tend to do so because of a sentimental bond with the University.

“Most of the couples met here, fell in love here and want to have the sacrament of marriage here,” Huber said. “The Basilica is one of the most beautiful places on campus and our staff at the Basilica and Campus Ministry are wonderful in supporting these couples in all aspects of their preparation and liturgy.”

Samantha Mainieri Roth, a 2009 graduate, married her husband Andrew Roth, a 2008 graduate, in the Basilica on Oct. 10, 2009.

“The number one reason I wanted to get married at the basilica is because Andrew and I met at ND and it just symbolizes tradition in every sense to us,” she said. “Just knowing how many previous ND grads got married there made it that much more special.”

A former Notre Dame cheerleader, Mainieri Roth said the date of her wedding was especially difficult to come by.

“I knew I wanted the bye-weekend of the football season in October to be my wedding date because none of my cheerleader teammates would be out of town then since there wasn’t an away game and there are no weddings on weekends of home games,” she said. “By the time Andrew made it through the phone line, after about 500 attempts, we were given the 9 a.m. slot because that’s all there was left for that day.”

According to the Basilica’s Web site, available wedding times are Fridays at 1 p.m. and 3 p.m. and Saturdays at 9 a.m., 11 a.m., 1 p.m. and 3 p.m.
In the wake of the University’s announcement of its plans for expanding resources for gay, lesbian, bisexual, transgender and questioning (GLBTQ) students through the creation of a new student organization, professional staff position and advisory board, members of the existing Core Council for Gay, Lesbian, Bisexual and Questioning Students will continue to play an integral role in the transition to the new structures of support. Sophomore Core Council member Lauren Morisseau said the group, which has been involved in both programming and advising, will effectively translate into the proposed advisory board, which will be expanded from its current six undergraduate members to include graduate students and faculty members. “Core Council is already in kind of an attenuated version of itself because it’s already gone back to its roots as an advisory council, so we’ll continue to be involved in that capacity,” she said. “[The council] is going to remain in place as it is needed because some things still need to be worked out and … it really is the group of people who have stood as the voice.” Senior Core Council member Karl Abad said this group of students will bridge the current and future structures of support for GLBTQ students. “Until [the plan] is fully implemented, we’re going to be sort of an active placeholder, a bookmark for the next chapter of our lives,” he said. The creation of the advisory board in conjunction with the student organization will allow for increased delegation and specialization of responsibilities, Morisseau and Abad said, which will help direct the focus of each entity more clearly. “[The advisory board] will be kind of a spinoff of Core Council, but what they’re going to focus on is advising and transferring programming out,” Morisseau said. 
“That’s something that will be really healthy for the community this is serving but also for the Notre Dame community in general.” Additionally, Morisseau said Core Council members who are active in student organizations and clubs that have been involved in the conversations about GLBTQ support systems will continue that involvement in the future. “I think the members won’t cease to have a voice. Some of us I assume will end up on that advisory council,” she said. “I think the transition will be fluid and gradual, but it probably won’t be officially completed until around the time the professional is hired.” Morisseau and Abad said while the current timeline for hiring a professional advisor for the unnamed student organization is not definite, both students and administrators hope to have that person in place by next fall. “If someone perfect comes around, [the administration] will hire them, but it just depends,” Abad said. “Students will have a part in saying whether we agree with [appointing] this person as well, so there’s a collaboration between students and administrators … because we’re keeping a close discussion about what we want and need from someone in this position.” That collaboration has been “unprecedented” throughout the five-month-long process of formulating a strategic plan for GLBTQ resources at Notre Dame, especially after decades of advocacy on the part of students without achieving concrete results, Morisseau said. “It’s been an extremely collaborative process, and I think that’s been extremely powerful in building trust and relationships with the administration and understanding where they’re coming from knowing they do have our best interests in mind,” she said. 
Throughout the process of restructuring GLBTQ resources, Abad and Morisseau said students and administrators engaged in a necessary symbiotic relationship of education and strategic planning, the latter of which came primarily from working with vice president for Student Affairs Erin Hoffmann Harding. “I think at first our job was very much to educate [administrators] because I feel like from their standpoint there’s a burden of knowledge to understand,” Abad said. “I feel like Erin’s prior position in strategic planning and the dialogue she had with us really pushed our thinking.” “We all needed each other. [The administrators] needed our testimony, and we needed their position and advocacy,” Morisseau said. “They can’t know what’s wrong unless students tell them, so there was a lot of eye-opening. I would say from there it was all about balancing each other’s needs.” Although Harding, her chief of staff, Karen Kennedy, and other administrators could identify a range of student needs, Morisseau said students helped the administrators understand their priorities. “They could see a whole spread of student needs, but they didn’t know which were more important until students told them, ‘We prioritize this over this,’” she said. “They were able to stratify needs from there, and that’s how things like the ‘T’ [transgender] got involved.” In some of the monthly meetings between Core Council and administrators, Morisseau said Harding identified the absence of transgender students from the conversation as an issue. “She picked up on it and we verified it,” Morisseau said. Abad said he felt transparency increased between students and administrators throughout the process. “All the senior staff we worked with made clear what their purpose was in this,” he said. “They really wanted to address the trust issue between administrators and students.” After months of open discussion, Morisseau said her initial ambivalence about the administration has faded away. 
“Since this whole process began this fall, that idea of them as an adversary has really just dissolved because you kind of understand we’re all part of this community and everybody fills different roles,” she said. “We need each other in this.” Although students who submitted a proposal for a gay-straight alliance (GSA) focused primarily on obtaining official club status for that group, Morisseau said that goal changed as a result of collaborating with administrators to determine the most effective solution. “As a student, I don’t think I would have come up with this structure because I’m not a student affairs professional,” she said. “I think that was really where the collaboration became really valuable because there were definitely some conversations where it sounded like we made compromises, but when I look at it today, it seems like a huge leap forward.” Engaging in an in-depth analysis of the current structures and the needs of students gave the proposed structure much more breadth and permanence due to the creation of a student organization, a new advisory board and the new staff position, Morisseau said. “The breadth we’re getting from this broad review far exceeds what we were expecting … and in that sense, I’m very grateful,” she said. “I think the University really decided to commit and did it in a classic Notre Dame style with a lot of integrity. I’m really grateful to [University President] Fr. John Jenkins, Erin, Karen and everyone who … has treated this with respect and been extremely thoughtful and thorough.” Abad said administrators also took care to ensure the focus of the decision process was confined to conversations between the Notre Dame community and themselves, rather than allowing for influence from outside opinions. “[The administration] really gave their input on why they made these decisions. It was never arbitrary,” he said. 
“We’re trying to satiate and weaken the outside forces from affecting us here because if we don’t do this right the first time around it’s going to be negative for everybody.” Once the new structures are more fully implemented, Morisseau said she and her peers hope to create a peer educator program similar to the Gender Relations Center’s FireStarters. But for now, Abad said the primary focus will be maintaining the general discourse and message of current programs during the transition to more open, effective structures of support for the GLBTQ community and Notre Dame as a whole. “We want to make it clear that we are excited for the changes, but keeping dialogue going is important because there are still things to be settled,” he said. “Past leaders of this movement have kept their vision clear and it’s been passed down, and now it’s coming to fruition.”
According to a philosophy professor, the Harry Potter series is more than just wizards, owls and house elves. John O’Callaghan, associate professor of philosophy, delivered a lecture titled “Harry Potter and the King’s Cross” on Tuesday in DeBartolo Hall. The lecture was the second installment of the Notre Dame Center for Ethics and Culture’s Children’s Literature Series. O’Callaghan said his thesis was that the Harry Potter series is a carefully constructed allegory of the search for wisdom through Christ. He said J.K. Rowling’s bestselling fantasy series reflects the conflict between the modern philosophy of Friedrich Nietzsche, Immanuel Kant and Rene Descartes, who argued that wisdom lies in the search for power, and the Christian and pre-Christian philosophy of Socrates, St. Augustine and St. Thomas Aquinas, who argued that “faith makes reason possible.” O’Callaghan said the series focuses on the conflict between the love of power and divine love. “The novels are not a tale of ordinary magic,” O’Callaghan said. “They are a tale of extraordinary magic, exploring the tale of two loves: the love of power, which is a philosophy of domination and wealth, versus a power of wisdom, which puts one in the presence of divine love.” O’Callaghan said Rowling uses Latin and French terms to name characters, objects and spells, evoking imagery of the nature of those elements. For example, when naming Harry Potter’s archrival, Draco Malfoy, Rowling uses the Latin word for serpent (Draco) and the French term for bad faith (Mal foi). Voldemort, the antagonist of the series, is a reimagining of a French phrase meaning “will to death,” he said. O’Callaghan said many of these names and the stories that come with them are also directly representative of medieval Christian symbols. For example, Gryffindor, the Hogwarts House Harry joins at the beginning of the series, is a “golden griffin,” a mixture of a lion and an eagle, both symbols of Christ, he said. 
On the other hand, Slytherin, the House of many of the series’ antagonists, evokes the image of a snake, which he said is the enemy of Christ. This symbolism, O’Callaghan said, extends to the plots of the books themselves. He said the Philosopher’s Stone of the series’ first book and the tears of the phoenix in the second are important symbols because both give life to those who are about to lose it. The Philosopher’s Stone, however, gives a “technological” kind of life, one in pursuit of wealth, O’Callaghan said. The phoenix gives a different message. He said the phoenix, a bird that bursts into flame and is reborn from its ashes in the same way that Jesus was resurrected, heals Harry’s mortal wound with its tears, giving him a more fulfilling kind of life. “Genuine life, that is, genuine love, is often found more in the tears of life … than in untold riches and power,” O’Callaghan said. O’Callaghan said Harry wins the struggle for wisdom at the end of the series when he walks to his death without hesitation and, upon coming back to life, leaves behind his own dark side. J.K. Rowling has herself confirmed the religious parallels in her series, he said. In a news conference shortly after the release of the final novel in 2007, she revealed that there had always been religious undertones in her work but that she had refrained from confirming them because she was afraid she would give away the ending of the story. “Harry Potter and the King’s Cross” was the second lecture of the four-part Children’s Literature Series. The third lecture, “Young Adult Literature,” will take place on Tuesday, Oct. 15, and the fourth, “The Hunger Games,” will take place on Tuesday, Nov. 12.
Instead of Lord Christopher Patten, Rev. Ray Hammond will deliver Notre Dame’s 169th Commencement address May 18, the University announced today in a press release.

Patten had to cancel his scheduled speech at Notre Dame, as well as several other engagements, for health reasons, vice president for public affairs and communications Paul Browne told The Observer.

Hammond, a Philadelphia native, is the founder of Bethel African Methodist Episcopal Church in Boston and was announced in March as an honorary degree recipient for this year’s ceremony.

“We are disappointed that Lord Patten will be unable to join us and will keep him in our prayers,” University President Fr. John Jenkins said in the press release. “At the same time, we are delighted and grateful that Rev. Ray Hammond has accepted our invitation to address the class of 2014. His life’s story and work are an inspiration, and I know he will provide our graduates with a powerful address.”

Browne said Jenkins’ personal interactions with Hammond played a role in the decision.

“Fr. John had met [Hammond] personally and was impressed with his spiritual demeanor as well as his life’s accomplishments and thought he would deliver a powerful message to the students,” Browne said.

Hammond entered Harvard University as a 15-year-old, earned his bachelor’s degree at 19 and his medical degree at 23, according to the release. 
He worked as a doctor before turning to ministry in 1976 and earned a Master of Arts degree in the Study of Religion (Christian and Medical Ethics) at Harvard Graduate School of Arts and Sciences in 1982, the release said.

Hammond is a former chair of the Boston Foundation and the founder and chairman of the Ten Point Coalition, which the release described as “an ecumenical group of Christian clergy and lay leaders behind Boston’s successful efforts to quell gang violence in the 1990s.”

He also has served as executive director of Bethel’s Generation Excel program, executive committee member of the Black Ministerial Alliance, chair of the Boston Opportunity Agenda and a member of the Strategy Team for the Greater Boston Interfaith Organization, the release said. Beyond that, he is a trustee of the Yawkey Foundation, the Isabella Stewart Gardner Museum, the John F. Kennedy Library Foundation and the Math and Technology Charter High School.