The final part of our video series with the Financial Times features John Gill, BearingPoint managing director. John begins by discussing how organizations can manage risk by using dashboards that deliver the right risk indicators and meaningful insights. He also explores why data warehousing projects fail and what can be done to accelerate business intelligence, including BearingPoint’s Rapid Execution dashboard methodology. Finally, John covers the need for a data dictionary, the use of active dashboards and the biggest information challenges facing organizations.
In the video interview with the Financial Times, Sandeep Vishnu looks at strategies for market survival, including the observation that there may be too many strategy choices. He outlines the guiding principles of risk management, in particular the lack of data and governance issues. While examining governance, Sandeep outlines some lessons risk officers should take from the current environment, among other things reviewing their data infrastructure and making their voices heard. The interview also presents information on the lack of regulation that led to the current state and what we can expect from regulators in the future, specifically how they can work better with business … or whether they should.
Repost from Atlassian Blog
November 18, 2008
In the link below I walk you through how BearingPoint implemented Confluence from the ground up for over 16,000 co-workers across the globe. We promoted organic advocacy, and our wiki was started by users for users, not by IT, though IT is now on board. BearingPoint’s current wiki started in a basement on a personal laptop and has since grown to a two-node cluster with over 11,000 pages.
In the demo we show how we use the wiki along with the numerous macros and plugins we have implemented. Contegix and Customware, both Atlassian partners, get an honorable mention, as they were crucial in helping BearingPoint’s wiki reach the ‘critical mass’ it has.
Video can be seen here: http://blogs.atlassian.com/news/2008/11/going_global_wi.html
Author: Nate Nash
In this video Jack Perkins of the Financial Times interviews Frank Mackris, head of BearingPoint’s banking practice. The interview was conducted during the Financial Times Cost, Performance and Market Survival Editorial Breakfast. In it, Frank discusses how cost takeout and performance improvement need to become a greater imperative for financial services organizations. He examines how companies can make cost takeout more of a priority, which may mean a fundamental change to the company’s cost structure, and the importance of executive sponsorship.
In this podcast Jack Perkins of the Financial Times interviews BearingPoint managing director Brian Hart during the Financial Times Data Management and Use Editorial Breakfast. During the interview, Brian addresses the economic turmoil and the lessons risk executives can take from recent events. Brian outlines recommendations for executives on how they can manage the economic downturn and addresses the need for a cultural overhaul. Finally, the podcast reviews regulators and their responsibilities to the market while outlining some factors for future success.
The fluctuations of financial services firms’ business volumes reflect the cyclical nature of the overall financial markets. These dynamics are often caused by specific crises—such as the most recent subprime mortgage problems—or a slowdown in the economy. These fluctuations require that executives and technology leaders have the ability to restrict spending levels in market downturns and quickly scale up when business volumes rise again.
During previous periods of market turbulence, executives have demanded budget cutbacks and cost savings from their IT organizations. However, because many IT costs are fixed, IT executives have limited options for reducing expenditures. Typical cost-saving initiatives entail rationalizing IT assets and resources and renegotiating vendor contracts.
Fixed IT costs cannot be scaled back easily to react quickly and appropriately to market downturns. Optimization of the “operating leverage,” which is defined as the percentage of fixed costs relative to overall operating costs, increases a company’s ability to lower its IT operating expenses quickly during an economic slowdown.
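The arithmetic behind this can be sketched with a short example. The budget figures below are purely hypothetical, chosen only to illustrate why a lower operating leverage (a smaller fixed-cost share) lets total IT spend contract faster in a downturn:

```python
def operating_leverage(fixed_costs, variable_costs):
    """Operating leverage: fixed costs as a fraction of total operating costs."""
    return fixed_costs / (fixed_costs + variable_costs)

def costs_after_downturn(fixed_costs, variable_costs, volume_drop):
    """Total costs after a downturn: only the variable portion scales with volume."""
    return fixed_costs + variable_costs * (1 - volume_drop)

# Hypothetical IT budget A: $80M fixed, $20M variable -> leverage 0.8
# Hypothetical IT budget B: $40M fixed, $60M variable -> leverage 0.4
# With a 25% drop in business volume, only variable costs shrink:
print(costs_after_downturn(80, 20, 0.25))  # budget A: total falls from 100 to 95
print(costs_after_downturn(40, 60, 0.25))  # budget B: total falls from 100 to 85
```

With the same 25% volume decline, the lower-leverage budget sheds three times as much cost, which is the rationale for shifting fixed IT costs to a variable basis.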
This issue of the Financial Services Technology Journal discusses approaches to optimizing operating leverage. We examine key areas or “levers” that often transition well from a fixed- to a variable-cost basis, and we include articles that relate to these levers and provide key considerations for defining and assessing how to better manage IT costs.
Information is the backbone of life sciences organizations. When used effectively and formally woven into the culture of an organization, information can streamline processes and help an organization gain a competitive advantage.
But exponential growth in data volumes and complexity is posing a serious and daunting challenge across the industry. Simply managing information better in a technological sense is not enough. Life sciences organizations gain better control of enterprise information by understanding the scope and complexity of the data management problem and by defining the strategic business objectives of effective information management.
Read more about how life sciences companies can address these information management challenges.
Virtualization sounds like the Holy Grail for IT managers—and for executives in the cost-sensitive C-suite. Today’s world-class organizations increasingly see virtualization of their entire enterprise—from servers to security, from software processes to production utilities—as a means to control costs, better allocate resources and increase their return on IT investments.
Virtualization gives companies a smaller physical footprint while delivering benefits across the entire enterprise. It promises substantial savings for large enterprises because it offers them ways to create discrete environments in which to develop and test software functionality. In addition, while virtualization has been a tremendous help in the development environment, enabling companies to introduce new applications into a complex operational setting, the real benefit comes from moving virtualized machines into the production and operating environment enterprise-wide.
The case for virtualization is compelling. IT organizations facing underutilized assets, rising energy costs, inefficient infrastructure, complex physical architectures and constant pressure to reduce technology expenses are adopting server virtualization in the hope of reaping its potential benefits.
Read how to improve operational efficiencies with server virtualization.
We look forward to your comments.