xfce: Automounting
The problem that Thunar refused to mount my USB stick is solved: simply create the file /etc/hal/fdi/policy/preferences.fdi
with the following line:
<merge key="volume.ignore" type="bool">false</merge>
In the early days of the Internet, there was only HTML. All web pages consisted entirely of HTML code; sometimes a little JavaScript was added for mouse-over effects, but that was about it. To give the pages structure and layout (e.g. header at the top, navigation on the left, footer at the bottom), tables were used whose borders were configured to a width of 0 pixels.
However, HTML was really only intended as a structuring language, and the mixing of structure and layout is a real problem, e.g. for blind people and/or text browsers. Nowadays, Cascading Style Sheets (CSS) are used instead, which all modern browsers interpret without problems. They allow a complete separation of structure (HTML code) and layout (CSS code).
Now, that is all a really fine thing, but a current project presented me with a challenge. The (given) layout calls for a footer that should sit at the bottom of the browser window when there is little content, but at the end of the page when there is a lot of content. Naturally, it must not overlap the content either. A simple matter, you might think. In the old days, you would simply have used a table and inflated it with height="100%"
(at least the whole window). Today, of course, you don't. Instead, you use CSS, something like this:
<div id="contents">
Some contents....
</div>
<div id="footer">
Some footer
</div>
If you want to write valid HTML 4.01, a first idea for the matching CSS rules might look like this:
heise.de, Germany's biggest IT news portal, sports a new design as of today. On the positive side are the light colors and the finally valid XHTML/CSS (the previous table-based structure was simply embarrassing for an IT portal).
On the negative side, I find the fixed page width, and the fact that the whole thing is left-aligned - at higher resolutions this puts the advertising in the middle of the screen! According to Jürgen Kuri, a full-width version is supposed to follow soon. If you can't wait for that (and use Opera), here is how to get such a view today:
The CSS file overrides some of the given rules to bring the page width up to 100% (reserving 375px for the right-hand column). As a side effect, it also removes the banner above the page.
I'm always open to improvements!
Update: A similar solution exists for Firefox.
Update 2: Condensed the teaser considerably and added a workaround for the problem described by cycore.
Update 3: Heise has made a number of changes that render the original corrections obsolete. But there are still a few small things left, so here is my heise2.css. It removes a few ad banners, reduces the space taken up by the top teaser, and makes visited articles easier to recognize in the overview.
If you're running a Drupal-based website that is completely restricted to authenticated users, anonymous visitors get a "403 Access Denied" on every page they try to access. But if you want your visitors to see a friendly login page (instead of just this error message and a small "login block"), you might want to put the following code into your sites/default/settings.php:
function custom_url_rewrite($op, $result, $path) {
  global $user;
  if (!$user->uid) {
    return "user/login";
  }
  return $path;
}
You should also check $path if you want to use this on some pages only, e.g. on your frontpage.
UPDATE: This code is erroneous! You should return $result instead of $path, or some other modules like pathauto or path_redirect won't work as expected!
Ubuntu 8.10 ("Intrepid Ibex") uses the kernel's default suspend system, swsusp, but lacks support for TuxOnIce (formerly known as "suspend2"). TOI has some advantages over the classic suspend system, for example "suspend to file", "LZW compression" (for faster resuming) and the ability to cancel any suspend request by pressing ESC. Have a look at the feature list for a comparison of these systems.
Especially "suspend to file" is extremely important to me, because it allows resuming from an encrypted root partition. Of course, this also works with swsusp by using an encrypted swap device, but then you can't use a random password (or a password file stored on the root partition) and have to enter at least two passwords when booting or resuming: one for root (/) and one for swap.
To add TOI support to Ubuntu, you have to build your own kernel. It's pretty easy, but if you use restricted kernel modules (drivers for your graphics card, VMWare/VirtualBox, ...) you will have to recompile them as well. So, here we go. First of all, download the kernel source and install some additional packages required at compile time:
Warning: At the moment I cannot recommend using this script in conjunction with filewriter if you use a journaling filesystem for your root partition (ext3, reiser, ...). The reason is that when using filewriter, the initramfs script in /etc/initramfs-tools/scripts/local-premount/resume_tuxonice
mounts this partition read-only to get the hibernation file's target. Unfortunately, the journal is replayed in any case, so "mount -r /dev/hdX" does not mean "mount /dev/hdX, but don't make any changes to it". And this may result in filesystem corruption, because the resumed system thinks that these open transactions have not been handled yet. I'll make some changes to the scripts during the weekend, so please be patient if you want to "suspend to file".
Yesterday, I described how to patch TuxOnIce support into Ubuntu's kernel image. Today we will learn how to integrate this into Ubuntu's default hibernation framework, pm-hibernate. By doing this, you'll benefit from TuxOnIce's features without modifying your bootloader's configuration file, and you will be able to use your desktop's "Suspend to Disk" command without changing any system file (because a customized system file might be overwritten on your next update). Furthermore, you will be able to combine suspend to disk with encrypted swap devices or (I prefer this) suspend to a file on your encrypted root partition.
Please note that the scripts I'll introduce have only been tested by myself (yet), and they still lack some features. I would be happy about any improvement. And of course I do not provide any warranty; you do this at your own risk. I think the worst thing that might happen is that data on your root partition gets lost. But of course you're doing backups, aren't you?
So, let's start. Oh, wait... did you ensure that you have a suitable backup? OK. First I'll describe what has to be done, and at the end I'll provide the scripts that implement it. Well, the first thing we have to do is to add resume support to Ubuntu's initramfs image. That is the file stored in /boot
, starting with 'initrd.img
' and ending with your kernel's version. If you're curious: it's a gzip'd cpio archive, and you can extract it using the following command:
Update 2009-10-03:
For further development and improvements, contact me or have a look at this public github repository created by Adam Nelson.
Update 2010-04-12:
If you need flash support, you should have a look at the current github version of this script at http://github.com/AdamN/python-webkit2png/ mentioned above. We extended the script a few months ago.
From time to time you may want to create a screenshot of a web page from the command line, for example if you wish to create thumbnails for your web application. So you might search for such a program and find tools like webkit2png, which is for Mac OS X only, or khtml2png, which requires a lot of KDE stuff to be installed on your server.
But since Qt Software, formerly known as Trolltech, integrated Safari's famous rendering engine WebKit (which is based on Konqueror's khtml engine) into its framework, we are now able to make use of it with the help of some Python and PyQt4.
Symfony provides a nice feature called "embedded forms" (sfForm::embedForm
) to embed subforms into a parent form. This can be used to edit multiple records at the same time. So let's say you have a basic user table called 'sf_guard_user' and a profile table called 'user_profile'; then you might follow this guide to merge these forms together:
lib/forms/doctrine/sfGuardUserAdminForm.php:
class sfGuardUserAdminForm extends BasesfGuardUserAdminForm
{
  public function configure()
  {
    parent::configure();

    // Embed UserProfileForm into sfGuardUserAdminForm
    $profileForm = new UserProfileForm($this->object->Profile);
    unset($profileForm['id'], $profileForm['sf_guard_user_id']);
    $this->embedForm("profile", $profileForm);
  }
}
Remember to add "profile" to the list of visible columns in apps/backend/modules/sfGuardUser/config/generator.yml as described in the linked guide. The result may look like this:
This does what it is expected to do, but it doesn't look very nice. Especially for 1:1 related tables, I'm more interested in a solution that looks like this:
You can achieve this using sfForm::mergeForm, but sadly the merged model won't get updated, and you'll run into problems if the forms share field names. The solution is the following method, embedMergeForm, which can be defined in BaseFormDoctrine to be available in all other forms:
lib/forms/doctrine/BaseFormDoctrine.php:
abstract class BaseFormDoctrine extends sfFormDoctrine
{
  /**
   * Embeds a form like "mergeForm" does, but will still
   * save the input data.
   */
  public function embedMergeForm($name, sfForm $form)
  {
    // This starts like sfForm::embedForm
    $name = (string) $name;
    if (true === $this->isBound() || true === $form->isBound())
    {
      throw new LogicException('A bound form cannot be merged');
    }
    $this->embeddedForms[$name] = $form;
    $form = clone $form;
    unset($form[self::$CSRFFieldName]);

    // But now, copy each widget instead of the whole form into the current
    // form. Each widget is named "formname|fieldname".
    foreach ($form->getWidgetSchema()->getFields() as $field => $widget)
    {
      $widgetName = "$name|$field";
      if (isset($this->widgetSchema[$widgetName]))
      {
        throw new LogicException("The forms cannot be merged. A field name '$widgetName' already exists.");
      }
      $this->widgetSchema[$widgetName] = $widget; // Copy widget
      $this->validatorSchema[$widgetName] = $form->validatorSchema[$field]; // Copy validator
      $this->setDefault($widgetName, $form->getDefault($field)); // Copy default value
      if (!$widget->getLabel())
      {
        // Re-create label if not set (otherwise it would be named 'ucfirst($widgetName)')
        $label = $form->getWidgetSchema()->getFormFormatter()->generateLabelName($field);
        $this->getWidgetSchema()->setLabel($widgetName, $label);
      }
    }

    // And this is like in sfForm::embedForm
    $this->resetFormFields();
  }

  /**
   * Override sfFormDoctrine to prepare the
   * values: FORMNAME|FIELDNAME has to be transformed
   * to FORMNAME[FIELDNAME]
   */
  public function updateObject($values = null)
  {
    if (is_null($values))
    {
      $values = $this->values;
      foreach ($this->embeddedForms as $name => $form)
      {
        foreach ($form as $field => $f)
        {
          if (isset($values["$name|$field"]))
          {
            // Re-rename the form field and remove
            // the original field
            $values[$name][$field] = $values["$name|$field"];
            unset($values["$name|$field"]);
          }
        }
      }
    }

    // Pass the values on to the original method
    parent::updateObject($values);
  }
}
This method ensures that each fieldname is unique (named 'FORMNAME|FIELDNAME') and the subform is validated and saved. It is used like embedForm:
lib/forms/doctrine/sfGuardUserAdminForm.php:
class sfGuardUserAdminForm extends BasesfGuardUserAdminForm
{
  public function configure()
  {
    parent::configure();

    // Embed UserProfileForm into sfGuardUserAdminForm
    // without looking like an embedded form
    $profileForm = new UserProfileForm($this->object->Profile);
    unset($profileForm['id'], $profileForm['sf_guard_user_id']);
    $this->embedMergeForm("profile", $profileForm);
  }
}
Feel free to use this method in your own projects. Maybe this method gets merged into Symfony some day ;-)
Update
frostpearl reported a problem using embedMergeForm() in conjunction with the autocompleter widget from sfFormExtraPlugin. If you experience these problems, try to replace all occurrences of $name|$field
with $name-$field
.
In my installation, KDE fails to lock the screen on suspend / hibernate, even if the checkbox "lock screen" in the "Energieverwaltung" (how is this labeled in English? Power management? I don't know...) is enabled. So I've adapted this patch to Mandriva.
Create the file /etc/pm/sleep.d/50-lock
with the following content:
#!/bin/sh
lockX() {
    for x in /tmp/.X11-unix/*; do
        displaynum=`echo $x | sed s#/tmp/.X11-unix/X##`
        user=`w -hs | awk '{ if ($3 == ":'$displaynum'" || $2 == ":'$displaynum'" ) { print $1; exit; } }'`
        export DISPLAY=":$displaynum"
        su $user -c "dbus-send --session --dest=org.freedesktop.ScreenSaver --type=method_call --print-reply /ScreenSaver org.freedesktop.ScreenSaver.Lock"
    done
}

case "$1" in
    hibernate|suspend)
        lockX
        ;;
    thaw|resume)
        ;;
    *)
        exit $NA
        ;;
esac
Make this file executable (chmod 755 /etc/pm/sleep.d/50-lock). That's it. Pretty simple, isn't it?
Update 2009-05-30:
Here is another version of this script (from Ubuntu Bug #283315 ).
In my current Symfony project I have a model called "Country" and a model called "Region". A Region always belongs to a Country, and this country will never change.
I've used Doctrine's admin generator to create the administration backend:
$ php symfony doctrine:generate-admin --plural="Countries" backend Country
$ php symfony doctrine:generate-admin backend Region
This command produces two modules, "apps/backend/modules/country/
" and "apps/backend/modules/region/
", and the following entries in apps/backend/config/routing.yml
:
region:
  class: sfDoctrineRouteCollection
  options:
    model: Region
    module: region
    prefix_path: region
    column: id
    with_wildcard_routes: true

country:
  class: sfDoctrineRouteCollection
  options:
    model: Country
    module: country
    prefix_path: country
    column: id
    with_wildcard_routes: true
The URLs will look like "http://www.example.com/backend.php/country/index" and "http://www.example.com/backend.php/region/index". But what I want is something like "http://www.example.com/backend.php/country/COUNTRY_ID/region/index", so that the Region view is always bound to a specific country.
You can achieve this with minimal effort. First, you have to modify the routing. Change the entry for region as shown below (the changed line is in bold):
apps/backend/config/routing.yml
:
region:
  class: sfDoctrineRouteCollection
  options:
    model: Region
    module: region
    prefix_path: country/:country_id/region
    column: id
    with_wildcard_routes: true
If you try to open "http://www.example.com/backend.php/country/1/region/index" now (assuming that '1' is a valid country id) you'll get an error like this:
500 | Internal Server Error | InvalidArgumentException
The "/country/:country_id/region/:action/action.:sf_format" route has some missing mandatory parameters (:country_id).
This is because the (automatically generated) region view tries to call 'url_for()' for actions like 'filter' or 'add', and there is a parameter called 'country_id' defined which is missing in the argument list. You can solve this problem by overriding the 'execute()' function in the action class.
apps/backend/modules/region/actions/actions.class.php
:
public function execute($sfRequest)
{
  $this->forward404Unless($country_id = $sfRequest->getUrlParameter('country_id'));
  $this->forward404Unless($this->country = Doctrine::getTable('Country')->find($country_id));
  $this->getContext()->getRouting()->setDefaultParameter('country_id', $country_id);
  if ($id = $sfRequest->getUrlParameter('id'))
  {
    $this->getContext()->getRouting()->setDefaultParameter('id', $id);
  }
  $result = parent::execute($sfRequest);

  // UPDATE: This is required for the 'new' action
  if (isset($this->form) && $this->form->getObject() && $this->form->getObject()->isNew())
  {
    $this->form->getObject()->country_id = $country_id;
  }
  return $result;
}
This will set the current country id as a default parameter for all calls to methods like 'link_to()' or 'url_for()', and abort if a valid id is missing.
Of course, you still have to modify the filters and forms to use the given country as the default, and add some extra actions to the Country view, but the trickiest part is done. Have a look at this article from Sven to learn how to modify the default filter, and read the Jobeet Tutorial, Chapter 12, to see how the default actions can be customized.
If you experience problems with 'new' and 'edit' forms (action="/backend.php/country/region"
in the <form>
tag), read bug report #6881. Update: This problem can be solved by setting the 'country_id' field of new objects (e.g. by overriding executeNew()
and executeCreate()
).
In my current Adobe Flex project, I needed to upload multiple generated files and form fields to a server. UploadPostHelper by Jonathan Marston is great for uploading one file at a time, but sadly I couldn't use (or even modify) it - it's licensed under a NonCommercial CC license, and my project is a commercial one.
So I had to write my own code, which I would like to share with others (dual-licensed under MPL 1.1 and LGPL 2.1).
Usage of the HttpPostData class: <Update April 1st 2010>I had an error in this text for half a year and nobody noticed? Strange...</Update>
var postData:HttpPostData = new HttpPostData();
postData.addParameter('foo1', 'bar1'); // POST field 'foo1' has value 'bar1'
postData.addParameter('foo2', 'bar2'); // POST field 'foo2' has value 'bar2'

// POST field 'uploadedFile1' contains someBinaryData1 (ByteArray)
// as 'application/octet-stream' with filename 'uploadedFile1'
postData.addFile('uploadedFile1', someBinaryData1);

// POST field 'uploadedFile2' contains someBinaryData2 (ByteArray)
// as 'image/png' with filename 'image.png'
postData.addFile('uploadedFile2', someBinaryData2, 'image.png', 'image/png');
postData.close();

var request:URLRequest = new URLRequest("http://www.example.org/someDestination/");
postData.bind(request);

var urlLoader:URLLoader = new URLLoader();
urlLoader.load(request);
Have fun!
vlcvob:
#!/bin/sh
# Check prerequisites (assume that 'rm', 'mkfifo' etc. are available)
echo "Checking prerequisites..."
if ! (which mktemp && which pv && which cvlc); then
    echo "Missing required programs. See output for info." >&2
    exit 1
fi
source="$1"
destination="$2"
fifo="$(mktemp)"
if [ "x$source" = "x" ] || [ "x$destination" = "x" ]; then
    echo "Read vob file from DVD using VLC." >&2
    echo "Usage: $0 <source> <destination>" >&2
    echo "Example:" >&2
    echo "  Store first title from device '/dev/dvd' to 'filename.vob':" >&2
    echo "  $0 dvdsimple:///dev/dvd@1 filename.vob" >&2
    exit 1
fi
if [ -e "$destination" ]; then
    echo "File '$destination' already exists. Please remove it first." >&2
    exit 1
fi
# Command from http://www.gentoo-wiki.info/HOWTO_Backup_a_DVD.
# Other commands that should work (untested):
# CMD="mplayer dvd://$TITLE -dvd-device $DEVICE -dumpstream -dumpfile '$fifo'"
# CMD="mplayer dvdnav://$TITLE -nocache -dvd-device $DEVICE -dumpstream -dumpfile '$fifo'"
CMD="cvlc '$source' --sout '#standard{access=file,mux=ps,dst=$fifo}' vlc://quit"
echo "Creating FIFO file '$fifo'..."
rm "$fifo" && mkfifo "$fifo" || exit 1
echo "Starting VLC: $CMD"
eval $CMD &
vlcpid="$!"
# Wait a second or so to let VLC do its work
sleep 5
pv -petrb "$fifo" > "$destination" &
pvpid="$!"
wait "$vlcpid"
kill "$pvpid"
rm "$fifo"
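The core of the script is a producer/FIFO/consumer pattern: VLC writes the stream into a named pipe while pv drains it into the destination file. A minimal sketch of that pattern, with printf and cat standing in for cvlc and pv (stand-ins chosen purely for illustration, not part of the original script):

```shell
#!/bin/sh
# Producer -> FIFO -> consumer, the same plumbing vlcvob uses.
# 'printf' plays cvlc's role (producer), 'cat' plays pv's role (consumer).
set -e
work=$(mktemp -d)
fifo="$work/fifo"
mkfifo "$fifo"

# Producer writes into the FIFO in the background (like 'eval $CMD &')
printf 'video data\n' > "$fifo" &
producer=$!

# Consumer drains the FIFO into the destination (like 'pv "$fifo" > "$destination"')
cat "$fifo" > "$work/destination.vob"

wait "$producer"
cat "$work/destination.vob"
```

The FIFO never touches the disk, which is why the approach works even when the stream is larger than the available temp space.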
Disclaimer:
Of course, copyright law must be respected! For self-created DVDs this is unproblematic. However, at least in Germany, copy protection must not be circumvented.
XStream is a nice Java library for serializing and deserializing objects. One of its advantages is that it does not require the deserialized class to have a default constructor. But sometimes this will be a problem. A simple real-life example:
public class Person {
    public transient final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name = "";

    public String getName() {
        return name;
    }

    public void setName(String name) {
        pcs.firePropertyChange("name", this.name, this.name = name);
    }
}
The PropertyChangeSupport
object should be marked transient
, otherwise serializing would include the whole object, including its listeners. But sadly, the following code won't work:
XStream xstream = new XStream();
Person p = new Person();
p.setName("Roland");
String serialized = xstream.toXML(p);
// ...
p = xstream.fromXML(serialized);
System.out.println(p.getName()); // prints "Roland"
p.setName("Cybso"); // Throws NullPointerException in Person.setName(String)
The call to p.setName("Cybso")
throws a NullPointerException
because pcs has not been initialized.
There are two standard ways to work around this problem. The first is to initialize XStream with a PureJavaReflectionProvider
instance:
XStream xstream = new XStream(new PureJavaReflectionProvider());
This would force XStream to create new objects using Class.newInstance()
- but it prevents you from (de)serializing classes without a default constructor. The other way is to implement a method called readResolve()
which will be called after the object has been created:
which will be called after the object has been created:
public class Person {
    public transient PropertyChangeSupport pcs;
    private String name;

    public Person() {
        readResolve();
    }

    public Object readResolve() {
        pcs = new PropertyChangeSupport(this);
        if (name == null) {
            name = "";
        }
        return this;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        pcs.firePropertyChange("name", this.name, this.name = name);
    }
}
This means abandoning the use of final transient
fields, and in this special case it enforces the implementation of a getPCS()
method or delegation methods. So let me suggest another solution: create a custom converter that feels responsible for all classes having a default constructor. This reduces the final transient
problem to classes without a default constructor.
public static class DefaultConstructorConverter extends ReflectionConverter {
    public DefaultConstructorConverter(Mapper mapper, ReflectionProvider reflectionProvider) {
        super(mapper, reflectionProvider);
    }

    @Override
    public boolean canConvert(Class clazz) {
        for (Constructor c : clazz.getConstructors()) {
            if (c.getParameterTypes().length == 0) {
                return true;
            }
        }
        return false;
    }

    @Override
    protected Object instantiateNewInstance(HierarchicalStreamReader reader, UnmarshallingContext context) {
        try {
            Class clazz = Class.forName(reader.getNodeName());
            return clazz.newInstance();
        } catch (Exception e) {
            throw new ConversionException("Could not create instance of class " + reader.getNodeName(), e);
        }
    }
}
Using this converter the original code will work:
XStream xstream = new XStream();
xstream.registerConverter(new DefaultConstructorConverter(xstream.getMapper(), xstream.getReflectionProvider()));
Person p = new Person();
p.setName("Roland");
String serialized = xstream.toXML(p);
//...
p = xstream.fromXML(serialized);
System.out.println(p.getName()); // prints "Roland"
p.setName("Cybso"); // No exception thrown!
System.out.println(p.getName()); // prints "Cybso"
Happy hacking ;)
Today I wanted to embed an SWT component (Browser) into an existing JFrame. This is the way it works:
Flexbackup is a very nice and flexible tool for creating full, incremental and differential backups. But if you store your backups in an untrusted environment, you might want to encrypt the created archive files. Flexbackup cannot handle this by default, but there is a very simple way to get the desired result by replacing the default gzip binary with a wrapper file.
In this example I'm using mcrypt with the symmetric block cipher DES. Replace it with gnupg if you want asymmetric encryption, but remember: if someone gains root access to read the key, he doesn't need to decrypt your backup files - he already has access to the originals.
Create a file named /usr/local/bin/gzip_crypt
:
#!/bin/sh
gzip $* | mcrypt -a des --keyfile "$HOME/mcrypt.key"
Another example that uses 256-Bit-AES-Encryption:
#!/bin/sh
gzip $* | ccencrypt --keyfile "$HOME/mcrypt.key"
Make this file executable:
$ chmod 0755 /usr/local/bin/gzip_crypt
Store an encryption key in $HOME/mcrypt.key
, e.g. /root/mcrypt.key
. I would suggest using at least 16 random characters for it; see the manpage of mcrypt for details. Ensure that the key isn't readable by anyone else:
$ chmod 0600 "$HOME/mcrypt.key"
Don't - DON'T, DON'T, DON'T - enter the key as command line argument to mcrypt as it would be visible in the process list for every user while mcrypt is running!
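To create the key file in the first place, one option is to pull random bytes from /dev/urandom. This is an illustrative sketch; any source of sufficient randomness works, and the path is simply the one the wrapper above assumes:

```shell
#!/bin/sh
# Generate a 32-character random key file for the gzip_crypt wrapper.
# /dev/urandom + base64 is one of several reasonable entropy sources.
set -e
keyfile="${HOME:-/tmp}/mcrypt.key"
umask 077
head -c 32 /dev/urandom | base64 | head -c 32 > "$keyfile"
chmod 0600 "$keyfile"
wc -c < "$keyfile"
```

Because the key only ever lives in a 0600-mode file and is passed via --keyfile, it never shows up in the process list.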
Now edit your flexbackup.conf
and change the following options to these values:
$compress = 'gzip';
$comp_log = 'bzip2'; # or just 'false', gzip_crypt isn't able to handle this
$path{'gzip'} = '/usr/local/bin/gzip_crypt';
That's it:
$ flexbackup -set home
...lot of stdout stuff here...
$ file home.0.201101141830.tar.gz
home.0.201101141830.tar.gz: mcrypt 2.5 encrypted data, algorithm: des, keysize: 8 bytes, mode: cbc,
Use mdecrypt --keyfile "$HOME/mcrypt.key" home.0.201101141830.tar.gz
to decrypt the file.
Ever tried to put a Mac Mini into a 19" rack? Here's how:
(click to enlarge)
Thx @ MP45: "three cheers for cable ties!"
JasperReports is a library which can be used to fill reports from Java applications or just create simple PDFs. It allows you to not only use static output strings but also Groovy expressions. Sadly, this is restricted to simple expressions that result in a value and don't generate multiple class files at compile time.
For example, you could use the following expression to print different values depending on whether your document has more or fewer than 10 pages:
$V{PAGE_COUNT} < 10 ? "foo" : "bar"
But when you have to loop over values, you'll face a problem, as you have no way to define your own methods. Even the usage of closures (Groovy) or anonymous inner classes (Java) is prohibited, as Jasper expects every expression to result in one single class file.
Of course you could extend the class path or use scriptlets for this, but that requires you to ship the compiled class together with the report library.
Using the power of Groovy, there is a way to work around this: compile your expression at runtime! The following code will calculate the factorial of the number of pages:
new GroovyClassLoader().parseClass('''
def static fac(x) {
def res = 1;
1.upto(x) {
res *= it
}
return res
}
''').fac($V{PAGE_COUNT})
The first line creates a new instance of the GroovyClassLoader, parses the code given in the multiline string expression, and then executes the static method "fac(x)" defined there.
To prove the power of this method the following code will embed a recursive listing of your home directory in your report:
new GroovyClassLoader().parseClass('''
def static list(path, prefix) {
String result = ""
path.listFiles().each() {
result += prefix + it.name + "\\n"
result += list(it, prefix + " ")
}
return result
}
''').list(new File(System.getProperty("user.home")), "")
Remember that you have to double-escape backslashes, as the first escape will be handled by the Jasper compiler.
Another possibility (and the reason why I researched this topic) is that this method allows you to generate images at runtime and use the full power of the JFreeChart library used by Jasper itself.
When transcoding videos to the x264 format (in an mkv container) with ffmpeg, I sometimes run into the following error message:
Application provided invalid, non monotonically increasing dts to muxer in stream 1: 6010976 >= 6010976
av_interleaved_write_frame(): Invalid argument
After that, ffmpeg aborts. By the way, "dts" here has nothing to do with the audio format "DTS"; nevertheless, the problem lies in the audio track, which apparently is not entirely standard-conformant, even though a video player can play it. There are various patches that are supposed to work around this problem, but in my experience they don't work in all cases, or they have other side effects (e.g. short dropouts).
A more promising approach is to encode the video and audio tracks separately (the latter I usually take unchanged from the source) and merge them afterwards. In theory this can lead to sync problems, but if both come from the same source file, it should work.
First, convert the video track:
$ ffmpeg -i quelle.vob \
  # Use all cores. If necessary, specify the number of CPU cores to use instead
  -threads 0 \
  # No audio (-an) and no subtitles (-sn)
  -an -sn \
  # personal preferences for the video encoding
  -vcodec libx264 -preset slow -tune film -level 41 -crf 23 \
  # target file in mkv format
  ziel-nosound.mkv
Then the audio tracks are added. In this example we want to take the first (English) and the third (German) audio track from the source, but the German one should be selected by the player by default:
$ mkvmerge \
  # target including the audio tracks
  -o ziel.mkv \
  # optional movie title
  --title "Title of the movie" \
  # take only the video track (-A) from the temporary target
  -A ziel-nosound.mkv \
  # no video track (-D), and audio tracks 1 and 3 from the source...
  -D -a 1,3 \
  # ...their languages, and the default track in the source...
  --language 1:eng --language 3:ger --default-track 3 \
  # video and audio tracks come from the same source; otherwise use 'file'
  --append-mode track \
  quelle.vob
Careful: the language and default-track settings refer to the track numbers in the source file. Accordingly, such parameters always apply to the next source file given on the command line.
Afterwards, you should check that the audio tracks are correctly assigned and in sync.
Apache's Include
directive does not accept wildcards, so something like this is not allowed:
Include /srv/www/vhosts/*/conf/vhost.conf
You can use mod_perl to achieve this. Additionally, the following example does a simple permission check to ensure that the included file has not been modified by an ordinary user:
<perl>
use File::stat;
foreach $file (glob '/srv/www/vhosts/*/conf/vhost.conf') {
    my $stat = stat($file);
    if ($stat->uid != 0 || $stat->gid != 0) {
        warn "$file is not owned by root:root, skipping!\n";
        next;
    }
    if ($stat->mode & 0002) {
        warn "$file is world-writable, skipping!\n";
        next;
    }
    push @Include, $file;
}
</perl>
This is a re-post of my mail to the suphp mailing list. Although the behaviour described here is a serious design issue, I've never got any feedback.
The attached patch is some kind of "proof of concept" to solve a security related problem I have with suPHP.Problem: Run script with file/directory owner threatens the user's files.
suPHP is intended to run a PHP script as a specific process owner. When configured in "owner" or "paranoid" mode this will always be the owner of the file, but in any case the script will be executed with the same owner as its parent directory (as long as it isn't owned by root).
This can be (and, I think, in most cases is) used in multihosting environments to prevent users from reading other users' files at the operating-system level.
But there is a big security problem with this solution. Not only can an exploitable PHP script modify itself, an attacker can also compromise the user's configuration files. E.g. a bad guy could place a key logger in the user's .profile file. Or, even simpler, fake a failed login to retrieve the user's password.
A solution could be to execute the PHP process with the user's group but a restricted "nobody" user. Sadly, suPHP won't allow a script to be executed if it and its parent directory are not owned by that nobody user. Not a very satisfying situation if you need the system administrator's help to modify your own site.
My suggestion is to distinguish between the user/group the script is executed as and the user/group the script has to be owned by. The attached patch (quick & dirty, Apache 2 only) adds an optional third parameter "processUser" to suPHP_UserGroup and a new environment variable SUPHP_PROCESS_USER, but maybe a new configuration directive named "suPHP_ScriptOwner" would be preferable. Ownership checks are done using targetUser; the permission change is done using processUser. If empty, processUser = targetUser.
This way you can even ensure that a PHP file created by suPHP itself (e.g. because of a file upload with a .php extension into a public directory, or an exploitable call to file_put_contents()) will never be executed - because it is owned by processUser where it should be owned by targetUser!
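With the patch applied, a vhost configuration could look like this. This is only a sketch under the assumptions above: the optional third suPHP_UserGroup parameter is what the patch introduces, and all user/group names and paths are hypothetical:

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /srv/www/vhosts/example.com/htdocs

    suPHP_Engine on
    # targetUser targetGroup processUser: scripts must be owned by
    # alice:webusers, but the PHP process itself runs as php-nobody,
    # so a compromised script cannot rewrite alice's files.
    suPHP_UserGroup alice webusers php-nobody
</VirtualHost>
```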
Create a branch production which contains all changes that should go onto the production server:
$ hg branch production && hg commit -m "Added new branch 'production'"
Clone repo to remote:
$ hg clone . ssh://user@host//path/to/repository
On the server, switch to the production branch:
$ hg update production
Now add the following hook to the .hg/hgrc on the server:
[hooks]
# If current branch is out of date, update and refresh
changegroup = (LANG=C hg summary | grep -q 'update: (current)' || (hg update && mvn clean compile install && systemctl )) >&2
This example uses Maven to rebuild the sources into the target directory whenever new changesets arrive on the production branch.
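The guard in that hook relies on hg summary printing "update: (current)" when the working copy is up to date; only then does grep -q succeed and the update/rebuild part after || is skipped. A plain-shell sketch of that logic (the sample summary lines are made up, no Mercurial required):

```shell
# The hook runs its update/rebuild part only when grep -q does NOT
# find 'update: (current)' in the output of 'hg summary'.
needs_update() {
    printf '%s\n' "$1" | grep -q 'update: (current)' && return 1
    return 0
}

for summary in "update: (current)" "update: 2 new changesets (update)"; do
    if needs_update "$summary"; then
        echo "working copy outdated: update and rebuild"
    else
        echo "working copy current: nothing to do"
    fi
done
```

The first sample line takes the "nothing to do" branch, the second triggers the rebuild branch, mirroring what the changegroup hook does on the server.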
The Tolino Shine apparently has a nasty bug when its battery is drained too far. The display then shows a sad smiley together with the text "Akku erschöpft. Bitte laden." ("Battery exhausted. Please charge."). But no matter what you do, it neither charges nor boots. Even the reset button doesn't help. In the relevant forums the only "tip" you find is to send the device in and have it replaced.
I almost did that yesterday, but by chance I found another way to bring the reader back to life. I have a USB charger with ports for both Apple and Android devices. Since I don't own any Apple devices I normally only use the Android port, but as a last desperate attempt I finally connected the reader to the Apple port as well. And: it charges!
As soon as the reader had enough power to be switched on, it could be charged again on any other USB port.
I conclude that the reader's hardware developers use Apple devices exclusively.
Maybe this post helps one or another Tolino Shine owner whose device shows the same error :-)
My Wii Sensor Bar is built of two arrays of IR LEDs that get their power from a proprietary port on the Wii / Wii U. Since I have a projector as a second screen with its own sensor bar, a Wii-to-HDMI adapter and a 4x2 HDMI switch, I don't want to unplug and reconnect the sensor bar cable anymore when I switch from one screen to the other.
My idea: connect one sensor bar to the USB port of the TV and the other one to the USB port of the projector, so each is only powered when the corresponding screen is in use. Another reason to power the bar from an external supply could be that you own both a Wii and a Wii U.
So I tried to find out what voltage and current the sensor bar requires, and since I did not find any reliable information I took a screwdriver and my multimeter and had a look at it myself.
Important: These pictures were taken of a cloned sensor bar from eBay, since I don't have a screwdriver for the tri-wing screws Nintendo used on the original one. The original sensor bar has five LEDs on each side, which means you would have to short-circuit two of them on each side when following this guide. Otherwise you would need a power source supplying at least 7.5V.
If you don't want to hack your original sensor bar and don't have a cloned one, you could also buy a cloned sensor bar that comes with a USB connector.
Measured at the connector on the Wii's backside, the multimeter shows a voltage of 12V. But measured under load directly at the sensor bar, the voltage drops to 5.5V. I think this is due to the relatively long cable combined with extremely thin wires, resulting in a relatively high resistance R0. However, the exact value doesn't matter since I will replace the cable anyway.
My sensor bar consists of two arrays of three LEDs, each side having its own R1=24Ω resistor. I'm not sure exactly which infrared LEDs are used, but some googling suggests that 940nm is a reasonable guess. Typical values for these LEDs are Vf=1.5V and If=50mA, so we have 50mA on each side and a total of 100mA (thanks to Daniel for correcting an earlier miscalculation of 300mA). The minimum required voltage is 3*1.5V=4.5V, which is lower than the 5V specified for USB[1] (if your sensor bar has more LEDs on each side you should bridge or remove all but three on each side). Combined with a 10Ω resistor R1 on each side this should be fine.
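The resistor value follows directly from those numbers. A quick sketch of the arithmetic (integer math in mV and mA, values taken from the text above):

```shell
# Series resistor per side: R = (Vsupply - n*Vf) / If
v_supply=5000   # USB supply: 5 V = 5000 mV
n_leds=3        # LEDs in series per side
v_f=1500        # forward voltage per LED: 1.5 V = 1500 mV
i_f=50          # forward current per side: 50 mA

v_drop=$(( n_leds * v_f ))           # total LED drop: 4500 mV
r=$(( (v_supply - v_drop) / i_f ))   # (5000 - 4500) / 50 = 10 ohms
i_total=$(( 2 * i_f ))               # both sides together: 100 mA

echo "R1 = ${r} ohms, total current = ${i_total} mA"
```

This yields the 10Ω resistor used below, and the 100mA total stays exactly within the USB low-power budget mentioned in footnote [1].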
The first step is to unsolder the existing cable. Use a desoldering pump to clean the old solder out of the holes on the board. Now take an existing USB cable (for example an old Micro-USB cable), cut off the Type-B plug and strip the outer insulation. You only need the wires for +5V and ground, which are typically colored red and black, so you can remove the others. Strip the inner insulation, tin the conductors and solder them to the pads on the board where the original cable was connected. The holes are labeled on the back side of the board with + (red) and - (black). The + conductor is connected to the first LED on both sides, the - conductor to the resistors. If you inadvertently swap the poles, this is not fatal, since diodes only let current pass in one direction - but the sensor bar will not work, of course.
Next, remove the original resistors and replace each of them with a 10Ω resistor.
Now take your multimeter and a camera (most digital cameras can see infrared light) and check whether the sensor bar works. To be on the safe side I suggest using an external power bank for this test.
If you see the IR LEDs on the camera screen, you can connect the sensor bar to your TV and test it. You may have to recalibrate the sensitivity of your Wiimotes, since the brightness might have changed, especially if you had to bridge some LEDs.
[1]: Please note that USB uses a low-power mode by default that does not allow a device to draw more than 100mA from the port. If a device requires more power (up to 500mA) it has to ask the USB controller for permission. Since this requires active components we don't have, please ensure that no other USB devices are connected to the same port, or use a dedicated USB power supply.