Topic: Accessing additional percussion banks in soundfont (Read 4828 times)

Accessing additional percussion banks in soundfont

I have an SB Live Value card and am experimenting with different soundfonts. It's really mind-boggling after having had an AWE64!

There's one thing I haven't been able to do with NWC, though. A lot of the big soundfonts have several different percussion banks (bank 0 may be rock drums, bank 1 may be jazz drums, etc.), and yet I can't access any but the first.

I've tried every combination I could think of in NWC's MIDI staff properties, but I only ever get the first percussion bank.

An example: my soundfont has 128 melodic instruments plus 5 percussion banks. Patch 0 is the first percussion bank; patch 1 is the Acoustic Grand Piano; patch 2 is the Bright Acoustic Piano; etc.

How do I access percussion banks 2-5? I've tried changing the patch bank with Controller 0, Controller 32, and both, and have tried combining those changes with different patch selections (which end up selecting either nothing or melodic instruments).
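For reference, here is a rough sketch (in Python, outside NWC) of the raw MIDI bytes a bank-select attempt like the one described actually sends; the bank and patch numbers are purely illustrative:

```python
# Minimal sketch of the raw MIDI bytes for a bank select plus program change.
# Channel 10 (percussion) is channel index 9 in the protocol (0-based).

def bank_and_program(channel, bank_msb, bank_lsb, program):
    """Return the byte sequence: CC0 (Bank MSB), CC32 (Bank LSB), Program Change."""
    status_cc = 0xB0 | channel          # Control Change status on this channel
    status_pc = 0xC0 | channel          # Program Change status on this channel
    return bytes([status_cc, 0, bank_msb,    # Controller 0  = Bank Select MSB
                  status_cc, 32, bank_lsb,   # Controller 32 = Bank Select LSB
                  status_pc, program])       # Program Change (0-based patch)

# e.g. attempt to select bank 1, patch 0 on the percussion channel:
msg = bank_and_program(9, 1, 0, 0)
print(msg.hex(' '))   # b9 00 01 b9 20 00 c9 00
```

Whether a given synth honours the bank-select pair on the percussion channel is exactly the problem being asked about here.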

MTIA

Re: Accessing additional percussion banks in soundfont

Reply #1
The alternate banks aren't sequentially numbered. Try this: in Staff Properties, go to the Instrument tab, set the Patch List Type to "Numeric, 1 Based" and select 41, 49 or 57 from the patch list. 41 is the "Brush" patch, 49 is "Orchestra" and 57 is "Sound Effects".
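Note that a "Numeric, 1 Based" patch list is shifted up by one relative to what goes out on the wire. A small sketch of that mapping, using the kit numbers from this reply (the kit names are as given above, not a general standard):

```python
# "Numeric, 1 Based" patch lists count from 1, but MIDI Program Change
# numbers count from 0, so list entry N sends program N - 1.

def one_based_to_program(patch_number):
    """Convert a 1-based patch-list entry to the 0-based MIDI program number."""
    return patch_number - 1

kits = {41: "Brush", 49: "Orchestra", 57: "Sound Effects"}
for patch, name in kits.items():
    print(f"patch {patch} -> Program Change {one_based_to_program(patch)} ({name})")
```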

There's a file at the Scriptorium that shows what sounds you should be hearing from each of these patches. Go to http://www.vadu.com/nwc/helpful.html#NWCFiles and download "drum.nwc". I've tried these patches with my AWE64 Gold and they work like a charm.

Good luck!

Re: Accessing additional percussion banks in soundfont

Reply #2
Another way around this ignores the General MIDI protocol (and therefore the .mid or .nwc file will not sound the same on another sequencer/PC), but it is very easy to do and handy for accenting drums properly: create an extra melodic preset for each kit. You can add this to the same bank in the soundfont (if there is an empty slot) or to another one; in Creative products this is done via Vienna. The benefit is that you can write the drum line on any number of staves, on any number of channels, accenting each struck drum/cymbal as required on a per-staff basis. This increases the realism of the playback significantly.

Using channel 10 for drums means you can send only one volume controller command at any given moment. For example, say you want a loud bass drum (kick) and a soft crash cymbal played simultaneously (written as a chord). This cannot be done on one staff, because controllers regulate playback en masse, not specific notes in a chord.

Let's say you write two staves, one for the kick and one for the cymbal, and use different accents or volume controllers on each staff to take effect at the same time point, but both staves play through channel 10. The synth will execute only one of the two commands, resulting in either loud or quiet playback of both staves (i.e. both cymbal and kick). This is true even when you use an accent on one staff and a volume change on the other: although they address different MIDI controllers, both send their message to the same channel, so the result is probably something like an average of the two commands (low volume with an fff accent gives low volume; high volume with a pp accent also gives low volume).

If, however, you assign different channels to the different staves, the playback will be as required, since the controllers are directed to different channels and do not interfere with each other. GM, though, will not play back drums on a channel other than 10; it plays piano instead. So you need to assign that staff a melodic preset that actually contains a drum kit, played on another channel. In this case the two sounds will play at their different volumes, since the controllers are directed to different channels.
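The channel-sharing problem described above can be illustrated with a toy model of a synth's per-channel Volume (CC7) state; the channel numbers are just the ones from the example:

```python
# Toy model of per-channel Volume (CC7) state, showing why two staves sharing
# channel 10 cannot hold different volume levels at the same instant: the
# second controller message simply overwrites the first.

channel_volume = {}

def control_change(channel, controller, value):
    if controller == 7:                 # Controller 7 = channel Volume
        channel_volume[channel] = value

# Both staves on channel 10 (index 9): the kick's loud setting is overwritten.
control_change(9, 7, 120)   # kick staff: loud
control_change(9, 7, 40)    # cymbal staff: soft -- wins for the whole channel
print(channel_volume[9])    # 40

# Separate channels keep their settings independent.
control_change(9, 7, 120)   # kick stays on channel 10
control_change(10, 7, 40)   # cymbal on channel 11 (a melodic preset holding a kit)
print(channel_volume[9], channel_volume[10])   # 120 40
```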

Needless to say, this is very helpful for variably accented chords on any instrument (different chord member notes played at different volumes), though I cannot think of an example offhand. Humanisation is greatest when this is done in piano chords (a simultaneous strike), as in reality each finger plays at a different volume (a different attack velocity). It may also humanise guitar chords better, since at low tempi beamed 64ths do not always resemble strums.

Just an idea.

S

Re: Accessing additional percussion banks in soundfont

Reply #3
"You want a loud base drum (kick) with a soft crash cymbal sound to be played simultaneously (written as a chord). This cannot be done on one staff, for controllers regulate playback en masse, not specific notes on a a chord."

Interesting - but I prefer the following method.

You can vary the note velocities since they are tied to each individual note.
The Volume controller should be used for track volume (mix balance), not accents.
Dynamics (on melodic parts) should be implemented with Expression controllers - that's what the Expression controller is designed for.
Velocity should be used for accents.

Create the various percussion instruments on separate staves by all means but use velocity for the accents.
In your example the kick drum can be assigned a high velocity and the crash cymbal a lower velocity.
The velocity is assigned to the note not the channel so the distinction is maintained even when the drum and cymbal share the same channel.
You can then use Channel 10 for all the percussion output.
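The per-note nature of velocity is visible in the raw Note On bytes; here is a sketch of the kick-plus-crash chord from the example (note numbers follow the standard GM percussion map, and the velocities are just illustrative):

```python
# Velocity lives inside each Note On message, so a loud kick and a soft crash
# can share channel 10. GM percussion map: 36 = Bass Drum 1, 49 = Crash Cymbal 1.

def note_on(channel, note, velocity):
    """Build a raw 3-byte Note On message."""
    return bytes([0x90 | channel, note, velocity])

chord = note_on(9, 36, 110) + note_on(9, 49, 45)   # loud kick + soft crash
print(chord.hex(' '))   # 99 24 6e 99 31 2d
```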

The same can be applied to piano chords or guitar chords, which can be further enhanced by sliding the chord notes slightly apart in time to humanize the attack.

Re: Accessing additional percussion banks in soundfont

Reply #4
I apologise, but I fail to understand this. Can you be a bit more clear? As I understand it, attack velocity marks how hard the keyboard is hit. In a soundfont, one can map samples based on attack velocity, right? Does attack velocity therefore correspond to mf, f, p, pp etc.? Or do those correspond to the Expression controller? Because I always thought the latter held true. Volume is the "hard" MIDI controller and Expression the "soft" one (or something), effecting much the same result, provided NWC does not include some obscure humanisation algorithm there. If mf, ff etc. correspond to the Expression controller, then how can one set the attack velocity whilst scripting? In real instruments, does the "attack velocity" correspond to mf, f, pp etc., or am I wrong? And finally, although I do use Volume for track mixing and expressions (mf, fff etc.) for colouring (accenting), what benefit shall I reap by using attack velocity?

S

Re: Accessing additional percussion banks in soundfont

Reply #5
In NWC, dynamics markings (p, mp, f, ff...) affect attack velocity (a note attribute) and not Expression (a channel attribute).

So, in MIDI it is technically possible to have several notes in a chord with different attack velocities. To implement this in NWC though you need to use separate staves (tracks) mapped to the same MIDI channel and prefix different dynamics markings to each note in the chord.

BTW, the overall volume of a sequence in MIDI (at least in Creative's Live implementation) is a combination of attack velocity, the Volume controller and the Expression controller.

Hope this clarifies.

 

Re: Accessing additional percussion banks in soundfont

Reply #6
Sterghios, I'll try to be a bit clearer.

I don't mess with soundfonts to achieve different attack velocities; I don't need to.

The velocity is contained in the Note On message (in fact, in NWC the Note Off message is also a Note On message with a velocity of zero), so each note has its velocity built in.
I usually write the basic drum parts into separate tracks so that I can get different velocities on snare, cymbals, kick drum etc. and vary these according to where the accents are applied.
All these notes are sent on Channel 10 which will respond to any dynamic controls on that channel.
So changes in Volume (Controller 7) or Expression (Controller 11) on channel 10 will increase the dynamic output on the Channel over and above the effects of the individual velocities attached to each note.

Understand that the Expression controller works like a percentage (in 128 steps) of the current channel Volume.
So you are right - the actual dynamic output of each note is something like Volume x Expression/127 x Velocity/127.
(Someone will correct my assumption here, I'm sure, but if any of these factors is zero there will be little dynamic output.)
A change in any of these factors changes the sound level (I'm trying hard not to use "Volume" as a descriptor of dynamic output).
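The formula given above can be sketched directly; this is only an approximation of how a synth combines the three factors, on the assumption stated in the post:

```python
# Sketch of the level formula above: the audible level of a note is roughly
# the product of channel Volume (CC7), Expression (CC11, acting as a
# percentage of Volume) and the note's attack velocity.

def output_level(volume, expression, velocity):
    """Approximate dynamic output on a 0-127 scale."""
    return volume * (expression / 127) * (velocity / 127)

print(round(output_level(100, 127, 127)))  # full expression and velocity -> 100
print(round(output_level(100, 64, 127)))   # roughly half expression -> 50
print(round(output_level(100, 127, 0)))    # zero velocity -> 0
```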

Just going back over what I said initially:

Volume (Controller 7) is Channel "Balance".
Conventional wisdom says to use it only at the head of a track, but I prefer to use it also when a track moves in and out of solo or background.

Expression (Controller 11) is "Dynamics" (swells/fades or 'dynamic variance' in NWC language).
The advantage here is that you can set swells/fades with Expression (remember this controller is a percentage of the Channel volume) and if you need to re-balance the track you need only change the Volume controller not every expression mark in the track.

Velocity I use exclusively for "accents" and small pushes in the melodic line to make the performance live.
I never use this as a dynamic device (to increase volume) only to provide accents.
And I prefer this to adding > over notes since I have more control over how heavy the accent is this way.

Note that you cannot achieve swells or fades on a held note with velocity because it only acts at Note On.
Any dynamic change while the note is held can only be done with Volume or Expression.
(And since Volume is "Balance" this leaves only Expression).
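A swell on a held note therefore comes down to sending a ramp of Expression (CC11) values while the note sounds. A minimal sketch of generating such a ramp (timing and actual message sending are left out):

```python
# A held note cannot swell via velocity (velocity acts only at Note On), but a
# ramp of Expression (CC11) messages can do it. This generates the CC values
# for a linear crescendo; scheduling them in time is up to the sequencer.

def expression_ramp(start, end, steps):
    """Evenly spaced CC11 values from start to end, inclusive."""
    if steps < 2:
        return [end]
    span = end - start
    return [start + round(span * i / (steps - 1)) for i in range(steps)]

# e.g. a crescendo spread over 8 controller sends:
print(expression_ramp(40, 127, 8))
```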

I have, for a long time, had troubles coming to grips with how dynamics are implemented in NWC. Now that we have Expression available as a dynamic controller I would hope that this becomes the basis for "dynamic variance" in the future.

This is all IMHO of course, but it's based on a lot of research and discussion with midi producers on the net - I don't think I'm out on my own in this area of midi implementation.
But you're free to ignore all this at your leisure!